TechWorld

Decision on world’s largest radio telescope imminent

An in-depth interview with CSIRO SKA director, Dr Brian Boyle

Next month, the Australasian SKA Consortium is likely to find out whether Australia and New Zealand will host the Square Kilometre Array — the world’s biggest radio telescope.

The SKA project involves the construction of a radio telescope 50 times larger than any other. It includes some 3000 antennae across about 5500 kilometres in Australia and New Zealand, each of which will send data to a central supercomputer that is about 100 times faster than Japan’s ‘K Computer’ — currently considered the most powerful computational behemoth around.

The countries are shortlisted, along with South Africa, to host the global facility and a decision on the project is imminent. Regardless of the decision, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) will continue with the construction of an associated project, the Australian Square Kilometre Array Pathfinder, a new radio telescope currently being built at the Murchison Radio-astronomy Observatory in the mid-west region of Western Australia. The team has already used a supercomputer at the new iVEC Pawsey Centre in Perth to simulate how data will be processed to create images of the ‘radio sky’.


CSIRO SKA director, Dr Brian Boyle, heads up this mammoth task. He was director of the Anglo-Australian Observatory from 1996 to 2003 and the CSIRO Australia Telescope National Facility from 2003 to 2009 and has published more than 300 papers in astronomy. He sat down with CIO Australia and Computerworld Australia to talk about the Square Kilometre Array and its implications, both in Australia and globally.

We’ve been following the Square Kilometre Array project with interest at Computerworld and CIO, but could you give us a brief background on your plans?

Our plan is for construction over the period beginning from the middle of this decade through into the next decade. It will be 50 times larger than any other radio telescope, with 3000 antennas spread across a continent, more than 5000 kilometres from Australia to New Zealand. All those antennas, and the data from all those antennas, will be brought together in a central supercomputer.

The data transport rates are effectively in excess of the world’s entire current global internet traffic, processed by a supercomputer that is about 100 times faster than the fastest supercomputer on the planet.

How can you plan for technology developments on that sort of timescale?

Well, IBM tell me that the exaflop computer is going to be commissioned in 2018 and it will be a Thursday!

As an aside, I was at a very important moment in societal history — it was actually just outside of Harvard when I watched Watson win Jeopardy. I was staying with a bunch of IBM people who were on a retreat and you know I've been at parties when Scotland won the grand slam in rugby, but this one ran it close.

It’s really all about leading edge in terms of driving where we’re going in computing and a lot of our partners — the IBMs, the HPs — are really interested in having an ‘out there’ problem to solve. And it’s not just a problem, of course, in processing; it’s the problem in data transmission, data storage, data access and the whole issue around turning data into information [then] into knowledge.

It was T.S. Eliot who said, “Where is the wisdom we have lost in knowledge, and where is the knowledge we have lost in information?”

It’s a wonderful quote and it was said back in the 1930s. So this whole knowledge data pipeline is incredibly important, not just for radio astronomy, but for everything.

Every sensor network on the planet is going to be taking information about climate, oceans, health and we have to be able to cope with that — drinking from the data fire hydrant.

But we have to power it all as well, so there is a big area of renewable energy for sustainable computing in the longer term. We’re looking at ways in which you can power the computing and handle the data. That will lead to different computing designs.

The SKA itself will also look back to the beginning of time; the finite speed of light means that as you look further out in the universe you’re looking further back in time, so we can trace the history of cosmic evolution with the SKA. We can look for very weird astrophysical objects that give off radio signatures, such as pulsars and quasars, really exotic physical objects where the gravitational field is 50,000 times that of the sun, and, fundamentally, test the theories of Einstein, Hawking and Penrose to their limit.

Where is the project at right now?

We’re on the pointy end at the moment. The international project is just about to go into the stage called ‘pre-construction’. All the designs to date have been done in a conceptual way, but we now have to get serious and deliver the drawings that industry can take away and build. That phase of the project will take four years and it will cost 90 million Euros. (I'm quoting all the numbers in Euros because we figured that was the most stable currency about five years ago. Who knew?)

Most of the money is on the table at the moment and I think that speaks volumes for the strength and the vision of the project — that we can still attract international investors and governments around the world to put money into a project of this nature.

At the same time, in February or March 2012, an international entity will make a decision on the site. Australia and New Zealand, together with South Africa and other African countries, have been shortlisted as the potential site of the Square Kilometre Array. The evaluation process is merit-based, so it’s a bit like a tender.

Who is funding the project, given the economic uncertainty in Europe?

The UK, the Netherlands, Canada, Germany, Italy, South Africa, Australia, New Zealand and China are the key countries at the moment who are involved.

Of course Italy’s financial position concerns us generally. But is it going to make a big impact on the overall project? Not as far as we can determine at the moment.

How much is Australia putting into that kitty?

Australia’s contribution is dependent on the outcome of the site. The host has to pay a premium for the telescope, so Australia will put in of the order of $5 million if it is not the site and $34 million if it is the site.

And how much will the New Zealand government put in?

New Zealand is looking at of the order of a few million — 10 per cent in terms of the GDP ratio, a commensurate sort of investment level. New Zealand has already invested in infrastructure at the core site of our SKA project in the Murchison region, including supercomputing for a telescope called the Murchison Widefield Array.

There are two major telescopes on the [Murchison] site. There is the Australian SKA Pathfinder, which is being built by the CSIRO: 36 dishes equipped with a very new form of radio camera that we’re building. It’s a mini SKA, if you like. There is also another telescope called the Murchison Widefield Array, which is a collaboration between Australia, India, the US and New Zealand, and is designed to look at lower frequency radio waves down in the FM band. It looks a bit like TV or radio antennas from the top, and at the back end, a supercomputer processes all the information. New Zealand has already invested in that particular project.

What happens if Australia and New Zealand are not chosen as the SKA project site?

Australia still intends to play a role in the project and we’ll still contribute to the international project at the level we’ve indicated. We will continue to operate the Australian SKA Pathfinder because, when it’s commissioned in 2013, it will be the world’s fastest survey telescope, and it will be able to do surveys of the local universe that are unprecedented in scale and scope.

So, it will still be a valuable resource, and then there are the investments in things like the Pawsey High Performance Computing Centre for SKA Science — a petaflop supercomputer that will be used to process the data from ASKAP and indeed the Murchison Widefield Array. We’ll continue on with delivering that radio astronomy facility to the world and continue to play a role, although it’s unlikely to be as significant a role in the international SKA project as it would have been had Australia been awarded the site.

How does the Australian bid stack up against South Africa?

I can’t make comparative comments. I can say that we’re very supportive of this merit-based approach towards getting to a recommendation on the SKA site.

I can talk about the important aspects, but that doesn’t mean they’re not just as strong in the African case, so I’ll preface it with that. What are the important aspects in the Australian bid? Well, it’s the radio quietness at the Murchison Radio-astronomy Observatory site. We’re right in the middle of the shire of Murchison, which is a shire of 50,000 square kilometres. That’s 25 per cent larger than the Netherlands, yet it has a population of 110. That’s two nanopeople per square metre.

It’s a fairly isolated spot, but nevertheless has a significant amount of infrastructure already in place; it has a fibre optic connection from that site all the way to the Perth supercomputing centre for SKA science, and Australia and New Zealand have a very extensive broadband research network. In Australia, it’s supported by AARNet… and in New Zealand it’s REANNZ.

So, we have a broadband network that’s already capable of delivering gigabit per second speeds at affordable rates and we would imagine that this network would grow in capacity — we’re planning for it to grow in research capacity.

We’ll get to 40 gigabits per second for ASKAP by the end of next year and then grow it … In the end, we need hundreds of terabits per second. Is that possible in the timescale that we’re looking at for the SKA? Yes, it is.

We also have the ability to protect our ‘radio quietness’. We not only have legislation in place, through the Australian Communications and Media Authority and the Radiocommunications Act… but also through implementation, so that we can work with other stakeholders in the region. There may only be two nanopeople, but we also have other industry — not locally — in the form of mining and pastoral activity.

There are no mines within 70 kilometres of the site, but it’s very important that we work with the communities so that we can absolutely protect the radio quietness of the telescope and still allow other stakeholders to conduct appropriate business. In every case, we have found a technical solution to any problems with radio frequency interference that might occur.

Is geological stability an issue at all?

It is an issue, although of course the whole of Australia is very geologically stable. You’re going to ask me about New Zealand, aren’t you? The one potential site is near Invercargill, which is the least geologically active part of New Zealand.

There are a host of requirements we have to cover off in responding to the requests for information. They also include the physical characteristics — the rock and soil type, the ability to anchor concrete footings, the number of lightning [strikes] and thunderstorms, rain events, dust — all the things you would expect and need to know if you were going to be rolling out a big telescope. You also want to make sure the site is as flat as possible so you don’t have a huge overhead in building antennae.

There is also, of course, the cost of data communications, data transfer and that sort of thing [as well as the] cost of roads, buildings and power for the telescope. Then there are the classic socioeconomic factors: What are the working conditions, the security environment and the financial environment?


You touched on data storage and processing. Just how big are the processing and storage challenges?

They’re phenomenal. We’re talking about exabytes of raw data coming off the telescopes. Now, there will be an element of data reduction at all stages: at each telescope you will reduce the data a little bit, and each of the telescopes will feed a central processing point where we will reduce the data further — at least in the core and then all the various satellite stations around it — and then all that data across Australia and New Zealand will go to the central processor, the exaflop machine, which actually forms the images from the telescope.

Are you considering discarding much of the data?

That’s one of the possible ways of doing it. There is a real balancing act between how much you can process, how much you can store and how much you [can] transport. And, to some level, it depends where you are in all those technologies. How expensive are exabyte storage solutions in 2020? How expensive is the exaflop computer? What is the power requirement for each of those? At the moment, we are looking to actually lose a lot of data — up to 99 per cent — but even losing 99 per cent of the data you’re still talking about 10 petabytes minimum a day of data to be stored, which is a remarkable amount.
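The arithmetic behind that figure can be sketched in a few lines. The one-exabyte-a-day raw rate below is an assumed round number for illustration, not an official SKA specification:

```python
# Back-of-envelope sketch with an assumed raw rate of 1 exabyte a day.
EXABYTE = 10**18   # bytes
PETABYTE = 10**15  # bytes

raw_per_day = 1 * EXABYTE   # assumed daily raw output of the telescopes
kept_fraction = 0.01        # discard 99 per cent, keep 1 per cent

stored_per_day = raw_per_day * kept_fraction
print(stored_per_day / PETABYTE)  # -> 10.0 (petabytes a day)
```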

How do you work out what is redundant and what is useful?

That’s part of the process — to be able to take all that data. A lot of it is what we would call visibilities in Fourier space. Radio telescopes do things in quite a funny way, because when you’re looking at the sky with a radio telescope it’s not like an optical telescope where you’ve got one big mirror. In a radio telescope, you’ve got lots of different mirrors, and the image actually forms by allowing the sky to rotate above you, so you fill in the gaps between all the antennae. So, you have to process it in what we call Fourier space. It’s a geometrical transformation of normal X,Y space.
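The idea of forming an image from Fourier-space samples can be illustrated with a toy NumPy sketch. This is a deliberate simplification, not the SKA pipeline: a random sampling mask stands in for the coverage that the real telescope builds up as the sky rotates overhead.

```python
import numpy as np

n = 64
sky = np.zeros((n, n))
sky[20, 30] = 1.0  # a single point source in the "true" sky

# An interferometer measures samples of the sky's Fourier transform
# ("visibilities") rather than the image itself.
visibilities = np.fft.fft2(sky)

# Earth rotation fills in more of Fourier space over time; here we fake
# partial coverage with a random mask over the (u, v) plane.
rng = np.random.default_rng(0)
mask = rng.random((n, n)) < 0.3  # ~30 per cent of Fourier space sampled

# Inverse transform of the sampled visibilities gives a "dirty" image:
# the sky convolved with the telescope's sampling pattern.
dirty_image = np.fft.ifft2(np.where(mask, visibilities, 0)).real
peak = np.unravel_index(np.argmax(dirty_image), dirty_image.shape)
print(peak)  # the brightest pixel recovers the source position (20, 30)
```

Even with most of Fourier space missing, the point source still dominates the reconstructed map, which is why partial coverage plus clever processing is workable at all.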

What are we going to lose? Are we going to lose real data and real information? We would hope that most of the stuff we lose is essentially blank sky or blank information. We cannot rule out, of course, something happening. You might throw away data that, averaged over a certain time period, was zero, when in fact something had happened within it: a blip like a supernova, a big exploding star going off, that you didn’t pick up.

So, what we will try to do is keep as much of the data for as long as possible, so that if we realised, perhaps from another telescope, that something had happened that we hadn’t seen, we could recover it from the data and resurrect it later on. But the actual volumes of data are so huge [that] our ability to store every single last piece of information from this constant data stream is going to be limited to about 30 seconds at the moment.
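A rough illustration of why the full-rate buffer is so short, again assuming a round figure of one exabyte of raw data a day (not an official number):

```python
# How much storage does a 30-second buffer of the full stream need?
EXABYTE = 10**18       # bytes
TERABYTE = 10**12      # bytes
SECONDS_PER_DAY = 86_400

rate = EXABYTE / SECONDS_PER_DAY  # assumed bytes per second
buffer_bytes = rate * 30          # a 30-second rolling buffer

print(round(buffer_bytes / TERABYTE))  # -> 347 (terabytes for 30 s)
```

Hundreds of terabytes for half a minute of raw signal: keeping everything for even an hour would be a different class of storage problem entirely.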

It is like drinking from a data fire hydrant. The real challenge is around the processor to cope with this phenomenal data flow.

With that kind of processing, how would power and cooling work? Can you use solar energy?

The central site for the full SKA is going to need 60 megawatts.

Now, you have a few options: You could build out the grid, or you could have your own renewable source of energy or your own diesel energy. But powering a telescope that needs 60 megawatts with a diesel generator is going to be pretty dirty, not to mention very expensive. So certainly, the international project team are looking at ways in which we can increase the renewable penetration.

What’s it going to be? This is a telescope that is operating 24/7, and solar power is not 24/7. So you have to think of efficient ways of providing base-load power. It could be geothermal. It could be wind. Or you have significant battery storage or biodiesel backup, or you could look at things like the provision of gas-fired power stations.

Lots of the work that we’re doing at the moment is looking into the best possible energy supply for this telescope. And the cost trajectories are interesting, because you’re building a telescope in 2020. What is going to be a better solution? Where are the costs? In the same way as we’re looking at processing versus storage versus data transport, we’re looking at energy generation versus storage versus distribution. They all have different cost curves in the way that they’re growing.

Right now, I don’t know the answer because these things are quite variable. We’re working on it — the CSIRO has its own renewable energy teams looking at this. We’re also working with people like the Fraunhofer Institute for renewable energy systems in Germany and with places like the Boeing company, all of whom are looking at this big challenge of how to power very large pieces of infrastructure.

There is also generation, storage and the demand side, so you’re also looking at how to make computers much more efficient. If you scaled up today’s current high-end computers, we wouldn’t just require 60 megawatts — it would be gigawatts.
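That gigawatt figure follows from naive scaling. The K Computer numbers below are approximate public figures from the period, used here only for illustration:

```python
# Naive power scaling: hold efficiency fixed and scale flops by 100x.
k_flops = 10e15      # K Computer: roughly 10 petaflops (approximate)
k_power_mw = 12.6    # roughly 12.6 megawatts (approximate)

exa_flops = 1e18     # target exaflop machine
naive_power_mw = k_power_mw * (exa_flops / k_flops)
print(naive_power_mw)  # about 1260 MW, i.e. gigawatt scale
```

Hence the emphasis on efficiency: without orders-of-magnitude improvement in flops per watt, the computing alone would dwarf the telescope’s entire 60-megawatt budget.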

Those are big challenges: Does Moore’s Law keep on going or are we going to be stymied, not by computer architecture, but by our ability to power the damn things. And cooling becomes an increasingly large factor.

Did you have to factor in the Carbon Tax when you planned your submission?

We factored in the cost of power, based on the advice of the consultant engineers involved in the overall project. They would have made some allowance for the appropriate carbon tax.

Are you considering distributed computing options? You have the SkyNet, which runs on fixed datasets. Could you open it up globally and ask people to donate processing power and storage?

The trouble is that the torrent is so great that the process of farming it out to individual computers and then getting it back would give us an even greater headache than having the computer in one place. The distribution of data would be the real challenge. At the moment, given the nature of the data stream, it’s actually quite difficult to do. If we could parse it out more discretely and more effectively then it’s a possibility, but our current preferred option is a single data stream.

[Having said that, we are also looking at] the provision of broadband data links around the world, because people will want to have real-time access to these reduced datasets that we provide from the central supercomputer, and we’re confident that we will be able to provide people with the appropriate level of access.

One of the biggest headaches for project managers is delivering value when the outcomes are unknown. How does that work in the context of the SKA?

We need to have very clear requirements for the SKA, but equally, I don’t know the universe well enough to say what those technical requirements, when they’re delivered, will actually discover. If we knew what we were looking for there would be no point in building the telescope. Astronomy is, fundamentally, an observational science, so you’re only constructing the next biggest thing because it should push out the boundaries of where you were.

Columbus knew how to build his boat. He didn’t know what he was going to discover; he just knew that it would give him the vehicle to discover it. When he went to Queen Isabella he knew exactly how to provision himself for the journey, what it would cost, how he had to outfit the boat and who he needed on the boat. That’s a bit like the SKA.

We have to be clear about the technical requirements. We have to be clear about who is going to build it and we have to be able to resource that. Once we have all that, we deliver to the requirements — and then we embark on our voyage of discovery.

So what has been one of the biggest technical challenges of the Australian SKA Pathfinder (ASKAP) program to date?

In the past, radio telescopes operated by having a single-pixel camera at the focus and combining all the information together to generate an image. Now, there have been some experiments to increase the number of pixels that you use, so you increase the amount of sky you look at simultaneously. It’s kind of like a panoramic photograph with more megapixels or better resolution on your camera, but those sorts of 10-pixel cameras have had patchy coverage rather than contiguous pixels.

So at ASKAP, we decided to build a 100-pixel camera that is contiguous. It’s like a very, very big CCD in the back of your camera.

The trouble is that radio waves are much bigger than optical waves. Correspondingly, your cameras are the size of a 44-gallon oil drum! Also, because of the electromagnetics of radio waves, they interfere with each other and give you horrible problems at the boundaries. That is why you don’t normally have contiguous detectors.

However, we managed to attract a chap back to the CSIRO called John O’Sullivan, who was the leader of the team that did wireless, that did 802.11. He thought something like this was one of the biggest engineering challenges you could face, and we recently released on our website the first images of a radio galaxy taken with the ASKAP radio camera. We’ve cracked the problem of delivering a radio camera that is sensitive enough to do radio astronomy.

From a project management perspective, this is our biggest risk and so I'm really delighted. Scientists dream dreams and engineers make them happen and I've never lost money yet by banking on a good engineer.

Tim Lohman contributed to this interview
