When the University of Illinois' National Center for Supercomputing Applications set out to build a machine with more than 200,000 server cores, the key wasn't simply shelling out cash for newer, faster silicon chips. The trick was harnessing the power of a substance that comes right out of your kitchen sink: water.
Using water to cool servers isn't a new idea, but it is gaining new converts at a time when fears of global warming and rising energy costs are making data center operators and server vendors search for ways to increase efficiency.
To Rob Pennington, deputy director of the NCSA, water cooling offers one huge advantage: power density. The NCSA's planned Blue Waters petascale computing machine will fit more than 200,000 cores in a space that's about twice the size of a current NCSA machine that has 9,600 cores, according to Pennington.
"Water cooling makes it possible," Pennington says. "If we had to do air cooling, we'd be limited by how much air can be blown up through the floor."
Blue Waters will be operational in 2011 and will likely use servers based on IBM's future Power7 chips.
Water cooling is inherently more efficient than air conditioning, Pennington says. That efficiency matters even more with today's multicore processors and multisocket motherboards. A decade or so ago, when a motherboard typically had a single socket, water cooling's advantage didn't mean as much as it does today, when you're often trying to cool four sockets on one board, he says.
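The physics behind Pennington's point can be sketched with textbook figures: per unit volume, water absorbs a few thousand times more heat than air. The back-of-the-envelope calculation below is illustrative only; the 10-kilowatt rack load and 10-degree coolant temperature rise are assumed round numbers, not NCSA figures.

```python
# Back-of-the-envelope comparison of water vs. air as a coolant,
# using textbook room-temperature values.
C_WATER = 4186.0    # specific heat of water, J/(kg*K)
C_AIR = 1005.0      # specific heat of air, J/(kg*K)
RHO_WATER = 1000.0  # density of water, kg/m^3
RHO_AIR = 1.2       # density of air, kg/m^3

def coolant_flow(q_watts, c, rho, delta_t):
    """Volumetric flow (m^3/s) needed to remove q_watts of heat
    while letting the coolant warm by delta_t kelvin (Q = m_dot * c * dT)."""
    mass_flow = q_watts / (c * delta_t)  # kg/s
    return mass_flow / rho               # m^3/s

# Remove an assumed 10 kW rack load with an assumed 10 K coolant rise:
water_flow = coolant_flow(10_000, C_WATER, RHO_WATER, 10)  # roughly 0.24 L/s
air_flow = coolant_flow(10_000, C_AIR, RHO_AIR, 10)        # roughly 0.83 m^3/s

# Ratio of volumetric heat capacities: how much more heat a liter of
# water carries than a liter of air for the same temperature rise.
ratio = (C_WATER * RHO_WATER) / (C_AIR * RHO_AIR)
print(f"water: {water_flow * 1000:.2f} L/s, air: {air_flow:.2f} m^3/s, "
      f"water advantage: ~{ratio:.0f}x per unit volume")
```

The thousands-fold gap is why a water loop can cool a rack density that would require an impractical volume of forced air through the floor.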
NEC, using Intel Pentium processors, began selling a water-cooled server at the end of 2005. IBM is just returning to water cooling for servers after abandoning the technique in 1995, when Big Blue shipped its last bipolar mainframe and moved to lower-power CMOS (complementary metal-oxide-semiconductor) technology, according to Ed Seminaro, chief system architect for IBM's Power Systems.
"We actually went from a product that used almost 200 kilowatts of power down to a product that could basically satisfy the same function with about 5,000 watts," Seminaro says. "That's why we didn't need water cooling anymore. There was far less power required and far less heat density."
Times have changed. Last month, IBM added what it calls a hydro-cluster water cooling system to its System p5 575 supercomputer. As the number of transistors on a chip increased over the past decade, IBM wasn't always able to keep power usage steady. So it turned to water cooling with an innovative design that brings water almost right up to the chip.
Why is water so efficient? Because heat from servers eventually gets transferred to water anyway, even in data centers cooled by big chiller-based air conditioning systems, says Jud Cooley, senior director of engineering for a Sun Microsystems water-cooled product known as the Modular Datacenter. With computer room air conditioning systems, cooling units sit near the racks; heat from the hot exhaust air is transferred into a liquid loop and pumped out of the building, Cooley says.