NetApp manages to keep its cool
- 23 April, 2008 09:41
With its 7,000-square-foot datacenter nearing capacity, NetApp decided last year to squeeze more out of the space. By consolidating servers and replacing older hardware, the company created an energy-efficient, high-density facility with superior server utilization. That, in and of itself, is a worthy sustainable-tech project, but special kudos go to NetApp's unsung heroes who dealt with the upgraded datacenter's dirty little secret -- a whole lot of hot air.
NetApp, a Silicon Valley-based maker of storage and data management products, originally built its datacenter in 2004. In order to prevent servers from overheating, the basic design had racks arranged in hot-and-cold aisles for collecting and expelling hot air, as well as air handlers equipped with economizers.
"Our design in 2004 was considered somewhat leading edge," says Ralph Renne, site operations manager at NetApp, adding that wiring, cabling, and air-conditioning systems were placed near the ceiling instead of under a raised floor, since cold air falls and hot air rises.
The air-conditioning units, though, operated at an inefficient 8-degree Fahrenheit Delta T (the temperature difference between the supplied cool air and the returning hot air). That narrow 8-degree Delta T meant the overall room temperature had to be kept at a cool 55 degrees to remove the room's heat adequately.
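Why a narrow Delta T is so costly comes down to a standard HVAC rule of thumb: the sensible heat removed by air is roughly Q (BTU/hr) = 1.08 × CFM × Delta T. The sketch below, using a hypothetical 500 kW heat load (the article gives no load figure), shows how much more airflow an 8-degree Delta T demands than the 22-degree figure NetApp later achieved.

```python
# Illustrative sketch of the sensible-heat rule of thumb for air:
# Q (BTU/hr) ~= 1.08 * CFM * delta_T (deg F).
# The 500 kW heat load is a hypothetical figure, not from the article.

def required_airflow_cfm(load_btu_hr: float, delta_t_f: float) -> float:
    """Airflow needed to remove a sensible heat load at a given Delta T."""
    return load_btu_hr / (1.08 * delta_t_f)

load = 500 * 3412  # 500 kW expressed in BTU/hr (1 kW ~= 3412 BTU/hr)

cfm_old = required_airflow_cfm(load, 8)   # the original 8-degree Delta T
cfm_new = required_airflow_cfm(load, 22)  # the improved ~22-degree Delta T

print(f"Airflow at  8F Delta T: {cfm_old:,.0f} CFM")
print(f"Airflow at 22F Delta T: {cfm_new:,.0f} CFM")
print(f"Airflow reduction: {1 - cfm_new / cfm_old:.0%}")
```

For the same heat load, nearly tripling the Delta T cuts the required airflow by about two-thirds, which is why segregating hot and cold air pays off so directly.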
If the outside air was 55 degrees or below, economizers (which control the use of outside air) would kick in. If the outside air was slightly more than 55 degrees, then the economizers would blend this air with cool air generated from the air conditioners to achieve 55 degrees. If the outside air was 65 degrees or higher, then the cooling system would have to bypass outside air entirely and recirculate mechanically chilled air inside the datacenter -- the costliest and most energy-intensive option.
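The three-way decision above can be sketched as a simple mode selector. This is not NetApp's actual control code, just an illustration of the logic, with the original 55-degree setpoint and the article's 65-degree cutoff expressed as setpoint + 10.

```python
# Simplified sketch (not NetApp's control system) of the three
# economizer modes described in the article, using the original
# 55-degree F supply-air setpoint.

def economizer_mode(outside_f: float, setpoint_f: float = 55.0) -> str:
    """Pick a cooling mode based on the outside-air temperature."""
    if outside_f <= setpoint_f:
        return "free-cooling"   # outside air alone meets the setpoint
    elif outside_f < setpoint_f + 10:
        return "blend"          # mix outside air with mechanically chilled air
    else:
        return "recirculate"    # bypass outside air; the costliest mode

print(economizer_mode(50))  # free-cooling
print(economizer_mode(58))  # blend
print(economizer_mode(70))  # recirculate
```

Raising `setpoint_f` -- as NetApp later did, to 67 degrees -- widens the free-cooling and blend bands, so the expensive recirculate mode runs on fewer days of the year.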
A dense datacenter changes the cooling rules
Transforming the facility into a high-density datacenter threw the cooling setup for a loop. The remake consisted of consolidating 343 servers to 177 via virtualization and server retirement, and replacing 50 storage systems with 10 new ones. The footprint of the storage equipment, in particular, was reduced from nearly 25 racks to just over 5 racks. Fewer storage machines meant less demand for chilled water and electricity. Specifically, the higher-density setup saved 94 tons of chilled-water capacity and 494,208 kWh of electricity, worth US$60,000 annually.
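A quick pass over the consolidation figures puts those numbers in proportion. The implied electricity rate below is only an upper bound, since the US$60,000 figure also covers chilled-water savings.

```python
# Quick arithmetic on the consolidation figures quoted in the article.
servers_before, servers_after = 343, 177
storage_before, storage_after = 50, 10
kwh_saved, annual_savings_usd = 494_208, 60_000

print(f"Server count cut by {1 - servers_after / servers_before:.0%}")
print(f"Storage systems cut by {1 - storage_after / storage_before:.0%}")
# If all US$60,000 were attributed to electricity alone, the implied
# rate would be about 12 cents/kWh; in practice the chilled-water
# savings are part of that figure, so this is only an upper bound.
print(f"Implied blended rate: ${annual_savings_usd / kwh_saved:.3f}/kWh")
```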
The problem was, a higher-density design pushes out more heat in tighter spaces, so NetApp needed to get better at cooling. The trick lay in segregating hot and cold air, rather than merely arranging cold and hot air aisles. The latter allows too much mixing of hot and cold air, and thus supplied cold air quickly becomes returning hot air without having done much cooling work.
Renne and his team placed wireless temperature sensors on 388 server racks to gain a better understanding of hot and cold air spaces in the new high-density layout. They used vinyl curtains to enclose hot aisles so that hot and cold air couldn't mix as freely. They installed a cogeneration system that captures waste heat and turns it into electricity to power chillers.
No more sweaters
The new cooling design reduced hot and cold air mixing and increased the Delta T to nearly 22 degrees. With more efficient expulsion of hot air and better use of cold air, the overall datacenter no longer had to be kept at a chilly 55 degrees. The new room temperature target: 67 degrees.
Economizers could now use outside air in the 70-degree range. "This literally gets us through spring," Renne says. And during summer months, economizers can continue to blend outside air with cool air generated from air conditioners to achieve 67 degrees (although the datacenter will still need recirculated cool air during the hottest days).
By using economizers more often, NetApp not only keeps its new high-density datacenter cool but stands to save about 930,000 kWh annually -- 15 per cent less energy than the old datacenter used. This translates into US$105,000 in savings every year.
So how much did this new cooling design cost the company? Not much. Total capital expenditure, says Renne, was US$167,000 -- before subtracting a US$140,000 rebate from PG&E, the local utility, which offers incentives for this kind of project. "ROI was in three months," Renne says.
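The payback arithmetic checks out against the figures in the article: the rebate leaves a small net cost, and the annual savings recover it in about three months.

```python
# Payback arithmetic from the figures quoted in the article:
# US$167,000 capital cost, US$140,000 PG&E rebate, and roughly
# US$105,000 in annual energy savings.
capex = 167_000
rebate = 140_000
annual_savings = 105_000

net_cost = capex - rebate                  # out-of-pocket after the rebate
payback_months = net_cost / annual_savings * 12

print(f"Net cost: ${net_cost:,}")
print(f"Payback: {payback_months:.1f} months")
```

A US$27,000 net cost against US$105,000 a year in savings gives a payback of roughly 3.1 months, matching Renne's three-month ROI claim.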