Servers get most of the glory when it comes to energy management, but networking gear is about to catch up.
Over the past year, network equipment vendors have begun to emphasize energy efficiency features, something that was never a top priority before, says Dale Cosgro, a product manager in Hewlett-Packard Co.'s ProCurve network products organization.
Networking infrastructure isn't in the same class as servers or storage in terms of overall power consumption -- there are far more servers than switches -- but networking can account for up to 15% of the total power budget. And unlike servers, which have sophisticated power management controls, networking equipment must always be on and ready to accept traffic.
Data center energy stats
How much power does data center gear consume?
- Cisco Catalyst 6500 series switch, fully populated: 2kW/rack to 3kW/rack
- Cisco Nexus 7000 series switch, fully configured: 13kW/rack
- Fully loaded rack of servers, average load: 4kW/rack
Also, networking power use at the rack level is significant. A Cisco Catalyst 6500 series switch consumes as much as 2kW to 3kW per 42U-high rack. Cisco Systems' largest enterprise-class switches, the Nexus 7000 series, can consume as much as 13kW per rack, according to Rob Aldrich, an architect in Cisco's advanced services group. A 13kW cabinet generates more heat than many server racks -- enough that it requires careful attention to cooling.
By way of comparison, most data centers top out at between 8kW and 10kW for server racks, says Rakesh Kumar, analyst at Gartner Inc. The average cabinet today consumes about 4kW, says Peter Gross, vice president and general manager of HP Critical Facilities Services.
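To put the up-to-15% networking share in perspective, here is a back-of-the-envelope calculation using the rack-level figures above. The data center size and the Catalyst midpoint are illustrative assumptions, not numbers from the article:

```python
# Estimate the networking share of a data center's power budget.
# The rack counts below are hypothetical; the per-rack figures come
# from the estimates quoted in the article.

server_racks = 100          # hypothetical data center size
avg_server_rack_kw = 4.0    # average loaded server rack
switch_rack_kw = 2.5        # midpoint for a fully populated Catalyst 6500 rack

server_power_kw = server_racks * avg_server_rack_kw

# If networking accounts for ~15% of the total power budget,
# total = servers / (1 - 0.15), and networking is the difference.
networking_share = 0.15
total_kw = server_power_kw / (1 - networking_share)
networking_kw = total_kw - server_power_kw

print(f"Server power: {server_power_kw:.0f} kW")
print(f"Implied networking power at 15% of total: {networking_kw:.0f} kW")
print(f"Equivalent switch racks: {networking_kw / switch_rack_kw:.0f}")
```

On these assumptions, 100 server racks imply roughly 70kW of networking gear, or about 28 fully populated switch racks.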
Vendors have already adopted some energy-related features, such as high-efficiency power supplies and variable-speed cooling fans. But with switches, there's a limit to what can be done in the area of power management today. Most idle switches still consume 40% to 60% of maximum operating power. Anything less than 40% compromises performance, says Aldrich. "Unless users want to accept latency, you have to have the power," he adds.
But huge improvements are coming, says Cosgro.
More efficient technology
Technology improvements that favor energy efficiency are gradually evolving in several areas. "As new generations of products hit the market, more of these kinds of features will be implemented," says Cosgro. Some examples:
- More modular application-specific integrated circuit (ASIC) designs that allow switches to turn off components not in use, from LED panel lights to tables in memory.
- General advances in silicon technology will minimize current leakage and gradually boost energy efficiency with each new generation of chips. Eventually, says Cosgro, "we should be able to get networking equipment that uses 100 watts today down to 10 watts."
- The development of more efficient software that consumes fewer CPU cycles -- and less energy.
- Equipment designs that run at higher operating temperatures to reduce cooling costs.
For example, Cosgro claims that HP's current ProCurve equipment can run safely at temperatures up to 130 degrees -- higher than the specifications for most other data center equipment. "That's driven by requirements of IT managers who want to run data centers at higher temperatures," he says.
Higher operating temperatures may work in a single-vendor wiring closet, but network equipment vendors will need to do a better job of testing in mixed environments before temperatures approaching 130 degrees can be sustained -- especially within racks in the data center. "No one knows how networking and other types of equipment will react when sitting next to servers that displace more BTUs," says Drue Reeves, an analyst at Burton Group.
Today, each vendor tests with only its own equipment.
Another focus: improvements in power monitoring and management, with more granular controls. Real-time power and temperature monitoring is key to any data center and is essential for managing growth. "If something is not right, you want to know about it before a catastrophe happens," says Rockwell Bonecutter, global lead of Accenture's green IT practice.
Management software could be configured to identify specific network equipment, such as voice-over-IP phones, by using the Link Layer Discovery Protocol. The software could then automatically shut off Power-over-Ethernet current for those VoIP handsets at a specific time of day or when the associated PC on each desktop is turned off at day's end.
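The policy described above can be sketched in a few lines. This is a hypothetical illustration: the neighbor data structure, port names, and cutoff time are invented for the example, and the real integration points -- reading LLDP neighbors and toggling PoE via SNMP or the switch CLI -- are stubbed out:

```python
from dataclasses import dataclass
from datetime import time

# Sketch of an after-hours PoE policy: use the LLDP-advertised device
# class to decide which ports feed VoIP handsets, then power those
# ports down after a cutoff time. Device I/O is intentionally stubbed.

@dataclass
class LldpNeighbor:
    port: str
    capability: str            # e.g. "telephone" for a VoIP handset

BUSINESS_CLOSE = time(19, 0)   # hypothetical end-of-day cutoff

def ports_to_power_down(neighbors, now):
    """Return PoE ports whose attached device is a VoIP phone, after hours."""
    if now < BUSINESS_CLOSE:
        return []
    return [n.port for n in neighbors if n.capability == "telephone"]

neighbors = [
    LldpNeighbor("Gi1/0/1", "telephone"),
    LldpNeighbor("Gi1/0/2", "station"),    # a desktop PC: leave its port alone
    LldpNeighbor("Gi1/0/3", "telephone"),
]
print(ports_to_power_down(neighbors, time(22, 0)))
# -> ['Gi1/0/1', 'Gi1/0/3']
```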
Another example: Edge switches are typically connected to two routers for redundancy during the daytime, but the network could be configured such that one router goes into a low-power sleep mode at night. The second router would "wake up" only if and when it was needed.
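The nighttime redundancy policy amounts to a small decision rule -- the standby router sleeps at night and wakes only if the primary fails. A minimal sketch, with the function name and states invented for illustration:

```python
# Hypothetical sketch of the standby-router policy described above.

def standby_power_state(is_night: bool, primary_up: bool) -> str:
    """Decide the redundant router's power state."""
    if not primary_up:
        return "awake"                     # failover always overrides power saving
    return "sleep" if is_night else "awake"

# Daytime: both routers stay up for redundancy.
print(standby_power_state(is_night=False, primary_up=True))   # -> awake
# Night, primary healthy: the standby can sleep.
print(standby_power_state(is_night=True, primary_up=True))    # -> sleep
# Night, primary fails: the standby wakes.
print(standby_power_state(is_night=True, primary_up=False))   # -> awake
```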
These types of applications represent "a huge opportunity for savings," says Cosgro.
Better specs and standards
Emerging standards could soon help save energy during periods when networks sit unused and will help IT compare the relative efficiency of competing products.
Energy Efficient Ethernet
The emerging IEEE P802.3az Energy Efficient Ethernet (EEE) draft standard may offer the biggest bang for the buck by cutting power consumption for network equipment when utilization is low. Today, Ethernet links signal continuously between devices, consuming power even when network traffic is at a standstill. Equipment supporting the EEE standard will instead send a brief refresh pulse periodically and stay quiet the rest of the time, cutting power use by more than 90% during idle periods.
In a large network, that's "a whole lot of energy" that could be saved, Cosgro says.
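A rough duty-cycle model shows how periodic refresh pulses get to savings of the size Cosgro describes. All four figures below are illustrative assumptions, not published PHY numbers:

```python
# Toy duty-cycle model of EEE low-power idle: instead of signaling
# continuously, the link sleeps and transmits a short refresh pulse
# once per interval. All figures are hypothetical.

idle_power_w = 0.5     # assumed PHY power while signaling idle
sleep_power_w = 0.02   # assumed power in low-power idle
refresh_ms = 0.2       # brief refresh transmission...
interval_ms = 20.0     # ...once per interval

duty = refresh_ms / interval_ms
eee_avg_w = duty * idle_power_w + (1 - duty) * sleep_power_w
savings = 1 - eee_avg_w / idle_power_w

print(f"Average idle power with EEE: {eee_avg_w * 1000:.0f} mW")
print(f"Idle-period savings: {savings:.0%}")
```

With these assumed numbers, the link transmits 1% of the time and idle power drops by about 95%, consistent with the better-than-90% figure in the draft standard's pitch.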
The standard will allow "downshifting" in other modes of operation as well. In a 10Gbit switch, for example, individual ports carrying only a 1Gbit/sec. load will be able to drop from full 10Gbit/sec. operation to the power level needed for a 1Gbit/sec. link, saving energy until activity picks up again.
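Downshifting pays off across a whole switch. A quick illustration with hypothetical round-number power draws (not vendor specs):

```python
# Illustrative savings when most ports of a 10Gbit switch downshift to
# 1Gbit/sec. operation. Per-mode power draws are assumed round numbers.

phy_power_w = {"10G": 4.0, "1G": 1.0}   # assumed port power in each mode

ports = 48          # hypothetical top-of-rack switch
downshifted = 40    # ports whose load fits within 1Gbit/sec.

full_w = ports * phy_power_w["10G"]
mixed_w = (ports - downshifted) * phy_power_w["10G"] + downshifted * phy_power_w["1G"]

print(f"All ports at 10G: {full_w:.0f} W")
print(f"With downshifting: {mixed_w:.0f} W ({1 - mixed_w / full_w:.1%} saved)")
```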
Products built to support the EEE standard should start appearing by 2011, says Aldrich.
Another emerging technology, the PCI-SIG's Multi-Root I/O Virtualization, provides servers within a rack with access to a shared pool of network interface cards. This happens via a high-speed PCI Express bus -- essentially extending the PCIe bus outside of the server. "Instead of a NIC in every server, you'll have access to a bank of NICs in a rack, and you can assign portions of the bandwidth of one of those NICs to a server," probably using tools provided by the server vendor, says Burton Group's Reeves.
Energy savings will come from increased utilization of the network -- achieved by splitting up the bandwidth in each "virtual NIC" -- and the need for fewer NICs and switch ports, he says. He expects to see standards-compliant products perhaps as early as 2012.
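The bandwidth-splitting idea can be sketched as a simple allocation problem: several servers draw virtual NICs from one shared physical NIC until its capacity is spent. The function, server names, and figures here are hypothetical; in practice the assignment would be done through the server vendor's tools, as Reeves notes:

```python
# Hypothetical sketch of carving one shared NIC's bandwidth into
# per-server virtual NICs, as MR-IOV enables at the rack level.

def assign_vnics(nic_capacity_gbps, requests):
    """Greedily grant per-server bandwidth requests from one shared NIC."""
    assigned, remaining = {}, nic_capacity_gbps
    for server, want in requests.items():
        if want <= remaining:
            assigned[server] = want
            remaining -= want
    return assigned, remaining

assigned, free = assign_vnics(
    10.0, {"web1": 2.0, "web2": 2.0, "db1": 4.0, "batch": 3.0}
)
print(assigned)   # "batch" (3.0) doesn't fit in the 2.0 Gbit/sec. left over
print(free)
```

A request that exceeds the remaining capacity is simply skipped here; a real resource manager would instead place it on the next NIC in the rack's pool, which is where the consolidation savings come from.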