With a student population of fewer than 4,000, Bryant University certainly isn't the largest institution of higher learning in the United States. It's regarded as one of the most wired schools in the world, however, boasting "one port per pillow" in the dorms.
Yet before 2007, the school's IT infrastructure was sorely lacking. It was running 75 servers at a woefully low average utilization of less than 10 per cent -- and the machines were spread across three rooms. On top of that, the school determined that those facilities lacked the space, power, and cooling capacity to meet the ever-increasing computing and communication demands of its staff and students.
"In the past 10 years, Bryant University has added 144,000 square feet of new facilities and has experienced a 300 per cent rise in applications," said CIO Arthur Gloster. "We were not properly set up to handle this growth, and it was time for Bryant University to make a cost-saving and energy-efficient change."
Cooling only what needs to be cooled
Bryant enlisted the services of IBM and APC to construct a datacenter built around Big Blue's Scalable Modular Data Center offering -- not unlike Sun's portable Project Blackbox. Built with standardized components, the design lends itself to quick, plug-and-play-like expansion.
Among its energy-saving features is an efficient cooling system that uses what APC dubs in-row cooling. Situated between equipment racks, the units push cold air directly in front of the racks, automatically adjusting cooling levels depending on feedback from a temperature sensor.
This approach is far more efficient than traditional perimeter cooling, where cold air is pumped throughout the entire datacenter, even in areas where it's not needed. "When they come into the datacenter, many people remark that it feels too warm -- they are accustomed to heavily air-conditioned datacenters," said Rich Siedzik, the school's director of computer and telecommunication services.
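The sensor-driven adjustment described above is essentially a feedback loop. As a rough illustration (this is not APC's firmware; the setpoint, gain, and output range here are assumptions), a proportional controller along these lines would raise fan output only when the rack-inlet temperature drifts above a target:

```python
# Illustrative sketch of in-row cooling's feedback principle.
# SETPOINT_C, GAIN, and the output range are assumed values,
# not APC specifications.

SETPOINT_C = 25.0               # target rack-inlet temperature (assumed)
GAIN = 20.0                     # % fan output per degree C above setpoint (assumed)
MIN_OUTPUT, MAX_OUTPUT = 10.0, 100.0

def fan_output(inlet_temp_c: float) -> float:
    """Return fan output (%) for a given inlet-temperature reading."""
    error = inlet_temp_c - SETPOINT_C
    output = MIN_OUTPUT + GAIN * max(error, 0.0)
    return min(output, MAX_OUTPUT)

# Cooling effort scales with demand at the rack, not across the whole room:
for temp in (23.0, 25.5, 28.0):
    print(f"{temp:.1f} C -> {fan_output(temp):.0f}% fan")
```

The point of the sketch is the contrast with perimeter cooling: below the setpoint the unit idles at a floor level rather than chilling the entire room, which is why the datacenter can feel warm to visitors while the racks stay within spec.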
Virtualized blades save space, power and cooling
Bryant also has fewer servers to cool now: The workloads from the school's 75 underutilized, heterogeneous servers were virtualized onto 40 energy-efficient blades, saving around 50 per cent on floor space. Sweetening the move: The school no longer needs to buy a new server each month to keep up with its growing needs. "Previously, [we would have been] buying and installing 12 new physical machines, then paying the additional power and cooling costs over their full lifecycle," Gloster said.
The new architecture has also given the Bryant IT team invaluable visibility into its energy consumption at a per-rack, per-device level, according to Siedzik. They're now installing agents on their messaging platform, which will gather energy metrics at the OS, application, and transaction levels. "We will be able to trend our usage patterns on these boxes along with the capability to cap power levels during underutilized periods. So our direction is one of visibility, control, and automation," Siedzik said.
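To make the "visibility, control, and automation" idea concrete, here is a minimal sketch of the trend-and-cap pattern Siedzik describes. The peak window, cap wattage, and headroom factor are invented for illustration; Bryant's actual tooling is not public:

```python
# Hypothetical sketch of trending power usage and capping it off-peak.
# PEAK_HOURS, OFF_PEAK_CAP_W, and the 20% headroom are assumed values.

from statistics import mean

PEAK_HOURS = range(8, 20)       # assumed busy window, 08:00-19:59
OFF_PEAK_CAP_W = 250            # assumed per-blade power cap in watts

def power_budget(hour: int, recent_watts: list[float]) -> float:
    """Pick a power budget: hard cap off-peak, headroom above trend at peak."""
    if hour not in PEAK_HOURS:
        return OFF_PEAK_CAP_W               # cap during underutilized periods
    return mean(recent_watts) * 1.2         # 20% headroom over the recent trend

# At 3 a.m. the blade is capped; at noon it gets room above its trend:
print(power_budget(3, [300, 310, 290]))
print(power_budget(12, [300, 310, 290]))
```

The design choice the sketch illustrates is that monitoring and control share the same data: the per-device metrics that make consumption visible are also what a cap policy consumes, which is what turns visibility into automation.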
All told, the benefits are numerous and pretty impressive: Gloster estimates the modular approach with the smarter cooling is "30 per cent more efficient in power and cooling terms alone" compared to a traditional datacenter infrastructure, "both reducing expenditure and underlining the university's commitment to environmental sustainability."