Buying faster switches might not be the only way to amp up performance across data center networks, according to researchers at the University of California, San Diego, who this week proposed a network architecture that would enable commodity Ethernet switches to deliver better performance at a lower cost than their 10 Gigabit Ethernet counterparts.
Amin Vahdat, computer science professor at UC San Diego, presented research findings at SIGCOMM 2008 in Seattle that laid out how the principles behind clustered computing could be applied to network architecture to improve scalability and performance at reduced costs.
"Data centers are not being built on high-end components, but on the networking side we still rely on high-end, leading-edge technology," says Vahdat, one of three authors behind a paper titled "A Scalable, Commodity Data Center Network Architecture."
Instead of investing in specialty gear, such as 10 Gigabit Ethernet switches and routers, and using a standard three-tier architecture, Vahdat says companies could use commodity Ethernet switches in a "fat-tree" architecture at a much lower cost to achieve the same performance. As a rough illustration, he says a 20,000-node network built with pricey switches could cost as much as US$28 million; built with commodity gear, the same network would come in closer to $4 million.
"These are optimistic numbers, but taking advantage of the commodity side of things will be incredibly disruptive to computing and technology," Vahdat says. "There are examples with telephone switches and expertise with clusters of commodity PCs throughout history that show commodity components taking over an industry."
The findings come at a time when high-end data centers supporting tens of thousands of nodes are emerging and breaking the old model for networks. This is causing enterprise IT managers to look for options, given that standards around 40 Gigabit Ethernet are still several months away and those for 100 Gigabit Ethernet are years off.
"Ethernet technology is creeping into the last few bastions of proprietary/niche network technology in the enterprise, which exist mostly in the data center. Think Fibre Channel for storage, Infiniband or Myrinet for high-performance computing and clustering applications," says Phil Hochmuth, senior analyst at Yankee Group. "The low cost and flexibility of Ethernet are the drivers behind this trend, and UCSD's research is a good example of just how far this idea can go."
Vahdat says the price argument might not be compelling today as the difference is minimal, but when 40 Gigabit Ethernet becomes a reality "the price per port will be huge and the port density will be tiny."
The team of researchers began work on solving the problem of high-cost, poor-performing, high-end networks about a year ago. Talking with several companies revealed complaints about costly, complex data centers that lacked adequate bandwidth. The problem has been exacerbated in the past three years, Vahdat says, as more companies build out big data centers that don't deliver the performance promised and require substantial investments. While cost alone may not be the main driver for adopting the proposed architecture with today's technology, he says price will become an issue in the near term.
"Today is the least compelling time to be thinking in terms of price per port, but once you push 10 Gigabit Ethernet to the edge, the existing designs do not work," he explains. "If you want to push 10 Gig to the servers in a large cluster, you won't have any option."
Using a fat-tree topology, Vahdat and his fellow researchers designed a means of interconnecting Ethernet switches that would make all switching elements identical, which makes it possible to "leverage cheap commodity parts for all of the switches in the communication architecture." The fat-tree topology deviates from the three-tier core, aggregation and edge layout of traditional networks in that it doesn't rely upon aggregating to higher speed links, or specialized hardware, when moving up the tree. To enable the use of homogeneous gear, fat-tree topology networks rely upon protocols being able to take advantage of all the different paths available in a network, he explains.
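The arithmetic behind that design is worth spelling out. In the researchers' construction, a fat tree built entirely from identical k-port switches has k pods, each with k/2 edge and k/2 aggregation switches, plus (k/2)² core switches, and supports k³/4 hosts. The sketch below (an illustration of that sizing math, not code from the paper) shows how far ordinary 48-port commodity switches can scale:

```python
def fat_tree_capacity(k):
    """Size a k-ary fat tree built entirely from identical k-port switches.

    The topology has k pods; each pod contains k/2 edge switches and
    k/2 aggregation switches, and (k/2)**2 core switches interconnect
    the pods. Every edge switch serves k/2 hosts directly.
    """
    assert k % 2 == 0, "k must be even"
    edge = k * (k // 2)           # k pods, k/2 edge switches each
    aggregation = k * (k // 2)    # k pods, k/2 aggregation switches each
    core = (k // 2) ** 2
    hosts = (k ** 3) // 4         # k pods * (k/2)**2 hosts per pod
    return {"edge": edge, "aggregation": aggregation,
            "core": core, "hosts": hosts}

# With ordinary 48-port switches, the fabric reaches 27,648 hosts
# without any higher-speed links near the root of the tree.
print(fat_tree_capacity(48))
```

Because every layer uses the same k-port part, no link ever needs to be faster than a host-facing port, which is what lets commodity gear replace the 10 Gigabit core.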
"We believe you don't have to modify the chips, but that it could be done with software and with relatively small modifications," Vahdat says.
The researchers propose that using two-level routing tables and flow classification techniques, among other methods, would enable the switches to route traffic across the fat-tree topology without creating bandwidth bottlenecks. Yet Vahdat says his team still must resolve two questions. The first remaining issue is exactly how many modifications, with software or otherwise, network managers would need to make to enable the switches to route traffic in a way that takes advantage of the fat-tree topology. The second unresolved issue is cabling. Right now, networks using high-end 10 Gigabit Ethernet switches would have fewer cables connected into them than a fat-tree network, which would require 100 separate cables for 100 low-speed links.
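The two-level idea can be sketched simply: a first-level table matches on the destination's prefix for traffic staying within a pod, and unmatched traffic falls through to a second-level table that matches on the address suffix (the host ID), spreading inter-pod flows across the available uplinks. The toy lookup below illustrates the mechanism only; the addresses and table entries are invented for the example, not taken from the paper:

```python
def route(dst, prefix_table, suffix_table):
    """Two-level lookup: try prefix entries first, then fall through
    to suffix entries that spread traffic across multiple uplinks."""
    for prefix, port in prefix_table:   # level 1: intra-pod prefixes
        if dst.startswith(prefix):
            return port
    for suffix, port in suffix_table:   # level 2: spread by host ID
        if dst.endswith(suffix):
            return port
    raise LookupError(f"no route for {dst}")

# Hypothetical tables for an aggregation switch in pod 2.
prefix_table = [("10.2.0.", 0), ("10.2.1.", 1)]  # subnets in this pod
suffix_table = [(".2", 2), (".3", 3)]            # uplinks, keyed by host ID

print(route("10.2.0.5", prefix_table, suffix_table))  # intra-pod traffic
print(route("10.4.1.2", prefix_table, suffix_table))  # inter-pod, via uplink
```

Because two flows to different host IDs hash to different uplinks, traffic fans out across all the redundant paths the fat tree provides rather than converging on one link.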
"We are working to determine what the minimal set of modifications to existing switches to enable routing is and to address the cabling issues that come up with a fat-tree topology," he explains.