Decisions, decisions: Choices abound as data center architecture options expand
- 22 December, 2014 22:09
When the American Red Cross talks about mission-critical systems, it's referring to the blood supply that helps save lives. The non-profit organization manages 40% of the U.S.'s blood supply, so stability, reliability and tight security are of paramount concern, says DeWayne Bell, vice president of IT infrastructure and engineering.
With some 25,000 employees and volunteers at about 500 locations around the country, the Red Cross used to manage two data centers, but "we found out that more and more of the enterprise systems we deployed needed higher availability, 24/7 support, and redundancy and resiliency," says Bell.
"We didn't have the capability any more to manage the infrastructure," Bell says, and yet the organization did not feel comfortable moving to a public cloud environment because of security concerns.
Seeking a "more efficient data center," the Red Cross outsourced its data center operations to CSC in 2007 and moved to Unisys in 2012 because of lower costs, according to Bell. Because the organization didn't have a large IT staff, he says it didn't make sense to build an updated, consolidated data center that was highly secure and then try to manage it in-house. The five-year contract with Unisys is valued at over $80 million, and includes on-site services and help desk support, according to the vendor.
For apps that are not as critical to the business, such as email, the Red Cross is using cloud software including Microsoft Office 365. "If email goes down, our business functions won't stop. But for things that could stop the business, we keep those systems managed and hosted at Unisys," Bell explains.
The Red Cross is in good company. As organizations look to modernize their IT infrastructure, data center choices are expanding. Cloud, colocation, modular, outsourcing, virtualization and more efficient servers are all vying for your attention, and figuring out the right direction is fraught with challenges, to say the least. And even after you make a decision, you need to revisit it every now and again in light of shifting business needs and new technology choices, as Facebook did with its recent implementation of a more modular network in its Iowa data center.
More companies are opting to move away from traditional data centers with rows and racks of servers because there are a number of issues to contend with in a conventional data center model, including buying your own equipment, figuring out a floor plan, installing it, testing it and maintaining it, experts say. The number of data centers worldwide will peak at 8.6 million in 2017 and then begin to decline slowly, IDC predicts, although the amount of total data center space will continue to grow as mega-data centers replace smaller ones.
A majority of organizations will stop managing their own infrastructure and make use of on-premises, hosted managed services and shared cloud offerings in service providers' data centers, says Richard Villars, vice president of data center and cloud research at IDC. As a result, there will be a consolidation and retirement of existing internal data centers, while service providers continue to build, remodel and acquire data centers to meet the growing demand for capacity, Villars says.
Additionally, in the last five or six years the physical footprint of servers has been shrinking, especially with the rise of virtualization, and companies no longer need as many servers as they traditionally managed, explains Steven Hill, senior analyst of data center solutions at Current Analysis. "Data centers designed around physically larger environments all of a sudden have all this space available."
The data center architecture choices available today mean that companies can focus on providing a higher level of service to their end user community, refining processes and gaining efficiencies. But with choice comes the question of how to figure out what's right for your company. And these days, the question of whether to buy energy-saving servers and other gear is also part of the equation.
"Making decisions comes down, ultimately, to complex issues -- at what point do you redesign and start over or continue to work with the space you have and deal with the compression?'' asks Hill. With compressed hardware come challenges with cooling and power delivery. Server racks used to average around 5 kilowatts per rack, he notes, and now it's easy to go to 15 to 20 kilowatts in the same rack space with denser blade servers, for instance.
The rise of cloud computing is also changing the way companies are viewing their data centers because relying on public cloud services can mean less work for the in-house systems. Colocation for disaster recovery and backup is another factor. "A lot of decisions are made on compliance issues or corporate governance that says you need to have a second site," at least 1,000 miles away, to prevent data loss in the event of a disaster, says Hill.
Besides the obvious security and budget concerns, another factor to consider, as in the case of the Red Cross, is whether you have enough skilled personnel to run a data center in-house. And ongoing management of a data center can be some of the highest costs a company has, Hill says. "The cost to power it, cool it and manage it far exceeds the cost of a server itself over a period of time," he explains; other costs are for updating server hardware and software.
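Hill's point about operating costs can be illustrated with a back-of-the-envelope total-cost-of-ownership calculation. The sketch below is purely illustrative; every figure (purchase price, wattage, electricity rate, cooling overhead, admin cost) is a hypothetical placeholder, not a number from the article or any vendor.

```python
# Back-of-the-envelope TCO sketch for one on-premises server.
# All figures are hypothetical placeholders, not vendor quotes.

def server_tco(purchase_price, watts, years, cost_per_kwh=0.12,
               cooling_overhead=0.5, admin_cost_per_year=1500.0):
    """Estimate the total cost of owning a server over its lifetime.

    cooling_overhead: extra energy spent on cooling, as a fraction
    of the server's own draw (a PUE-like fudge factor).
    """
    kwh_per_year = watts / 1000.0 * 24 * 365
    power_cost = kwh_per_year * cost_per_kwh * (1 + cooling_overhead) * years
    admin_cost = admin_cost_per_year * years
    return purchase_price + power_cost + admin_cost

hardware = 5000.0  # hypothetical purchase price
total = server_tco(purchase_price=hardware, watts=400, years=5)
print(f"hardware: ${hardware:,.0f}, 5-year TCO: ${total:,.0f}")
```

Under these assumed numbers, power, cooling and administration over five years add up to more than double the hardware price, which is the pattern Hill describes.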
Location is another consideration. As Hill points out, energy is usually going to cost more in Manhattan than in rural Iowa, for example, and so will square footage.
All of this is leading more companies to turn to cloud or hosted resources, he says. That alleviates the need for maintaining the data center and doing hardware upgrades. "You don't have to deal with" capital expenses, Hill says. "The cost of physically managing that environment becomes the responsibility of your provider rather than being an expenditure of your company."
Dealing with rapid growth
Companies experiencing growth are also finding their data centers can't always stay as efficient as they need them to be. "IT and business are so tightly linked now that a company can only grow as fast as its IT infrastructure will allow," says Tony Iams, managing vice president at Gartner. This was a concern for Carfax, a provider of used vehicle history reports. It had two data centers and several vendors providing a variety of services. Meanwhile, the company has experienced an average of 15% to 18% growth annually, and space, power and cooling became problematic as IT condensed server racks, says Chris Thomas, network manager at Carfax.
Carfax opted for colocation and is now renting floor space at data center provider CyrusOne. The company maintains five data centers; two are for internal support and three are for customer support. The benefit is that Carfax can continue to grow and doesn't have to worry about scaling the heating and cooling and space in the main data centers, Thomas says.
If you build your data center in three different locations to ensure four or five nines of redundancy, "there's zero customer impact," says Thomas, because if there is a problem with one data center, the other two "take over and handle the load and manage the traffic."
The target for Carfax's three colocated data centers is for each to run at 33.3% of the load, for equal balancing of servers and storage, give or take a bit depending on a customer's location, he says, adding that IT systems will direct a user's request to whichever data center is responding fastest at that moment.
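The routing scheme Thomas describes — sending each request to the data center responding fastest at that moment — can be sketched roughly as follows. The data-center names and the simulated latency probe are invented for illustration; this is not Carfax's actual setup.

```python
# Illustrative latency-based routing across three data centers.
# Data-center names and probe behavior are hypothetical.
import random

DATA_CENTERS = ["dc-east", "dc-central", "dc-west"]

def probe_latency(dc):
    """Stand-in for a real health/latency probe (e.g., an HTTP ping).
    Here we just simulate a measurement in milliseconds."""
    return random.uniform(5, 50)

def pick_data_center(centers=DATA_CENTERS):
    """Route the request to whichever data center responds fastest."""
    measurements = {dc: probe_latency(dc) for dc in centers}
    return min(measurements, key=measurements.get)

print(pick_data_center())
```

In practice this decision is usually made by a global load balancer or latency-aware DNS rather than application code, but the selection logic is the same: measure, then route to the minimum.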
The virtual world
When companies want to avoid having to revamp their physical data centers, another option is virtualization. Purdue Pharma, a privately held pharmaceutical company, was experiencing "a significant amount of server sprawl" with its HP systems. The company was also using HP blade chassis, and each chassis was "an island unto itself" with different code levels and complex setups, says CTO Stephen Rayda.
Also, its disaster recovery plan relied on software that was going to be discontinued by the vendor and was protecting only the 20 most critical apps in the company, while the rest were backed up on tape, "and whether we could recover the full system was questionable," Rayda adds.
If Purdue Pharma expanded its data center, "we would have had to find a way to bring more power into the building," he explains, and "it would be a significant cost. So we were in a difficult spot, to say the least," he recalls. Some of the business concerns were the pace at which IT would be able to recover in the event of a disaster and the drive to lower infrastructure costs.
The company considered different options; among them, investing millions of dollars to build a new data center, or completely reinventing how it was doing things and eliminating tape backup and server sprawl.
"After doing the business cases and evaluations, it became clear the better path was to reinvent" the company's two data centers -- one at company headquarters and a disaster recovery site at one of its associated plants.
At the same time, Rayda was also concerned about the amount of time it would take to get new IT services to the company's 1,750 users in the U.S. "If everything is physical, obviously things take longer to procure and provision,'' he says.
Purdue Pharma had virtualized some of its servers already in a "cobbled-together solution that was very complex,'' Rayda says. IT decided to deploy VCE's Vblock to create a converged infrastructure and use VMware's vSphere for virtualization. After implementation, the number of servers running Oracle databases went from 49 down to eight. "The interesting thing is with the reduction to eight servers we were able to increase performance,'' he notes.
By virtualizing 95% of its applications, Purdue Pharma has seen a 75% reduction in the data center footprint at a savings of $9 million over five years in capital expenditures and $2.5 million in operational expenses for energy costs. There has also been a 50% reduction in capital IT expenses related to the hardware refresh, he says.
"So the net-net was better performance and higher availability with a fraction of the cost and management," Rayda says.
Current Analysis' Hill is a fan of the virtualization approach. "It's easy to manage that environment because the tools and resources have been time-tested," he says.
Deciding how to proceed
For companies trying to figure out their next move when it comes to data center considerations, Hill suggests starting by working backwards. You need to have a good understanding of your workloads and where the peaks and valleys are, as well as the ebbs and flows of your production environment, he says.
Once you've established your hardware requirements, you have to deal with power and cooling. Enterprises tend to have organically grown data centers with a mix of old and new technologies, Hill says. Among the challenges of building your own data center are the compression of hardware and a workload designed for four racks now fitting into one. "All of a sudden you have to deal with ... that one [high-capacity] rack rather than those four lower-capacity racks."
Another issue arises in an economic downturn or hard times for the company: IT typically gets its budget cut and has to stretch its refresh cycle from a standard three to five years out to five to seven years or longer, leaving the company sitting on antiquated hardware.
If you decide to move to the cloud or outsource to a managed services provider, require the vendor to refresh hardware every 36 months, advises Bob Mobach, director of data center solutions at Logicalis.
Companies undergoing significant growth may choose colocation if they find that their existing data centers are facing hard limits in terms of space, power or cooling capacity, either now or in the future, says Gartner's Iams. "With some colocation services, it may be possible to build assurances into contracts that guarantee the colocation service will be able to provide facilities to handle expected future growth. With such terms in the contract, companies can defer the risk of meeting the needs of future growth to the colocation service, rather than having to accurately project future data center capacity and then make the investments to build that capacity themselves."
In terms of trying to go green with your architecture, Mobach says that's something that "should always be on the forefront of everyone's mind ... EPA ratings with Energy Star labels are available for servers as well as data centers and will result in thousands of dollars of savings." However, green data centers require an extra investment and may not be suited to SMBs. "For small data centers, these investments oftentimes are disproportionate and do not result in direct savings," if the total wattage of the data center is well under 40 kilowatts, he says.
"Going green is not an inexpensive proposition; it's the right one, but it will definitely require 10 to 30% extra cost overall," Mobach says.
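Mobach's 10% to 30% premium can be weighed against expected energy savings with simple break-even arithmetic. The calculation below is a hypothetical sketch; the build cost, premium, energy bill and savings rate are invented for illustration.

```python
# Hypothetical break-even calculation for a 'green' build premium.
# All inputs are invented for illustration.

def breakeven_years(base_build_cost, green_premium_pct,
                    annual_energy_cost, energy_savings_pct):
    """Years for annual energy savings to repay the extra build cost."""
    extra_cost = base_build_cost * green_premium_pct
    annual_savings = annual_energy_cost * energy_savings_pct
    return extra_cost / annual_savings

years = breakeven_years(base_build_cost=2_000_000,   # hypothetical
                        green_premium_pct=0.20,      # 20% premium (hypothetical)
                        annual_energy_cost=150_000,  # hypothetical
                        energy_savings_pct=0.25)     # hypothetical
print(f"Break-even in about {years:.1f} years")
```

The smaller the facility's energy bill, the longer the payback stretches — which is consistent with Mobach's caution that for small data centers (well under 40 kilowatts) the investment often doesn't translate into direct savings.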
Pondering your next data center move is a major financial decision that, like anything else, requires careful evaluation and planning. If you opt for outsourcing and/or cloud integration, study vendor SLAs closely to determine remediation policies and the level of client attention, so that the experience is comparable to internal provisioning. For large enterprises, current asset investments, contracts and depreciation schedules should all be taken into consideration.
Ultimately, Hill believes, it is often easier to go with the "as-a-service" model than it is to build a data center yourself. "Now we're talking about an environment where so many tasks have become generic enough that it's easy to find resources that match your requirements."