Although virtualization is still used primarily as a means to consolidate servers, the technology's next big win could be in testing.
Testing software changes prior to release has long been common practice for QA labs and application development teams, and it is now becoming a higher priority for IT operations groups as well. But IT is inundated by a large and growing number of change requests, each varying in scope and complexity. Every change must be implemented as soon as possible in an increasingly complex network of systems, all of which are highly interdependent.
A recent survey by Research Edge of more than 400 IT operations professionals revealed many testing trends, most of them ugly: Testing environments are incomplete or nonexistent; IT cannot keep up with the rapid pace of changes; and changes to some of the most important multitiered applications, like those for e-commerce, are relatively untested.
Virtualization may be the answer. It provides:
1. Quicker test environment provisioning -- Current IT testing methods require a lot of manual and redundant effort. IT teams must build physical replicas of the production stack in order to effectively stage a representative testing environment. Because production systems are complex, representative staging environments are costly to build and hard to maintain. Also, undocumented changes occur in the production environment, causing the actual production configuration to drift away from the staged pre-production environment, resulting in incomplete change testing.
Virtualized testing environments are not built, but rather imported. The entire production environment can be saved as a virtual image and used for testing purposes, keeping manual and redundant preparation work to a minimum. The speed and flexibility of virtual imports keep costs low, both in terms of man-hours and in the bandwidth needed to operate such a project. Drift becomes a nonissue, as the environment is easily updated to represent what is running in production with each new import.
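One low-cost way to realize this import-based approach is with copy-on-write overlays, where the test VM's disk is a thin file backed by the exported production image. The sketch below uses qemu-img and libvirt's virt-install; the image paths, VM name and memory size are illustrative assumptions, not details from the article, and the script defaults to printing the commands rather than executing them.

```shell
#!/bin/sh
# Sketch only: stage a test environment from an exported production disk
# image using a qemu-img copy-on-write overlay. Paths and names below are
# hypothetical. DRY_RUN defaults to 1, so the script only prints commands.
set -eu

DRY_RUN=${DRY_RUN:-1}
PROD_IMAGE=${PROD_IMAGE:-/var/lib/images/prod-web.qcow2}     # exported production image
TEST_OVERLAY=${TEST_OVERLAY:-/var/lib/images/test-web.qcow2}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. "Import": create a thin copy-on-write overlay backed by the production
#    image. Writes go to the overlay; the production image is never touched.
run qemu-img create -f qcow2 -b "$PROD_IMAGE" -F qcow2 "$TEST_OVERLAY"

# 2. Boot the overlay as a disposable test VM (virt-install shown as one
#    option; other hypervisors have equivalent import commands).
run virt-install --name test-web --memory 2048 --import \
    --disk "$TEST_OVERLAY" --noautoconsole
```

Because writes land in the overlay file, each new import is just a fresh overlay against the latest production image, which is what makes configuration drift a nonissue.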
2. The end of "patch and pray" -- Commonly followed by organizations that cannot afford the time or money to build a testing environment by hand, "patch and pray" bypasses testing altogether: changes go straight into production with a prayer for success, putting the business at the greatest risk of downtime. Nearly one-third of IT departments lack even rudimentary testing technology, instead relying on luck to see their changes through.
The analyst group Enterprise Management Associates (EMA) found that a significant portion of production problems are caused by changes, including patches, upgrades and new service offerings. Organizations lacking a mature approach to change deployment ("patching and praying") could see up to 80 per cent of these problems seriously affecting production systems, ultimately ending in downtime and high IT costs.
Research Edge's report also showed the extent of the damage that "patch and pray" can cause, with 43 per cent of companies reporting that insufficient pre-production testing was a key reason for downtime. The same study showed that nearly 22 per cent of application changes led to significant delays or service outages, and 10 per cent of application changes were never made at all because of poor planning and a lack of testing.
The same study showed that 65 per cent of unscheduled downtime is due to changes made to the IT infrastructure stack. It is clear that changes must be tested prior to rollout or businesses will be forced to deal with downtime and degraded performance.
Applying virtualization to this problem gives all organizations, regardless of size or resources, a relatively inexpensive solution to IT change testing. No massive increase in hardware or personnel is needed: IT departments can make a virtual image of their operating environment, test their changes against it and analyze the repercussions during the course of the workday, without any threat to the production system.
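The image-test-analyze cycle described above can be sketched as a snapshot loop: checkpoint the imported test copy, apply the change, and roll back if it misbehaves. The domain name, patch command and snapshot label below are hypothetical, and DRY_RUN again defaults to 1 so the commands are printed rather than executed.

```shell
#!/bin/sh
# Sketch only: test a change against an imported virtual copy of production,
# with a rollback point, using libvirt snapshots. Names are hypothetical.
set -eu

DRY_RUN=${DRY_RUN:-1}
DOMAIN=${DOMAIN:-test-web}   # the imported test VM, not production

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Checkpoint the test VM before applying the change.
run virsh snapshot-create-as "$DOMAIN" pre-change

# Apply the change inside the test VM (hypothetical patch step).
run ssh root@"$DOMAIN" "yum update -y httpd"

# Analyze the results; if the change misbehaves, revert the test VM and
# try again. Production carries on unaffected throughout.
run virsh snapshot-revert "$DOMAIN" pre-change
```

Because the whole cycle runs against disposable virtual copies, the "pray" step disappears: a failed change costs a snapshot revert, not an outage.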