Looming on the horizon are the nimbus, cirrus, stratus and cumulus that threaten to deliver us cloud computing imminently. Promising an end to most of the challenges and frustrations of IT systems as we know them, the concept of cloud computing is thundering through the business community to become one of the most talked about and revered subjects of the day.
Behind the hype seems to be a reality that, for once, the IT industry may be onto something truly game-changing that will not only radically cut costs, but also deliver a far better experience to the business or consumer user.
The expectations are huge. Banking analysts say that cloud computing will be a $160 billion market within the next five years, and every major IT company, from Microsoft to Google and IBM to Dell, is desperate to be the rainmaker.
The question that comes to mind, though, is not “what” cloud computing is, but rather “why”. If it is such a great idea, why has it taken until now for the gurus of technology to deliver it?
The “what” question is just too easy: imagine a world where you could walk up to any computer, anywhere in the world, and instantly access all your data and applications just as you left them the last time you logged on – and somewhere, up in the clouds, a huge IT infrastructure is whirring and churning to deliver those services to you. Basically, think of the ease of getting electricity from a socket in your home that somehow connects to a generating station, and you start to get the idea.
Why has it taken so long? Go back far enough in time and you will find that IT professionals always assumed computing would be delivered from the cloud, and that the personal computer was nothing more than an aberration. Early mainframes were constructed to deliver IT services down wires to dumb terminals that could do no more than display text on a screen and send back the characters typed on a keyboard.
These mainframes could handle hundreds or even thousands of users, and if they had carried on evolving we would probably have had cloud computing in 1988, rather than 20 years later.
In fact, Thomas J. Watson, founder of IBM, is supposed to have remarked that “there is a world market for about five computers”. He didn’t mean that these newfangled devices would never catch on (as Lloyd George unfortunately said about TV), but that his vision was of a few massive number-crunching mainframes in the sky that could deliver their computational power to users remotely.
What was not understood, though, were the challenges that cloud computing would have to overcome. And this is where the answer to the “why now” question lies.
To deliver cloud computing requires five critical components: the scalability of the infrastructure to meet users’ needs; the resilience to accommodate the unexpected; the network to distribute the applications; the ability to deliver an acceptable experience to the user; and the economics to make it all work at a reasonable cost.
When it came to scalability, the reality was that you built or rebuilt your data centers once every five years to fit an estimated workload of users and traffic. “Dynamic” or “on demand” capacity existed only on paper. But something fundamental changed at the start of the 21st century, when server virtualisation suddenly arrived on the scene as a result of technology developed by VMware.
Where previously you had attached a given application to a server, only to see users slow to a deathly crawl during periods of peak usage, now you could vary the server capacity or resources available to a virtualised application and so scale it up or down according to demand. This was freedom for the CIO and MIS staff, as they could suddenly adapt their business to the needs of the user community. It wasn’t cloud computing yet, but it was perhaps the forerunner.
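To make the “on demand” idea concrete, the scaling decision at its heart can be sketched in a few lines. The example below is purely illustrative: the function name, capacity figures and limits are assumptions for the sake of the sketch, not any vendor’s actual API.

    import math

    def desired_servers(load_per_sec, capacity_per_server,
                        min_servers=1, max_servers=10):
        # How many virtual servers does the current load call for?
        needed = math.ceil(load_per_sec / capacity_per_server)
        # Clamp to the limits the operator has configured for the pool.
        return max(min_servers, min(max_servers, needed))

    # At 450 requests/sec, with each virtual server handling 100,
    # the application is scaled up to five servers...
    print(desired_servers(450, 100))  # 5
    # ...and back down to one when demand falls away overnight.
    print(desired_servers(60, 100))   # 1

The point is not the arithmetic but the principle: once an application is decoupled from physical hardware, capacity becomes a number you can change rather than a building you have to construct.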