I came across an article in InfoWorld about a survey that TheInfoPro conducted among Fortune 1000 firms regarding their use of public cloud storage offerings. The bottom line: they haven't, they aren't, and they won't. 87 per cent of respondents stated they have no plans to use public storage-as-a-service, while only 10 per cent said they will. Clearly, the survey indicates this market segment has no use for cloud storage.
But then I thought about the Cedexis survey I wrote about a few weeks ago. Cedexis evaluated a number of enterprise applications and found that, far from avoiding use of cloud computing, 35% of those applications touched the eastern region of Amazon Web Services sometime during their operation. Speculation at the presentation was that many of these applications included external services that accessed storage located in Amazon's S3 service.
How can one reconcile such different outcomes from these surveys of a fairly similar user base? Of course, there are the obvious possible reasons:
1. Maybe they aren't that similar. While they both describe their survey sample as enterprise, they might define enterprise differently and therefore get different results.
2. Maybe they have skewed samples. The two companies that performed the surveys might survey companies that they, for one reason or another, are more comfortable with or know better. Skewed samples lead to inaccurate results, as notoriously demonstrated in the Literary Digest polling fiasco of 1936.
3. Maybe they talked to different types of people. TheInfoPro may have talked to infrastructure or storage managers, who have no plans to use cloud storage. Cedexis, on the other hand, focused on applications (indeed, it was not clear from the presentation I saw that they talked to anyone, but rather traced execution paths of actual applications). In other words, the sample sets represented two entirely different roles, and the results reflected cloud storage use by that type of role.
I think it is this latter explanation that approaches the truth, and it reminds me a lot of discussions about open source within enterprises over the past decade.
Open source software was the cause of many embarrassing conversations with senior IT management. During a discussion about the benefits of open source, a CIO or senior IT executive would emphatically state: "We have a policy against using open source software. There's no open source software in any of our systems."
Of course, if one then wandered the cubicles, one found that technical personnel readily admitted that they were using open source software components left and right. Reasons cited for this included:
• Ease of acquisition. Open source components were easily downloaded, with no need to endure overly aggressive salespeople or confront uncooperative procurement staff. Need a component? Do a search, hit the download link, and ten minutes later you're moving forward with your engineering task.
• Better functionality. Open source software often seemed more fully featured and higher quality than the packaged alternatives. And, in any case, getting access to the latest version did not require a lengthy negotiation for an upgrade license.
• Low cost. Many times I heard developers explain that they had turned to open source because strict project timelines had been imposed on them with inadequate funding to procure the software components necessary for application development. Open source enabled project teams to get the job done at much lower cost, and nobody higher up ever knew how such cost-effective projects were possible.
The net result of this disconnect was that many IT executives were unaware of the actual shop-floor practices of their own organizations. A famous example: Jonathan Schwartz, erstwhile CEO of Sun, recounted in his blog an interaction with a CIO who maintained that no MySQL was used anywhere in her organization; the Sun sales rep then pointed out that 1,300 copies had been downloaded by people with e-mail addresses from the company's domain.
I believe the same phenomenon is going on with regard to cloud computing, with the same lack of awareness regarding what methods are being used to deliver applications. This can account for the strikingly discordant survey results described at the beginning of this piece.
The parallel between open source and cloud computing is this: the low cost and ease of access encourage developer adoption, with little awareness or control by IT management. The implications of this fact are the following:
• Applications will be rolled out quickly, but with elements that do not conform with, or actively conflict with, "official" IT policy. This is sometimes referred to in a disparaging fashion as "shadow IT"; the positive spin is usually something along the lines of "getting something done because the IT group never delivers."
• IT will end up supporting systems containing components it is unaware of and has no support plans for. The implications for GRC are obvious -- how can you maintain compliance and risk management when you don't even know what's in your systems? I always felt that the prevalence of applications containing open source components no one but the developer knew about was an indictment of the management controls in those organizations. After all, if your main job is to use technology to deliver business value and you don't know what technology is in place, isn't that a basic failure of responsibility?
• Business value will be front-loaded. It's easy to point out the problems of this adoption, but the fact remains that developers adopt these easily accessible tools to deliver systems more quickly, which reduces the time to achieving business value. The lengthy project plans imposed by commercial procurement, etc., have the effect of putting spend early and delivering business value late. Developer adoption of cloud reverses that equation and offers opportunities to obtain business value sooner. In fact, it is something of a mischaracterization to describe this phenomenon as purely a developer decision, as that paints a picture of an out-of-control engineer who refuses to play by the rules. Many (if not most) times, the engineers are responding to business-generated requests driven by time-to-market needs or competitive pressure.
One interesting difference between open source and cloud computing presents itself: open source never got much attention from senior IT management, despite its promise of reduced cost, while senior IT management is all over cloud computing. Why the different reaction to two very similar opportunities?
Here are some potential reasons:
• Cloud computing addresses a more centralized problem. Software components are spread throughout applications, so the money is dispersed among many budgets and the total spend -- and potential savings -- isn't obvious. By contrast, hardware lives in one place in the budget, so the size of the issue is clearer, which generates more attention and more will to address the problem.
• IT organizations are used to changing out hardware and infrastructure, while applications are treated with a "touch as little as possible and change only when absolutely necessary" mindset. Rapid hardware refresh cycles provide more confidence that implementing cloud computing is tractable.
• Cloud computing seems easier to solve. Rewriting applications is a pain, and no software vendor encourages an IT organization to consider moving to a new component; in fact, vendors take pains to illustrate how difficult switching components would be. By contrast, infrastructure and hardware vendors are eager to suggest that their customers move to cloud computing, because doing so will require a whole new round of investment; they therefore have a vested interest in portraying the move as straightforward. Whether it truly is straightforward, of course, remains to be seen.
The motivations for developers to adopt cloud computing seem eerily similar to those for open source software use. It seems extremely plausible that a similar outcome will occur: years after the "official policy" forbade the use of open source, it has, in fact, grown and become the de facto way systems are developed. Trying to control an irresistible tide is futile, as King Canute discovered to his discomfiture.
Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualization to date.
Read more about virtualization in CIO's Virtualization Drilldown.