The term "cloudbursting" was coined by Amazon Web Services evangelist Jeff Barr to describe the use of cloud computing to deal with overflow requests, such as those that occur during seasonal rushes to online retail sites.
Rather than investing in additional hardware, software and personnel to scale and manage the myriad pieces of infrastructure necessary to increase capacity for Web applications, cloudbursting enables you to take advantage of the cloud to increase capacity on demand.
Cloudbursting addresses two basic problems. First, companies periodically need additional capacity, but infrastructure sized for peak loads takes exceedingly long to pay for itself because the extra capacity is only used occasionally.
Second, companies are hesitant to move all of their infrastructure to a cloud computing provider due to security and stability concerns. Cloudbursting doesn't eliminate that exposure, but a problem with the cloud affects only the overflow capacity rather than the entire operation, as it would if the cloud handled everything.
Cloudbursting effectively enables organizations to treat the cloud like a secondary data center. They maintain and control their infrastructure and applications while leveraging the ability of clouds to expand and contract dynamically, making it financially feasible to use additional resources periodically without a large investment.
What's the catch?
The actual network and application delivery infrastructure requirements are fairly straightforward and based on existing, well-understood methods for implementing global load balancing. This makes cloudbursting appear deceptively simple, but as is usually the case, application issues such as data replication and duplication make the entire process more difficult, if not impossible, for some applications.
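The global load balancing decision at the heart of cloudbursting can be sketched in a few lines: route requests to the local data center until it nears capacity, then overflow to the cloud. The endpoint URLs and the 80% threshold below are illustrative assumptions, not part of any specific product.

```python
# Minimal sketch of the cloudbursting routing decision a global load
# balancer makes: prefer the local data center, burst to the cloud
# once local utilization crosses a threshold.

LOCAL_ENDPOINT = "https://app.example.com"        # hypothetical local data center
CLOUD_ENDPOINT = "https://app.cloud.example.com"  # hypothetical cloud instance
BURST_THRESHOLD = 0.8  # assumed policy: burst above 80% local utilization

def choose_endpoint(active_requests: int, local_capacity: int) -> str:
    """Return the endpoint a new request should be routed to."""
    utilization = active_requests / local_capacity
    return LOCAL_ENDPOINT if utilization < BURST_THRESHOLD else CLOUD_ENDPOINT
```

In practice the same decision is usually made via DNS or a global load balancer's health and load metrics rather than a request counter, but the policy is the same.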
While databases can be replicated in real time over the Internet, this is only feasible if you have a high-speed, low-latency link between the data center and the cloud provider. This means most organizations won't be able to take advantage of real-time replication, or mirroring, to address replication and data duplication issues. A more likely scenario is that it will be necessary to keep the cloud version of the data as up to date as possible and replicate it on a regular basis.
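The periodic-replication approach described above amounts to a scheduled job that copies rows changed since the last sync into the cloud copy. The sketch below uses SQLite on both sides so it is self-contained; the `orders` table, its columns, and the timestamp-based change detection are assumptions for illustration.

```python
# Sketch of batch replication to keep the cloud copy "as up to date as
# possible": copy rows modified since the last sync from the local
# database to the cloud database. Run from a scheduler (e.g. cron).
import sqlite3

def replicate_changes(local: sqlite3.Connection,
                      cloud: sqlite3.Connection,
                      since: str) -> int:
    """Copy orders modified after `since` into the cloud copy; return row count."""
    rows = local.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ?",
        (since,),
    ).fetchall()
    cloud.executemany(
        "INSERT OR REPLACE INTO orders (id, amount, updated_at) VALUES (?, ?, ?)",
        rows,
    )
    cloud.commit()
    return len(rows)
```

Because the job only ships deltas, it tolerates a slow or high-latency link far better than real-time mirroring, at the cost of the cloud copy lagging by one sync interval.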
Once the application instance in the cloud is no longer necessary, the data will need to be merged with the local database, through import or replay of transaction logs. Some developers have solved this issue by implementing replication applications of their own, which trigger on database activity and use Web services to replicate the data back to the local data center. These solutions are not perfect and often require manual intervention to clean the data when it is reintroduced.
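The merge-back step can be sketched as replaying a log of cloud-side changes against the local data, flagging rows that also changed locally for the manual cleanup the article warns about. The log format (`key`, `value`, `timestamp` tuples) and the last-writer-wins conflict rule are assumptions for illustration.

```python
# Sketch of merging cloud-side changes back after the burst ends:
# replay logged writes, keeping whichever copy has the newer
# timestamp and flagging conflicts for manual review.

def merge_log(local_rows: dict, log: list) -> tuple[dict, list]:
    """Apply (key, value, ts) log entries; return (merged rows, conflict keys)."""
    merged = dict(local_rows)
    conflicts = []
    for key, value, ts in log:
        if key in merged and merged[key][1] >= ts:
            conflicts.append(key)   # local copy is as new or newer: needs review
        else:
            merged[key] = (value, ts)
    return merged, conflicts
```

Any keys returned in the conflict list represent rows updated in both places during the burst, which is exactly where automated merging breaks down and hand cleanup begins.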