A new Cloud cache system developed at the University of Sydney promises to simplify the deployment of applications with large memory requirements.
Cache-as-a-service (CaaS) uses traditional caching protocols and gives users with big input/output (I/O) requirements — such as scientists and the finance and e-commerce industries — the extra power needed to readily deploy high-performance computing services onto the Cloud.
Director of the project, Professor Albert Zomaya from the university’s School of Information Technologies, told Techworld Australia that CaaS features an elastic cache system that allows users to rent the memory needed for their computing requirements.
Zomaya said CaaS can be offered as an add-on service by Cloud providers and will enable users to acquire extra cache for less money.
In-memory caching in the Cloud is also offered by services such as Amazon ElastiCache and RAMCloud. However, according to Zomaya, those services use key-value style concepts, whereas CaaS uses traditional caching concepts.
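The distinction Zomaya draws can be sketched in a few lines of Python. This is a hypothetical illustration only — it is not the CaaS system or the ElastiCache client API. In the key-value model the application explicitly puts and gets values by key; in the traditional, transparent model a cache sits between the application and a slower backing store, and the application simply issues reads without managing the cache at all.

```python
# Hypothetical sketch -- not the CaaS API, ElastiCache, or RAMCloud.

class KeyValueCache:
    """Key-value style (the ElastiCache/RAMCloud model): the
    application manages cache keys explicitly."""
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key, default=None):
        return self._store.get(key, default)


class TransparentCache:
    """Traditional caching model: sits in front of a slower backing
    store; the application only calls read() and never sees the cache."""
    def __init__(self, backing_store):
        self._backing = backing_store  # e.g. a dict standing in for disk
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self._cache:    # cache hit: no backing-store access
            self.hits += 1
            return self._cache[block_id]
        self.misses += 1               # cache miss: fetch and remember
        value = self._backing[block_id]
        self._cache[block_id] = value
        return value


# The caller never touches TransparentCache's internals:
disk = {"block-0": b"data"}
cache = TransparentCache(disk)
cache.read("block-0")  # miss: loaded from the backing store
cache.read("block-0")  # hit: served from memory
print(cache.hits, cache.misses)  # prints "1 1"
```

The second model mirrors what the article describes: the caching layer is invisible to the code that benefits from it, which is why CaaS can be slotted in at the operating-system level without changing applications.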
“A CaaS offering lets you pay for your requirements as you go, without incurring extra overheads,” he said.
“So what you’re saying to the user now is, ‘I’m providing you with the capability of getting memory but you don’t need to go and purchase extra physical infrastructure to be able to do that’.”
Zomaya said that because the extra cache is provided at the operating system level, it is not visible to users and is accessible only to its owner, so there are no security issues.
“The cache is unseen,” he said.
“As the user, you don’t get to see this, you don’t get to interfere with it and that’s the whole idea because … users will have different capabilities and understandings of how the Cloud works.”
The CaaS research has recently been accepted following evaluation and peer review, and is expected to be published in a journal next year.
Follow Diana Nguyen on Twitter: @diananguyen9
Follow Techworld Australia on Twitter: @Techworld_AU