How Freelancer.com used the cloud to architect for growth

From its inception, Freelancer.com took a bet on the public cloud as a way of providing easily scalable infrastructure

When Freelancer.com launched in 2009, CEO Matt Barrie and his team knew the audience for the online outsourcing marketplace was big. And it was only going to grow. The market opportunity was "going to be phenomenally huge," says David Harrison, the company's vice-president of engineering.

For Freelancer.com, that market has translated into some 7.4 million verified users and more than a billion dollars' worth of projects posted on the site. At the start of last year, figures from the company indicated that more than 2000 new projects were being posted to the site every day.

And it has been Harrison's job to architect the Web infrastructure to cope with this explosive growth. The end result is that Freelancer.com is almost entirely based in the cloud.

Harrison says that the company has just ".001 per cent" of its systems outside of the cloud, "sitting round doing some very old stuff for us". Everything else runs in Amazon's public cloud.


In 2009 the company had only a handful of dedicated hosts, but the team's sense of the potential for growth meant that early on it decided to go "effectively all-in" on cloud. It was quite a leap, considering that, although S3 launched in 2006, it was only in October 2008 that Amazon's Elastic Compute Cloud – EC2 – had officially come out of beta.

"We didn't want to be constrained by going out and investing in huge amounts of hardware up front only to find out it [didn't] match our actual needs, or that we'd under-bought or over-bought or anywhere in between," Harrison says.

"S3 was a really stable product and we looked at that and we went, 'Look we can see where they're going with that'," Harrison says.

"EC2 had been out there and had been used a bit, but obviously not to the level that it is now, where it runs a large chunk of the internet. But we looked at that and we saw the potential and the promise.

"And we knew that in doing the migration that we were going to do, we had a really good opportunity to do it early on before we got to the point where we had much bigger infrastructure and the challenge of migrating across would be much, much greater."

Harrison said that the team had banked on the site growing dramatically, and that cloud would offer a flexible way of supporting that growth, with the added benefit of "you only pay for what you eat" – the elastic, pay-as-you-go model.

Although in the years since Amazon launched its cloud the number of services offered by AWS has increased dramatically – adding products from the cold storage service Glacier to the data-warehouse-as-a-service Redshift, which launched late last year – Freelancer.com has relied on its internal technical expertise to manage much of its technology stack itself: an approach that's close to 'pure' infrastructure-as-a-service.

"We've got a lot of capability to really delve into some of the guts of some of the stuff we work with," Harrison says. So although Freelancer.com employs some cloud services like AWS's Relational Database Service (RDS) and Elastic Load Balancing (ELB), much of the company's architecture is based around employing the raw compute and storage power delivered by the cloud and 'rolling its own' stack on top of that.

"So when it comes to running MySQL databases, we have a lot of experience doing that," Harrison explains.

"We've got a very specific workload that we've gone through and tuned for very carefully over a long period of time... Having that ability to jump in and look at the traffic maybe, or check all of the different system parameters and tune them specifically for our needs.

"In some cases, especially when you're experimenting with the new servers, actually being able to see what it's doing some times, and applying patches and sometimes communicating with the developers of the project you might be working with and working out patches with them – that's invaluable for us because we push so much at the outer edge of what's possible and try to create new solutions as we go."
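The kind of workload-specific tuning Harrison describes can be sketched as a simple audit of server settings against a site-specific baseline. The following is a minimal, hypothetical Python helper, not Freelancer.com's tooling: the variable names are real MySQL system variables, but the baseline values are purely illustrative, and in production the live values would come from a `SHOW GLOBAL VARIABLES` query rather than a hard-coded dict.

```python
# Hypothetical helper: compare live MySQL system variables against a
# workload-specific baseline and report any that have drifted.
# Variable names are real MySQL settings; values are illustrative only.

def audit_variables(live, baseline):
    """Return (name, live_value, expected_value) for every baseline
    variable whose live value does not match."""
    drifted = []
    for name, expected in sorted(baseline.items()):
        actual = live.get(name)
        if actual != expected:
            drifted.append((name, actual, expected))
    return drifted

# In production `live` would be built from `SHOW GLOBAL VARIABLES`;
# here we simulate that query's result with a plain dict.
live = {
    "innodb_buffer_pool_size": "8589934592",   # 8 GiB
    "innodb_flush_log_at_trx_commit": "1",
    "max_connections": "151",                  # MySQL's default
}

baseline = {
    "innodb_buffer_pool_size": "8589934592",
    "innodb_flush_log_at_trx_commit": "1",
    "max_connections": "2000",                 # tuned for a busy site
}

for name, actual, expected in audit_variables(live, baseline):
    print(f"{name}: live={actual} expected={expected}")
```

Run regularly, a check like this catches servers that have quietly fallen back to defaults, which is one way a team running its own MySQL fleet keeps a carefully tuned configuration from eroding over time.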

Currently the company is exploring the potential for "more aggressive" use of AWS's regions to offer increased performance and resilience. "If you're in Australia and I want to serve you content, it's better for me to try to serve it from the Sydney region. And if you're in the US, I'd rather serve it to you from the US region close to you," Harrison says.
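The region strategy Harrison outlines amounts to mapping each user to the nearest AWS region. A toy Python sketch of that idea follows: the region codes are real AWS identifiers, but the country-to-region table is an assumption for illustration, and a real deployment would more likely use DNS latency-based routing (for example via Route 53) than a static lookup.

```python
# Illustrative sketch of serving users from the nearest AWS region.
# Region codes are real AWS identifiers; the mapping itself is an
# assumption made for this example.

COUNTRY_TO_REGION = {
    "AU": "ap-southeast-2",  # Sydney
    "NZ": "ap-southeast-2",
    "US": "us-east-1",
    "GB": "eu-west-1",
}

DEFAULT_REGION = "us-east-1"

def region_for(country_code):
    """Pick the closest region for a user's country, with a fallback."""
    return COUNTRY_TO_REGION.get(country_code.upper(), DEFAULT_REGION)

print(region_for("au"))  # prints ap-southeast-2: Australians hit Sydney
```

The same lookup also sketches the resilience angle: if a region fails, re-pointing its countries at another entry in the table is a one-line change.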

The company is also looking at how much work it can shift to Web clients through increased use of JavaScript. "For example, we've deployed some code that uses Backbone.js, giving us a client-side stack that interacts with a server-side stack.

"When you've got 7 million users and you keep growing at that pace, you don't want to keep permanently adding more and more infinite amounts of resource, however easy that is.

"You want to be able to do some things client side, and then really focus your central architecture on doing the things that you really want to do there – be that caching or be that expensive object calculation or whatever else."

Read part 2: Lessons learnt: Building scalability and resilience in the cloud

Rohan Pearce is the editor of Techworld Australia and Computerworld Australia. Contact him at rohan_pearce at

Follow Rohan on Twitter: @rohan_p
