8 keys to DynamoDB success

How to ensure that Amazon’s fast and scalable key-value database works for you, not against you

DynamoDB, Amazon's fully managed NoSQL database, is an impressive piece of technology, and it's amazing that AWS has opened it up for the entire world to use. What took millions of dollars in R&D to build – a product that serves millions of queries per second with low latency – can effectively be rented for dollars per hour by anyone with a credit card. For those who need a key-value store that can reliably hold massive amounts of data, there aren't many better options.

While DynamoDB generally works quite well, it’s inevitable that we all run into issues. A few months ago at Segment, my colleagues wrote a detailed blog post about our own DynamoDB issues. Mainly, we were hitting our rate limits due to problems with our partitioning setup – a single partition was limiting throughput for an entire table. Solving the problem took a superhuman effort, but it was worth it (to the tune of $300K annually).

You can read the full story here. But to save you the time, I’ve distilled the experience of our engineering team into eight pieces of advice that should help you make the most of DynamoDB and ensure that it really works for you.

1. Ask yourself: Do you actually need DynamoDB?

First, make sure DynamoDB is actually the right tool for the job. If you have a small amount of data, need aggregations, or need the fine-grained ability to join lots of data together, DynamoDB probably isn't a fit. RDS or Aurora is likely your best choice – or, where durability doesn't matter and you don't need aggregations, Redis on ElastiCache.

2. Read the detailed DynamoDB documentation – all of it!

While almost everyone reads the general AWS documentation (how else will you get things up and running?), the sections that explain how to actually use the tool and lay out your data at scale are easy to miss. They are typically pretty dense, and because DynamoDB isn't open source, there is less third-party literature on stress tests and benchmarks to fall back on. To master the tool, you need to know these sections inside and out. So read them – all of them.

3. Pull in Amazon for help when needed

AWS has many tools on its side for diagnosing what's happening in your account. We've had the best luck reaching out to our account representative for everything from limit increases to detailed technical support. They've been indispensable for putting us in touch with the right people (including engineers on the product side, who have been insanely helpful) and fast-tracking our support requests.

4. Read before write, if possible

In DynamoDB, read throughput is five times cheaper than write throughput. If your workload involves a lot of writes, see if you can read the item first and skip the write when nothing has changed. Reading first will help you avoid throttling and cut your bill in a write-heavy environment where the same keys may be written multiple times.
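Here's a minimal sketch of the pattern in Python with boto3. The table name ("profiles") and partition key ("id") are hypothetical – substitute your own schema:

```python
import boto3

# Hypothetical table and key names, for illustration only.
table = boto3.resource("dynamodb").Table("profiles")

def write_if_changed(key, new_item):
    """Read the current item first; skip the write when nothing changed.

    Reads cost one-fifth of writes, so in a write-heavy workload where
    many updates are no-ops, this check pays for itself.
    """
    resp = table.get_item(Key={"id": key})
    if resp.get("Item") == new_item:
        return  # identical item already stored; save the write capacity
    table.put_item(Item=new_item)
```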

5. Batch writes by partitioning upstream

If all updates for a given key are routed to the same machine upstream of DynamoDB, you can batch data together and save on writes. Instead of writing every time you get a key update, you can accumulate the updates and write the combined result once per second, or once per minute. Batching lets you trade latency against DynamoDB cost. Partitioning (in a system like Kafka or Storm) lets you avoid any sort of locking or race conditions that might come from multiple concurrent writers, as the sketch below illustrates.
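A sketch of per-key coalescing with a periodic flush, assuming the upstream system (say, a Kafka partition) routes each key to exactly one consumer, so no locking is needed. All names here are hypothetical:

```python
import boto3

table = boto3.resource("dynamodb").Table("events")  # hypothetical table

# Latest state per key, accumulated between flushes. Because a single
# partitioned consumer owns each key, no locks are needed around this dict.
pending = {}

def on_update(key, item):
    # Coalesce: only the most recent state per key survives until flush.
    pending[key] = item

def flush():
    # One write per key per interval instead of one write per update.
    with table.batch_writer() as batch:
        for item in pending.values():
            batch.put_item(Item=item)
    pending.clear()

# In a real consumer, on_update() is called from the partition's message
# loop and flush() runs on a timer (e.g., every second or every minute).
```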

6. Dynamically adjust your throughput on spikes

If your traffic is bursty, you can achieve significant savings by "auto-scaling" your DynamoDB throughput to match your actual load. In fact, AWS just launched this feature, which you can read about on the AWS blog (my team has been using a fork of the dynamic DynamoDB project for quite some time, so this is a welcome development). For additional cost savings, you can use AWS Lambda and CloudWatch Events to adjust provisioned throughput to track how much you're actually consuming.
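Under the hood, the new feature uses the Application Auto Scaling API, which you can also drive yourself. A sketch of registering a table for write auto-scaling (the table name and capacity bounds are hypothetical, and IAM role setup is omitted):

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/events",  # hypothetical table
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=1000,
)

# Scale so consumed capacity stays near 70% of provisioned capacity.
autoscaling.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/events",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyName="scale-writes-with-load",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```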

7. Leverage DynamoDB Streams

DynamoDB has a little-known feature, DynamoDB Streams, that publishes every item-level change to what is essentially a Kinesis feed. Streams are very useful for building downstream pipelines, so you aren't constantly running scans or rolling your own change eventing.
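In practice most pipelines attach an AWS Lambda function to the stream, but you can also read it directly with the low-level streams client. A minimal sketch (the stream ARN is hypothetical; a production consumer would loop on NextShardIterator rather than read each shard once):

```python
import boto3

streams = boto3.client("dynamodbstreams")

# Hypothetical ARN; find yours in describe_table()["Table"]["LatestStreamArn"].
STREAM_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/events/stream/example"

desc = streams.describe_stream(StreamArn=STREAM_ARN)
for shard in desc["StreamDescription"]["Shards"]:
    iterator = streams.get_shard_iterator(
        StreamArn=STREAM_ARN,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",  # start from the oldest record
    )["ShardIterator"]
    for record in streams.get_records(ShardIterator=iterator)["Records"]:
        # Each record carries the change type and the item's key/images.
        print(record["eventName"], record["dynamodb"].get("Keys"))
```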

8. Log all of your hot shards

When my team faced excessive throttling, we figured out a clever hack: Whenever we hit a throttling error, we logged the particular key we were trying to update. In aggregate, the logs gave us a holistic view of what was going on and allowed us to blacklist certain problematic "hot keys."
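A sketch of the trick in Python with boto3 – catch the throttling error, log the offending key, and short-circuit writes for keys you've blacklisted. The table name, key attribute, and blacklist are all hypothetical:

```python
import logging

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("events")  # hypothetical table
log = logging.getLogger("hot-keys")

HOT_KEYS = set()  # keys we've chosen to blacklist after analyzing the logs

def safe_put(item):
    key = item["id"]  # hypothetical partition key name
    if key in HOT_KEYS:
        return  # drop (or divert) writes for known-hot keys
    try:
        table.put_item(Item=item)
    except ClientError as e:
        if e.response["Error"]["Code"] == "ProvisionedThroughputExceededException":
            # Log the key; aggregating these logs reveals hot partitions.
            log.warning("throttled key: %s", key)
        raise
```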

DynamoDB will perform very differently depending on how your data is laid out and how you decide to query it. It's important to understand your query patterns, indexing needs, and throughput. DynamoDB may be a cloud service run by AWS engineers, but that doesn't excuse poor architecture decisions – there is no "magic" under the hood. DynamoDB is a great piece of technology, and using it correctly can make the difference between a $1M services bill and one that's a mere fraction of that.
