Understanding Provisioned Throughput
Difficulty: Intermediate
Duration: 1h 32m
Students: 20918
Ratings: 4.6/5
Description

Please note this course is outdated and has been replaced with the following courses:

 

This course provides an introduction to working with Amazon DynamoDB, a fully managed NoSQL database service from Amazon Web Services. We begin with a description of DynamoDB and compare it to other database platforms. The course continues by walking you through designing tables and reading and writing data, which works somewhat differently than in other databases you may be familiar with. We conclude with more advanced topics, including secondary indexes and how DynamoDB handles very large tables.

Course Objectives

You will gain the following skills by completing this course:

  • How to create DynamoDB tables.
  • How to read and write data.
  • How to use queries and scans.
  • How to create and query secondary indexes.
  • How to work with large tables. 

Intended Audience

You should take this course if you have:

  • An understanding of basic AWS technical fundamentals.
  • Awareness of basic database concepts, such as tables, rows, indexes, and queries.
  • A basic understanding of computer programming. The course includes some programming examples in Python.

Prerequisites 

See the Intended Audience section.

This Course Includes

  • Expert-guided lectures about Amazon DynamoDB.
  • 1 hour and 31 minutes of high-definition video. 
  • Expert-level instruction from an industry veteran. 

What You'll Learn

  • DynamoDB Basics: A basic and foundational overview of DynamoDB.
  • Creating DynamoDB Tables: How to create DynamoDB tables and understand key concepts.
  • Reading and Writing Data: How to use the AWS Console and API to read and write data.
  • Queries and Scans: How to use queries and scans with the AWS Console and API.
  • Secondary Indexes: How to work with secondary indexes.
  • Working with Large Tables: How to use partitioning in large tables.

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

We've already talked a bit about provisioned capacity, or provisioned throughput, in this course. Because you're asked to configure provisioned throughput when you create a table, it makes sense to dig a little deeper into it now.

When you create a table in DynamoDB, you need to tell Amazon how much capacity you want to reserve for the table. You don't need to do this for disk space; DynamoDB automatically allocates more space for your table as it grows. But you do need to reserve input and output capacity for reads and writes.

Amazon charges you based on the number of read capacity units and write capacity units that you allocate. It's important to allocate enough for your workload, but don't allocate too much, or DynamoDB could become prohibitively expensive. By default, when you create a table in the AWS console, Amazon will configure your table with five read capacity units and five write capacity units. As we saw earlier in this lesson, you can configure provisioned capacity when you create a table by unchecking the checkbox labeled "Use default settings".
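If you prefer to script table creation rather than click through the console, the same provisioned throughput settings can be passed to the API. Here's a minimal sketch using boto3, the Python SDK; the table name and key schema are placeholder examples, not from the video:

    import boto3

    dynamodb = boto3.client('dynamodb')

    # Create a table with 5 read and 5 write capacity units, the same
    # defaults the console would apply. Table and attribute names here
    # are illustrative placeholders.
    dynamodb.create_table(
        TableName='OrderLineItems',
        AttributeDefinitions=[
            {'AttributeName': 'OrderId', 'AttributeType': 'S'},
        ],
        KeySchema=[
            {'AttributeName': 'OrderId', 'KeyType': 'HASH'},
        ],
        ProvisionedThroughput={
            'ReadCapacityUnits': 5,
            'WriteCapacityUnits': 5,
        },
    )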

What happens if your table has five write capacity units but you try to make 20 writes in a second? Amazon will allow you to burst above your provisioned throughput occasionally, which lets you handle occasional spikes without having to worry too much. But the burst capacity is very limited. Once it runs out, your requests will be throttled: requests exceeding your capacity limit will be denied with a ProvisionedThroughputExceededException error.
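To make that concrete, here's a rough sketch of how a Python application might handle that error with boto3. Note that boto3's built-in retry logic already retries throttled requests a few times on its own; this just shows the error code you'd see if those retries are exhausted. The table name and item are placeholders:

    import time
    import boto3
    from botocore.exceptions import ClientError

    dynamodb = boto3.client('dynamodb')

    def put_with_backoff(table_name, item, max_attempts=5):
        """Retry a throttled write with exponential backoff."""
        for attempt in range(max_attempts):
            try:
                return dynamodb.put_item(TableName=table_name, Item=item)
            except ClientError as err:
                code = err.response['Error']['Code']
                if code != 'ProvisionedThroughputExceededException':
                    raise  # some other failure; don't retry here
                time.sleep(0.1 * (2 ** attempt))
        raise RuntimeError('still throttled after %d attempts' % max_attempts)

    put_with_backoff('OrderLineItems', {'OrderId': {'S': '12345'}})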

So what is a capacity unit? Each read capacity unit allows you, once per second, to retrieve one item no larger than 4 kilobytes using strong consistency, which ensures you always get the latest version of that item. If you have five read capacity units, DynamoDB will let you make five such requests per second; the counter resets every second. If your items are larger than 4 kilobytes, you'll use one read capacity unit for every 4 kilobytes, rounded up, so a 5-kilobyte item counts as two read capacity units. And if you elect to use eventually consistent reads rather than strong consistency, you'll use only half as many units. You can specify this on each query.
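As a quick back-of-the-envelope check, you can turn that arithmetic into a few lines of Python. This is only an estimating sketch of the rules just described (4-kilobyte increments, rounded up, roughly half price for eventually consistent reads), not an official calculator:

    import math

    def estimated_read_capacity(item_size_bytes, reads_per_second,
                                eventually_consistent=False):
        # One strongly consistent read per second of an item up to 4 KB
        # costs one unit; larger items round up in 4 KB increments.
        units_per_read = math.ceil(item_size_bytes / 4096)
        total = units_per_read * reads_per_second
        # Eventually consistent reads cost roughly half as much.
        if eventually_consistent:
            total = math.ceil(total / 2)
        return total

    # A 5 KB item read 10 times per second: ceil(5120 / 4096) = 2 units
    # per read, so 20 units with strong consistency, about 10 without.
    print(estimated_read_capacity(5120, 10))                              # 20
    print(estimated_read_capacity(5120, 10, eventually_consistent=True))  # 10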

Generally, I recommend using strongly consistent reads for interactive, user-facing operations, like in your web application, and eventually consistent reads for batch jobs that scan through the entire table, or for other background activity that can afford to be a few seconds behind the latest data in the table.
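In Python, that choice is just a flag on each read. Here's a rough sketch with the boto3 resource API; the table name and key are placeholders:

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource('dynamodb').Table('OrderLineItems')

    # Interactive, user-facing read: ask for the latest data.
    fresh = table.query(
        KeyConditionExpression=Key('OrderId').eq('12345'),
        ConsistentRead=True,
    )

    # Background or batch read: eventually consistent (the default),
    # which consumes roughly half the read capacity.
    cheap = table.query(
        KeyConditionExpression=Key('OrderId').eq('12345'),
        ConsistentRead=False,
    )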

Likewise, one write capacity unit allows you, once per second, to store a single record no larger than 1 kilobyte in your DynamoDB table. Just as with read capacity, if your record exceeds 1 kilobyte, you'll need more than one write capacity unit to store it.
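The write-side arithmetic is the same idea with a 1-kilobyte increment; here's the matching sketch:

    import math

    def estimated_write_capacity(item_size_bytes, writes_per_second):
        # One write per second of an item up to 1 KB costs one unit;
        # larger items round up in 1 KB increments.
        return math.ceil(item_size_bytes / 1024) * writes_per_second

    # A 2.5 KB record written 5 times per second:
    # ceil(2560 / 1024) = 3 units per write, so 15 units in total.
    print(estimated_write_capacity(2560, 5))  # 15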

You can configure read and write capacity on each table when you create it, as we've already seen. You can also adjust it on the fly for an existing table. It doesn't harm anything to make regular adjustments; the table remains fully operational even while capacity is being adjusted.

Let's go back to the AWS console and take a look at our order line items table. This time let's go to the metrics tab. Let's look at the first graph, read capacity. This graph shows you the read capacity units that have been used over the last hour. The graph has a red line which shows how much capacity is provisioned, and a blue line showing how much has been used. Amazon does allow your traffic to occasionally burst above the amount of capacity that you've provisioned, so you might actually see the blue line be higher than the red line for a brief period of time.

The next graph to the right is throttled read requests. If you've run out of capacity at all during the last hour, this graph will show you how many reads or queries have been denied due to throttling. There are similar graphs for write capacity and throttled write requests, but you can see that there's no blue line on the write capacity graph because I haven't been doing any writes to this table.
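Those graphs are backed by CloudWatch, so you can also pull the same numbers programmatically. As a sketch, assuming a table named OrderLineItems, this fetches the consumed read capacity for the last hour; DynamoDB publishes similar metrics such as ConsumedWriteCapacityUnits, ReadThrottleEvents, and WriteThrottleEvents:

    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client('cloudwatch')

    # Consumed read capacity over the last hour, in 5-minute buckets.
    stats = cloudwatch.get_metric_statistics(
        Namespace='AWS/DynamoDB',
        MetricName='ConsumedReadCapacityUnits',
        Dimensions=[{'Name': 'TableName', 'Value': 'OrderLineItems'}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=['Sum'],
    )
    for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
        print(point['Timestamp'], point['Sum'])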

Now let's look at how to adjust the provisioned capacity on an existing table. To do that, we go to the Capacity tab. This tab has a simple form asking how many capacity units you want to provision for the table. As you change the capacity levels, it shows you an estimate of what that will cost each month. Let's adjust this table to 100 read capacity units and 50 write capacity units, then click Save. Once we hit Save, you'll see that the capacity is updating; it will take a few minutes for the new capacity level to take effect. If we go back to the Overview tab while this is happening, you'll see that the table status has changed to Updating as well. The table is still online, and reads and writes work just as they did when the table status was Active. When the new capacity level has taken effect, the table status will automatically change back to Active.
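The same adjustment can be made through the API. A minimal boto3 sketch, again with a placeholder table name, might look like this:

    import boto3

    dynamodb = boto3.client('dynamodb')

    # Raise the table to 100 read and 50 write capacity units, as in the
    # console walkthrough above.
    dynamodb.update_table(
        TableName='OrderLineItems',
        ProvisionedThroughput={
            'ReadCapacityUnits': 100,
            'WriteCapacityUnits': 50,
        },
    )

    # The table status switches to UPDATING while the change is applied
    # and returns to ACTIVE when it's done; reads and writes keep working
    # the whole time. This waiter simply blocks until the table is ACTIVE.
    dynamodb.get_waiter('table_exists').wait(TableName='OrderLineItems')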

This concludes the material about creating tables with Amazon DynamoDB. Continue to the next lesson where we'll show examples of reading and writing data into your tables using the AWS console and the programmatic API.

About the Author

Ryan is the Storage Operations Manager at Slack, a messaging app for teams. He leads the technical operations for Slack's database and search technologies, which use Amazon Web Services for global reach.

Prior to Slack, Ryan led technical operations at Pinterest, one of the fastest-growing social networks in recent memory, and at Runscope, a debugging and testing service for APIs.

Ryan has spoken about patterns for modern application design at conferences including Amazon Web Services re:Invent and O'Reilly Fluent. He has also been a mentor for companies participating in the 500 Startups incubator.