
Managing DynamoDB data

Overview
Difficulty: Intermediate
Duration: 3h 11m
Students: 1394

Description

In this group of lectures, we run a hands-on deployment of the next iteration of the Pizza Time solution. The Pizza Time business has been a success. It needs to support more customers and wants to expand to meet a global market.

We define our new solution, then walk through a hands-on deployment that extends our scalability, availability, and fault tolerance.

Transcript

Hi, and welcome to this lecture.

In this lecture, we will have an overview of managing DynamoDB data. We will talk about how to make backups of your DynamoDB tables, and we will also talk about cross-region replication.

DynamoDB automatically replicates your tables across multiple Availability Zones within an AWS Region. That might be enough for you, but what happens when an entire AWS Region goes out? You could lose your data, and at the very least you will have some availability issues. That's why it is important to have a backup strategy for your DynamoDB tables.

For DynamoDB, you have the option to import or export the data to S3. That's great, but the S3 bucket will also live in the same region as your DynamoDB table, so in the case of an outage, if the entire AWS Region goes out, you will still have a problem.

So, with that in mind, we have cross-region replication available for DynamoDB tables. When you create a table, you have the option to import and export the data. You simply need to select the table, go to Actions, and select either the import or the export action. Both actions take you to the same page, and what will happen is you will be creating a data pipeline. In the case of exporting data from DynamoDB, that pipeline will read the data from your DynamoDB table and send it to an Elastic MapReduce cluster. The Elastic MapReduce cluster will then write that data to an S3 bucket that you specify.

As I said, no matter whether you choose to export or import data, you will be forwarded to the same page. The only difference is that you need to select the DynamoDB template that you want to use.
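The console workflow above relies on the Data Pipeline template and an EMR cluster, but if you just want to see the idea behind a table-to-S3 backup, here is a minimal sketch using the AWS SDK for Python (boto3). The table name, bucket, and object key are placeholders I made up for illustration; a real export of a large table should use the Data Pipeline template or at least parallel scans.

```python
# Minimal sketch of a manual DynamoDB-to-S3 backup with boto3.
# Table name, bucket, and key below are placeholders, not real resources.
import json
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

def backup_table(table_name, bucket, key):
    """Scan every item in the table and store the result as one JSON object in S3."""
    items = []
    paginator = dynamodb.get_paginator("scan")
    for page in paginator.paginate(TableName=table_name):
        items.extend(page["Items"])

    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=json.dumps(items).encode("utf-8"),
    )
    return len(items)

if __name__ == "__main__":
    count = backup_table("pizza-time-orders", "pizza-time-backups", "orders/backup.json")
    print(f"Backed up {count} items")
```

Note that this keeps the whole backup in memory and in a single S3 object, which is fine for a small table but exactly why the console hands large exports off to EMR.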

When you are dealing with cross-region replication, you will need to use the Amazon DynamoDB Cross-Region Replication Library. Cross-region replication is really great, not only for disaster recovery scenarios, but also for high availability, because it will keep DynamoDB tables in sync across multiple regions in near real time. So it's super fast, you can increase your availability, and you can improve the user experience: you keep the data closer to your users, and the users will be able to access your application faster.
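To make the idea concrete, here is a simplified sketch of what stream-based replication does under the hood: read change records from the source table's DynamoDB Stream and replay them against a table in another region. The real Cross-Region Replication Library handles checkpointing, retries, and scaling for you; this only illustrates the concept, and the table and region names are placeholders.

```python
# Sketch: replay DynamoDB Stream records from one region into another.
# Assumes the source table exists with Streams enabled (NEW_IMAGE or
# NEW_AND_OLD_IMAGES) and a same-named table exists in the destination region.
import boto3

SRC_REGION, DST_REGION = "us-east-1", "eu-west-1"
TABLE = "pizza-time-orders"

src_db = boto3.client("dynamodb", region_name=SRC_REGION)
streams = boto3.client("dynamodbstreams", region_name=SRC_REGION)
dst_db = boto3.client("dynamodb", region_name=DST_REGION)

stream_arn = src_db.describe_table(TableName=TABLE)["Table"]["LatestStreamArn"]
shards = streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]["Shards"]

for shard in shards:
    iterator = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]

    for record in streams.get_records(ShardIterator=iterator)["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            # Replay the new item image against the replica table.
            dst_db.put_item(TableName=TABLE, Item=record["dynamodb"]["NewImage"])
        elif record["eventName"] == "REMOVE":
            dst_db.delete_item(TableName=TABLE, Key=record["dynamodb"]["Keys"])
```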

In order to enable cross-region replication, you need to check the read capacity on your source table and the write capacity on your destination table. You may recall that every time you create a new table, you need to provision throughput capacity: a read capacity and a write capacity. That's exactly what we are talking about here.

What that means is, let's say that we have replication going in one direction. We will be reading data from the source table, so we need to provision the right amount of read capacity on that table, and we will be writing data to the destination table, so we need to provision enough write capacity there to handle the replication. And if you want replication going both ways, you need to do this on both tables: each table then needs the proper read capacity and the proper write capacity.
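As a rough sketch of what that provisioning looks like through the SDK, you can adjust each table's throughput with an UpdateTable call. The capacity numbers, table name, and regions below are placeholders, not a sizing recommendation.

```python
# Sketch: provision extra read capacity on the source table and extra write
# capacity on the destination table before enabling replication.
import boto3

src = boto3.client("dynamodb", region_name="us-east-1")
dst = boto3.client("dynamodb", region_name="eu-west-1")

# Source table: replication reads from here, so give it enough read capacity.
src.update_table(
    TableName="pizza-time-orders",
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 10},
)

# Destination table: replication writes here, so give it enough write capacity.
dst.update_table(
    TableName="pizza-time-orders",
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 50},
)
```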

That's not really complicated to do, but I won't show you how to calculate the read and write capacity that you need, because that is not part of the SysOps exam.

About the Author

Students: 14278
Labs: 11
Courses: 6

Eric Magalhães has a strong background as a Systems Engineer for both Windows and Linux systems and currently works as a DevOps Consultant for Embratel. Lazy by nature, he is passionate about automation and anything that can make his job painless, so his interest in topics like coding, configuration management, containers, CI/CD, and cloud computing went from a hobby to an obsession. He holds multiple AWS certifications and, as a DevOps Consultant, helps clients understand and implement the DevOps culture in their environments. Besides that, he plays a key role in the company, developing pieces of automation using tools such as Ansible, Chef, Packer, Jenkins, and Docker.