DynamoDB Key Features
Difficulty
Intermediate
Duration
3h 3m
Students
1697
Ratings
4.7/5
Description

This course provides detail on the AWS Database services relevant to the AWS Certified Developer - Associate exam. This includes Amazon RDS, Aurora, DynamoDB, MemoryDB for Redis, and ElastiCache.

Want more? Try a lab playground or do a Lab Challenge!

Learning Objectives

  • Obtain a solid understanding of the following Amazon database services: Amazon RDS, Aurora, DynamoDB, MemoryDB for Redis, and ElastiCache
  • Create an Amazon RDS database
  • Create a DynamoDB database
  • Create an ElastiCache cluster
Transcript

Hello and welcome to this lecture, where I’ll be talking about some of the key features and characteristics of this database. Generally, if you hear DynamoDB mentioned, you’ll hear it mentioned in the same breath as two other phrases. 

  1. Highly available and durable

  2. Infinitely Scalable and fast

Let’s go through each one. 

DynamoDB is designed to be highly available by default. Your data is automatically replicated across three different Availability Zones within a geographic Region. In the case of an outage or an incident affecting an entire hosting facility, DynamoDB will route around the affected Availability Zone, continuing to provide resiliency. 

The replication of your data usually happens quickly, in milliseconds, but sometimes it can take longer. This means that your queries may return older versions of data before the most recent copy is fully replicated across the AZs. This is known as eventual consistency, and it is the default mode of DynamoDB. 

However, there may be some workloads that require strong consistency, to ensure that you’re always receiving the most up-to-date information from your database. In these cases, with every read request to your database, you can specify whether you’d prefer to use strong or eventual consistency. If you do use strong consistency, you will always receive the newest version of your data, however, those reads may take a small performance hit. 
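To make the consistency choice concrete, here is a minimal sketch of how a per-request consistency preference is expressed. The request shape matches DynamoDB's GetItem API, but the table and key names are illustrative, and the helper itself is hypothetical; with boto3 you would pass these parameters to `client.get_item(**params)`.

```python
def build_get_item_request(table_name, key, strongly_consistent=False):
    """Build the parameters for a DynamoDB GetItem call.

    ConsistentRead=True requests a strongly consistent read;
    the default (False) is eventually consistent.
    """
    return {
        "TableName": table_name,
        "Key": key,
        "ConsistentRead": strongly_consistent,
    }

# Illustrative table and key names; with boto3 this dict would be
# passed to client.get_item(**params).
params = build_get_item_request(
    "Orders",
    {"OrderId": {"S": "order-1001"}},
    strongly_consistent=True,
)
print(params["ConsistentRead"])  # True
```

Because the flag is per request, a single application can mix both modes: eventually consistent reads for latency-tolerant pages, strongly consistent reads only where stale data would be a problem.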

DynamoDB also supports transactions, if your workload requires ACID compliance: atomicity, consistency, isolation, and durability. This is basically a way of saying “here’s a set of operations I’d like to do on the database. I want either all of them to happen or none of them to happen.” This is very important in use cases like banking, where if one operation fails, it can drastically affect the application and customer experience. 
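The banking example above can be sketched as a TransactWriteItems request: debit one account and credit another, with a condition that fails the whole transaction if funds are insufficient. The request shape follows DynamoDB's TransactWriteItems API, but the table, account IDs, and helper function are illustrative; with boto3 you would pass this dict to `client.transact_write_items(**tx)`.

```python
def build_transfer_transaction(table, from_acct, to_acct, amount):
    """Build a TransactWriteItems request that moves `amount` between
    two accounts. DynamoDB applies all items or none of them."""
    return {
        "TransactItems": [
            {
                "Update": {
                    "TableName": table,
                    "Key": {"AccountId": {"S": from_acct}},
                    "UpdateExpression": "SET Balance = Balance - :amt",
                    # If this condition fails, the entire transaction
                    # is rejected and neither account changes.
                    "ConditionExpression": "Balance >= :amt",
                    "ExpressionAttributeValues": {":amt": {"N": str(amount)}},
                }
            },
            {
                "Update": {
                    "TableName": table,
                    "Key": {"AccountId": {"S": to_acct}},
                    "UpdateExpression": "SET Balance = Balance + :amt",
                    "ExpressionAttributeValues": {":amt": {"N": str(amount)}},
                }
            },
        ]
    }
```

Note that both updates travel in a single request, which is what gives the all-or-nothing guarantee; issuing two separate UpdateItem calls would not.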

So we know at this point DynamoDB is highly available by default since it replicates your data across three AZs, but you might require even greater levels of resiliency and performance. For these use cases, DynamoDB offers Global Tables. Global Tables enables you to replicate your data across Regions. To do this, the service sets up replica tables in the Regions that you choose. It’s as easy as pushing a button and the service will set up the tables on your behalf and synchronize your data. These replicas are active-active, meaning you can write to any table, and read from whatever table is closest to you. This is great for use cases where you need very high performance and require low latency reads and writes to tables that are close to several locations. 
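Adding a Global Tables replica is done by updating the table itself. The request shape below follows the current (2019.11.21) version of Global Tables, where UpdateTable takes a `ReplicaUpdates` list; the table name, Region, and helper are illustrative, and with boto3 the dict would go to `client.update_table(**req)`.

```python
def build_add_replica_request(table_name, region):
    """Build UpdateTable parameters that add a Global Tables replica
    of an existing table in another Region."""
    return {
        "TableName": table_name,
        "ReplicaUpdates": [
            # "Create" adds a replica; "Delete" would remove one.
            {"Create": {"RegionName": region}}
        ],
    }

req = build_add_replica_request("Orders", "eu-west-1")
print(req["ReplicaUpdates"][0]["Create"]["RegionName"])  # eu-west-1
```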

In terms of other data durability features, DynamoDB also has backup functionality. Backups work a little differently with DynamoDB. Because of its high availability configuration, you’re not worried about losing a server or the data on it. However, you may still want to take backups for compliance purposes or to ensure that you can roll back to a known good state. There are two forms of backups with DynamoDB: 

  1. The first option is on-demand backups. You push a button any time you’d like and it creates a full backup of your data. This is mostly used for compliance and archiving purposes.

  2. The second option is point-in-time recovery. This enables you to restore your table to its state at any second within the last 35 days. This is best used in cases where a mistake is made and you want to go back in time to before that mistake happened.

You can use one or both of these options or if you’d prefer a centralized backup interface for all of your different databases, DynamoDB also integrates with AWS Backup. 
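Both backup options above map to simple API calls. The request shapes below follow DynamoDB's CreateBackup and UpdateContinuousBackups APIs, but the table and backup names are illustrative and the helpers are hypothetical; with boto3 these dicts would be passed to `client.create_backup(**req)` and `client.update_continuous_backups(**req)` respectively.

```python
def build_backup_request(table_name, backup_name):
    """Parameters for CreateBackup: a full, on-demand backup,
    typically used for compliance and archiving."""
    return {"TableName": table_name, "BackupName": backup_name}


def build_pitr_request(table_name, enabled=True):
    """Parameters for UpdateContinuousBackups, which toggles
    point-in-time recovery (restore to any second in the
    last 35 days)."""
    return {
        "TableName": table_name,
        "PointInTimeRecoverySpecification": {
            "PointInTimeRecoveryEnabled": enabled,
        },
    }
```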

On to the next key phrase with DynamoDB - it is “infinitely scalable and fast”. The service has essentially no upper limit on how large a table can grow. And regardless of how large the table is, DynamoDB provides fast performance. Unlike a relational database, which can slow down as a table gets bigger, DynamoDB performance stays consistent even with tables that are many terabytes in size.

So, when do you use DynamoDB? 

AWS published a very helpful blog called “How to determine if Amazon DynamoDB is appropriate for your needs”. I’ll let you read the full article - but I’ll summarize the main key points. 

This database is best used for OLTP workloads that need high scalability and data durability. It’s typically recommended if you’re developing a new application, especially if it’s a serverless application. While migrating a legacy application to use DynamoDB is possible, it might not always be worthwhile given the time and effort required. 

It’s not recommended for OLAP workloads or for applications that require ad-hoc query access; a relational database is better suited to both of those workload types. You’ll commonly see DynamoDB used in gaming applications to provide storage for leaderboards, in e-commerce applications to store shopping cart information and user profile data, and even in the transportation industry to store GPS data for ride sharing. 

Any use case where you require a fully managed, serverless, infinitely scalable OLTP database - and where you have a solid understanding of your access patterns - is a great fit for DynamoDB. That’s it for this one - I’ll see you for the next one! 

About the Author
Students
237782
Labs
1
Courses
232
Learning Paths
187

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to Cloud reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.