This section of the AWS Certified Solutions Architect - Professional learning path introduces you to the AWS database services relevant to the SAP-C02 exam. You will explore the available service options and learn how to select and apply AWS database services to meet specific design scenarios.
- Understand the various database services that can be used when building cloud solutions on AWS
- Learn how to build databases using Amazon RDS, DynamoDB, Redshift, DocumentDB, Keyspaces, and QLDB
- Learn how to create ElastiCache and Neptune clusters
- Understand which AWS database service to choose based on your requirements
- Discover how to use automation to deploy databases in AWS
- Learn about data lakes and how to build a data lake in AWS
Hello and welcome to this lecture, where I’ll be talking about some of the key features and characteristics of Amazon DynamoDB. Generally, if you hear DynamoDB mentioned, you’ll hear it mentioned in the same breath as two phrases:
- Highly available and durable
- Infinitely scalable and fast
Let’s go through each one.
DynamoDB is designed to be highly available by default. Your data is automatically replicated across three different Availability Zones within a Region. If an outage or incident affects an entire hosting facility, DynamoDB routes requests around the affected Availability Zone, so your tables remain available.
Replication usually completes within milliseconds, but it can occasionally take longer. During that window, your queries may return an older version of the data before the most recent copy has fully replicated across the AZs. This is known as eventual consistency, and it is the default read mode of DynamoDB.
However, some workloads require strong consistency, to ensure that you’re always receiving the most up-to-date information from your database. For these cases, you can specify on every read request whether you’d prefer strong or eventual consistency. If you use strong consistency, you will always receive the newest version of your data; however, strongly consistent reads take a small performance hit and consume more read capacity.
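To make the per-read choice concrete, here’s a minimal sketch of the GetItem request shape. The table and key names are hypothetical, and the parameters are shown as plain dictionaries rather than live boto3 calls; with boto3 you would pass these same kwargs to `client.get_item()`.

```python
# Sketch of a DynamoDB GetItem request (hypothetical "Leaderboard" table).
# ConsistentRead defaults to False, matching DynamoDB's default of
# eventually consistent reads.
def build_get_item_request(table_name, key, strongly_consistent=False):
    """Return GetItem request parameters with the chosen consistency mode."""
    return {
        "TableName": table_name,
        "Key": key,
        "ConsistentRead": strongly_consistent,
    }

# Eventually consistent (the default) vs. strongly consistent read
eventual = build_get_item_request("Leaderboard", {"PlayerId": {"S": "p-123"}})
strong = build_get_item_request(
    "Leaderboard", {"PlayerId": {"S": "p-123"}}, strongly_consistent=True
)
```

The only difference between the two requests is the `ConsistentRead` flag, which is why switching modes per read is so easy.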
DynamoDB also supports transactions, if your workload requires ACID compliance - that’s atomicity, consistency, isolation, and durability. This is basically a way of saying “here’s a set of operations I’d like to perform on the database; I want either all of them to happen or none of them to happen.” This is very important in use cases like banking, where a single failed operation can drastically affect the application and customer experience.
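As a sketch, here’s what that all-or-nothing banking example might look like as a DynamoDB TransactWriteItems request. The `Accounts` table and attribute names are hypothetical; the request is built as a plain dict, which with boto3 you would pass to `client.transact_write_items()`. If the balance condition on the debit fails, neither update is applied.

```python
# Sketch: an all-or-nothing transfer between two accounts (hypothetical
# "Accounts" table). The ConditionExpression on the debit guards the whole
# transaction - insufficient funds cancels both updates.
def build_transfer_request(from_id, to_id, amount):
    amt = {":amt": {"N": str(amount)}}
    return {
        "TransactItems": [
            {"Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": from_id}},
                "UpdateExpression": "SET Balance = Balance - :amt",
                "ConditionExpression": "Balance >= :amt",
                "ExpressionAttributeValues": amt,
            }},
            {"Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": to_id}},
                "UpdateExpression": "SET Balance = Balance + :amt",
                "ExpressionAttributeValues": amt,
            }},
        ]
    }

req = build_transfer_request("acct-a", "acct-b", 50)
```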
So we know at this point DynamoDB is highly available by default since it replicates your data across three AZs, but you might require even greater levels of resiliency and performance. For these use cases, DynamoDB offers Global Tables. Global Tables enables you to replicate your data across Regions. To do this, the service sets up replica tables in the Regions that you choose. It’s as easy as pushing a button and the service will set up the tables on your behalf and synchronize your data. These replicas are active-active, meaning you can write to any table, and read from whatever table is closest to you. This is great for use cases where you need very high performance and require low latency reads and writes to tables that are close to several locations.
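For illustration, here’s a minimal sketch of the UpdateTable request that adds a replica Region to a table, which is how Global Tables replicas are created with the current (2019.11.21) version of the feature. The table name and Region are hypothetical; shown as a plain dict rather than a live boto3 `client.update_table()` call.

```python
# Sketch: add a Global Tables replica in another Region via UpdateTable
# (hypothetical table name and Region).
def build_add_replica_request(table_name, region):
    """Return UpdateTable parameters that create a replica table in `region`."""
    return {
        "TableName": table_name,
        "ReplicaUpdates": [
            {"Create": {"RegionName": region}},
        ],
    }

replica_req = build_add_replica_request("Leaderboard", "eu-west-1")
```

Once the replica is active, the service keeps both tables synchronized, and either one accepts reads and writes.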
In terms of other data durability features, DynamoDB also has backup functionality. Backups work a little differently with DynamoDB. Because of its high availability configuration, you’re not worried about losing a server or the data on it. However, you may want to take backups for compliance purposes or to ensure that you can roll back to a known good state. There are two forms of backups with DynamoDB:
The first option is on-demand backups. You push a button any time you’d like and it creates a full backup of your data. This is mostly used for compliance and archiving purposes.
The second option is point-in-time recovery. This will enable you to go back in time to a database state at any second in the last 35 days. This is best used in cases where a mistake is made and you want to go back in time to before that mistake happened.
You can use one or both of these options or if you’d prefer a centralized backup interface for all of your different databases, DynamoDB also integrates with AWS Backup.
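To show how lightweight both options are, here’s a sketch of the two request shapes: CreateBackup for an on-demand backup, and UpdateContinuousBackups to turn on point-in-time recovery. The table and backup names are hypothetical; with boto3 you would pass these dicts to `client.create_backup()` and `client.update_continuous_backups()` respectively.

```python
# Sketch: the two DynamoDB backup options (hypothetical table/backup names).

# Option 1 - on-demand backup: a full snapshot you trigger yourself,
# typically for compliance or archiving.
on_demand = {
    "TableName": "Leaderboard",
    "BackupName": "leaderboard-audit-snapshot",
}

# Option 2 - point-in-time recovery: once enabled, you can restore the
# table to any second within the last 35 days.
enable_pitr = {
    "TableName": "Leaderboard",
    "PointInTimeRecoverySpecification": {
        "PointInTimeRecoveryEnabled": True,
    },
}
```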
On to the next key phrase with DynamoDB: “infinitely scalable and fast”. The service essentially has no upper limit on how large a table can grow. And regardless of how large the table is, DynamoDB provides fast performance. Unlike a relational database, which can slow down as a table gets bigger, DynamoDB’s performance stays consistent even with tables that are many terabytes in size.
So, when do you use DynamoDB?
AWS published a very helpful blog called “How to determine if Amazon DynamoDB is appropriate for your needs”. I’ll let you read the full article - but I’ll summarize the main key points.
This database is best used for OLTP workloads that need high scalability and data durability. It’s typically recommended if you’re developing a new application, especially if it’s a serverless application. While migrating a legacy application to use DynamoDB is possible, it might not always be worthwhile given the time and effort required.
It’s not recommended for OLAP workloads or for applications that require ad-hoc query access; a relational database is better suited for both of those workload types. You’ll commonly see DynamoDB used in gaming applications, to provide storage for leaderboards; in e-commerce applications, for storing shopping cart information and user profile data; and even in the transportation industry, to store GPS data for ride shares.
Any use case where you need a fully managed, serverless, infinitely scalable OLTP database, and where you have a solid understanding of your access patterns, is a great fit for DynamoDB. That’s it for this one - I’ll see you in the next one!
Danny has over 20 years of IT experience as a software developer, cloud engineer, and technical trainer. After attending a conference on cloud computing in 2009, he knew he wanted to build his career around what was still a very new, emerging technology at the time — and share this transformational knowledge with others. He has spoken to IT professional audiences at local, regional, and national user groups and conferences. He has delivered in-person classroom and virtual training, interactive webinars, and authored video training courses covering many different technologies, including Amazon Web Services. He currently has six active AWS certifications, including certifications at the Professional and Specialty level.