
Amazon DynamoDB: Ten Things You Really Should Know

Amazon DynamoDB is a managed NoSQL service with strong consistency and predictable performance that shields users from the complexities of manual setup.

Whether or not you’ve actually used a NoSQL data store yourself, it’s probably a good idea to make sure you fully understand the key design differences between NoSQL (including Amazon DynamoDB) and the more traditional relational database (or “SQL”) systems like MySQL.

First of all, NoSQL does not stand for "Not SQL", but "Not Only SQL". The two are not opposites, but complementary. NoSQL designs deliver faster data operations and can seem more intuitive, while not necessarily adhering to the ACID (atomicity, consistency, isolation, and durability) properties of a relational database.

There are many well-known NoSQL databases available, including MongoDB, Cassandra, HBase, Redis, Amazon DynamoDB, and Riak. Each of those was built for a specific range of uses and will therefore offer different features. We could group those databases into columnar (Cassandra, HBase), key-value store (DynamoDB, Riak), document-store (MongoDB, CouchDB), and graph (Neo4j, OrientDB) categories.

In this post, I’m going to focus on Amazon DynamoDB – the giant of the NoSQL world. I believe it’s become a giant because Amazon.com built it for their own operations. Considering how much was at stake financially, anything less than complete reliability would simply not be tolerated. Software created in such a demanding environment, and with the resources of Amazon.com behind it, is bound to be epic. The result? Fantastic reliability and durability, and blazing fast service.

Like any other AWS product, Amazon DynamoDB was designed for failure (i.e., it has self-recovery and resilience built in). That makes DynamoDB a highly available, scalable, and distributed data store. Here are ten key features that helped make Amazon DynamoDB into a giant.

1. Amazon DynamoDB is a managed, NoSQL database service:

With a managed service, users only interact with the running application itself. You don’t need to worry about things like server health, storage, and network connectivity. With Amazon DynamoDB, AWS provisions and runs the infrastructure for you. Some of DynamoDB’s critical managed infrastructure features include:

  • Automatic data replication across three Availability Zones within a single region.
  • Highly scalable read/write I/O running on IOPS-optimized solid-state drives.
  • A provisioned-throughput model where read and write units can be adjusted at any time based on actual application usage.
  • Data backed up to S3.
  • Integrated with other AWS services like Elastic MapReduce (EMR), Data Pipeline, and Kinesis.
  • Pay-per-use model – you never pay for hardware or services you’re not actually using.
  • Security and access control can be applied using Amazon’s IAM service.

2. Amazon DynamoDB has Predictable Performance:

AWS claims that DynamoDB will deliver highly predictable performance. Considering Amazon’s reputation for service delivery, we tend to take them at their word on this one. You can actually control the quality of the service you’ll get by choosing between strongly consistent (read-after-write) and eventually consistent reads. Similarly, if you want to increase or decrease the read/write throughput your application receives, you can do so through simple API calls. On top of provisioned throughput, DynamoDB also retains up to five minutes of unused capacity as burst capacity, which, like the funds in an emergency bank account, you can draw on during short bursts of activity.
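
As a rough illustration (the article itself doesn’t show SDK code), here is how those two knobs look with the AWS SDK for Python (boto3); the table name, key, and capacity figures below are made up:

import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

# Strongly consistent read: returns the latest committed value,
# at twice the read-capacity cost of an eventually consistent read.
response = dynamodb.get_item(
    TableName='ProductCatalog',          # hypothetical table
    Key={'Id': {'N': '100'}},
    ConsistentRead=True
)

# Adjust provisioned throughput on the fly through a simple API call.
dynamodb.update_table(
    TableName='ProductCatalog',
    ProvisionedThroughput={
        'ReadCapacityUnits': 200,
        'WriteCapacityUnits': 100
    }
)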

3. Amazon DynamoDB is designed for massive scalability

Being an AWS product, you can assume that Amazon DynamoDB will be extremely scalable. With its automatic partitioning model, DynamoDB transparently spreads data across partitions as volumes grow and distributes throughput across them. This requires no intervention from the user.

4. Amazon DynamoDB data types

DynamoDB supports the following data types:

  • Scalar – Number, String, Binary, Boolean, and Null.
  • Multi-valued – String Set, Number Set, and Binary Set.
  • Document – List and Map.

Scalar types are generally well understood. We’ll focus instead on multi-valued and document types. Multi-valued types are sets, which means that the values in this data type are unique. For a months attribute you can choose a String Set with the names of all twelve months – each of which is, of course, unique.
Similarly, document types are meant for representing complex data structures in the form of Lists and Maps. See this example:

{
   "Id": 100,
   "ProductName": "K3 Note",
   "Description": "5.5 inches screen, 4G LTE, octa-core processor, 2GB RAM and 16 GB ROM",
   "MobileType": "Touch",
   "Brand": "Lenovo",
   "Price": 100,
   "Color": [ "White", "Black" ],
   "ProductCategory": "Mobile"
}
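
To make the mapping to DynamoDB’s type system concrete, here is a sketch of how that item could be written with boto3 (not part of the original article). The low-level API tags each attribute with its type: N for Number, S for String, SS for a String Set, and L for a List. The table name and the extra AvailableMonths attribute are hypothetical, added only to show the Set type:

import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

dynamodb.put_item(
    TableName='ProductCatalog',          # hypothetical table
    Item={
        'Id':              {'N': '100'},                    # scalar Number
        'ProductName':     {'S': 'K3 Note'},                # scalar String
        'Description':     {'S': '5.5 inches screen, 4G LTE, octa-core processor, 2GB RAM and 16 GB ROM'},
        'MobileType':      {'S': 'Touch'},
        'Brand':           {'S': 'Lenovo'},
        'Price':           {'N': '100'},
        'Color':           {'L': [{'S': 'White'}, {'S': 'Black'}]},      # document List
        'AvailableMonths': {'SS': ['January', 'February', 'March']},     # multi-valued String Set (illustrative)
        'ProductCategory': {'S': 'Mobile'}
    }
)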

5. Amazon DynamoDB’s Data Model:

DynamoDB uses three basic data model units: Tables, Items, and Attributes. Tables are collections of Items, and Items are collections of Attributes.
Attributes are basic units of information, like key-value pairs. Tables are like tables in relational databases, except that in DynamoDB, tables do not have fixed schemas associated with them. Items are like rows in an RDBMS table, except that DynamoDB requires a Primary Key. The Primary Key must be unique within the table so that DynamoDB can locate the exact item. DynamoDB supports two kinds of Primary Keys:

  • Hash Type Primary Key: a single attribute that uniquely identifies an item. DynamoDB builds a hash index on the attribute to enforce uniqueness. A hash key is mandatory in every DynamoDB table.
  • Hash and Range Type Primary Key: built from two attributes – a hashed index on the hash key attribute and a sorted range index on the range key attribute. This type of primary key enables DynamoDB’s richer query capabilities (see the sketch after this list).
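
For illustration (boto3 again, with hypothetical table and attribute names), a table with a hash-and-range primary key is declared like this; the hash key is marked HASH and the range key RANGE in the key schema:

import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

dynamodb.create_table(
    TableName='OrderHistory',                                 # hypothetical table
    AttributeDefinitions=[
        {'AttributeName': 'CustomerId', 'AttributeType': 'S'},   # hash key
        {'AttributeName': 'OrderDate',  'AttributeType': 'S'},   # range key
    ],
    KeySchema=[
        {'AttributeName': 'CustomerId', 'KeyType': 'HASH'},
        {'AttributeName': 'OrderDate',  'KeyType': 'RANGE'},
    ],
    ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5}
)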

6. Amazon DynamoDB indexes

There are two types of indexes in DynamoDB: the Local Secondary Index (LSI) and the Global Secondary Index (GSI). An LSI shares the table’s hash key and requires a range key, while a GSI can have either a hash key or a hash+range key of its own. GSIs span multiple partitions and are maintained separately from the base table. DynamoDB supports up to five GSIs per table. While creating a GSI, you need to choose its hash key carefully, because that key will be used for partitioning the index.

Which is the right index type to use? Here are two considerations: LSIs limit the total size of the items sharing a single hash key value to 10 GB, and GSIs offer only eventually consistent reads.
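
As a sketch of how an index is used at query time (boto3, with a hypothetical index and attribute name), a GSI is addressed by passing IndexName to a Query; keep in mind the result may lag slightly behind the base table because GSI reads are eventually consistent:

import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

response = dynamodb.query(
    TableName='OrderHistory',                 # hypothetical table
    IndexName='ProductCategory-index',        # hypothetical GSI on ProductCategory
    KeyConditionExpression='ProductCategory = :cat',
    ExpressionAttributeValues={':cat': {'S': 'Mobile'}}
)
for item in response['Items']:
    print(item)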

7. Amazon DynamoDB partitions

In DynamoDB, data is partitioned automatically by its hash key. That’s why you will need to choose a hash key if you’re implementing a GSI. The partitioning logic depends upon two things: table size and throughput.

[Diagram: DynamoDB table partitions]

The number of partitions for a table is calculated by DynamoDB. Although this is transparent to users, you should understand the logic behind it:

# of partitions by throughput = (RCUs / 3000) + (WCUs / 1000)
# of partitions by size = table size in GB / 10 GB
# of partitions (total) = max(# by throughput, # by size), rounded up

(Note: one Read Capacity Unit – RCU – covers one strongly consistent read per second of an item up to 4 KB. One Write Capacity Unit – WCU – covers one write per second of an item up to 1 KB.)
According to this formula, if we have a table size of 16 GB and we have 6000 RCUs and 1000 WCUs, then:

# of partitions by throughput: 6000/3000+1000/1000 = 3

# of partitions by size: 16/10 = 1.6

So, the # of partitions in total: max(1.6, 3) = 3

Therefore, we will require three partitions. The RCUs and WCUs are distributed uniformly across them: each partition gets 6000/3 = 2000 RCUs, 1000/3 ≈ 333 WCUs, and 16/3 ≈ 5.3 GB of data.
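
The same arithmetic as a few lines of Python, in case you want to plug in your own numbers:

from math import ceil

rcu, wcu, size_gb = 6000, 1000, 16

partitions_by_throughput = rcu / 3000 + wcu / 1000   # 2.0 + 1.0 = 3.0
partitions_by_size = size_gb / 10                    # 1.6
partitions = ceil(max(partitions_by_throughput, partitions_by_size))

print(partitions)             # 3
print(rcu / partitions)       # 2000.0 RCUs per partition
print(wcu / partitions)       # ~333 WCUs per partition
print(size_gb / partitions)   # ~5.3 GB per partition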

8. Amazon DynamoDB streams

DynamoDB streams are like transactional logs for a table. According to the DynamoDB Developer’s Guide:

A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.

Streams apply only to tables, and each stream record appears exactly once in a stream. AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. There are all kinds of scenarios where streams can be useful. For instance, in a messaging application, a message or picture posted to a group must be reflected in the message boxes of all the group members; streams are also handy for sending welcome messages to new customers when they sign up for your service.
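
Enabling a stream on an existing table takes a single call; here is a sketch with boto3 (the table name and view type are illustrative), along with the separate client used for reading stream records:

import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

# Turn on a stream that captures both the old and new image of each item.
dynamodb.update_table(
    TableName='Messages',                               # hypothetical table
    StreamSpecification={
        'StreamEnabled': True,
        'StreamViewType': 'NEW_AND_OLD_IMAGES'
    }
)

# Stream records are read through a separate endpoint/client.
streams = boto3.client('dynamodbstreams', region_name='us-east-1')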

9. Amazon DynamoDB integration with Amazon EMR and Redshift

NoSQL and Big Data technologies are often discussed together because they share the same distributed, horizontally scalable architecture and both aim to process high volumes of structured and semi-structured data. In a typical scenario, Elastic MapReduce (EMR) performs complex analysis on datasets stored in DynamoDB. Users will often also use Amazon Redshift for data warehousing, where BI tasks are carried out on data loaded from DynamoDB tables into Redshift.

10. Amazon DynamoDB JavaScript Web Shell

AWS has introduced a web-based user interface known as the DynamoDB JavaScript Shell for local development. You can download the tool (.zip) for Windows here and for *nix systems (.tar.gz) here. You’ll need Java 1.6.x or higher to run this tool.
Steps:

  • Download the file.
  • Extract it to a local folder. In my case, I saved it to C:\AWS_Tools
  • Run the following command:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar

[Screenshot: DynamoDB Local startup output]

  • Access the console in a browser with the URL: http://localhost:8000/shell

The web page will look like this:

[Screenshot: DynamoDB JavaScript Shell interface]

  • Click on the button to get some sample commands.

For example, the createTable API will run:

[Screenshot: createTable call in the JavaScript Shell]

  • After running this, listTables will show you:

[Screenshot: listTables output in the JavaScript Shell]
This is a great tool to perform syntax checking before actually going to production.
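
The same local instance can also be reached programmatically. As a sketch (boto3, not covered in the article), point the client at the local endpoint; DynamoDB Local accepts dummy credentials:

import boto3

# DynamoDB Local listens on http://localhost:8000; credentials can be placeholders.
local = boto3.client(
    'dynamodb',
    endpoint_url='http://localhost:8000',
    region_name='us-east-1',
    aws_access_key_id='fake',
    aws_secret_access_key='fake'
)

print(local.list_tables()['TableNames'])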

With DynamoDB, Amazon has done a great job providing a NoSQL service with strong consistency and predictable performance, while saving users from the complexities of a distributed system. One proof of its success is the many systems (like Riak) that chose to build on the original Dynamo design. With a strong ecosystem, Amazon DynamoDB is something to consider when you are building your next Internet-scale application.

Ready to try it for yourself? Why not use Cloud Academy’s AWS DynamoDB hands-on lab?

 

Written by

Cloud Computing and Big Data professional with 10 years of experience in pre-sales, architecture, design, build, and troubleshooting with best engineering practices. Specialities: Cloud Computing – AWS, DevOps (Chef), Hadoop Ecosystem, Storm & Kafka, ELK Stack, NoSQL, Java, Spring, Hibernate, Web Services.
