This is a guest post from 47Line Technologies.
In Data Versioning with DynamoDB – An Inside Look Into NoSQL, Part 5, we discussed data versioning, the two reconciliation strategies, and how vector clocks support them. In this article, we will talk about handling failures.
Handling Failures – Hinted Handoff
Even under the simplest of failure conditions, DynamoDB would experience reduced durability and availability if it used the traditional quorum approach. To overcome this, it uses a "sloppy quorum": all WRITE operations are performed on the first N healthy nodes from the preference list, which may not be the first N nodes encountered while traversing the consistent hashing ring.
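The node-selection rule above can be sketched in a few lines. This is an illustrative sketch, not Dynamo's actual API; the names `first_n_healthy` and `is_healthy` are assumptions made for the example:

```python
# Sloppy-quorum node selection (illustrative sketch): walk the key's
# preference list and take the first N *healthy* nodes, skipping any
# that are currently unreachable.

N = 3  # replication factor (assumed value for this example)

def first_n_healthy(preference_list, is_healthy, n=N):
    """Return the first n healthy nodes from the preference list."""
    healthy = [node for node in preference_list if is_healthy(node)]
    return healthy[:n]

# Example: node "A" is down, so "D" (the next node on the ring) takes
# its place among the N write targets.
pref = ["A", "B", "C", "D", "E"]
down = {"A"}
targets = first_n_healthy(pref, lambda node: node not in down)
print(targets)  # ['B', 'C', 'D']
```

Note that with all nodes healthy the result would simply be the first N entries of the preference list; the sloppy quorum only diverges from the strict one under failure.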
Partitioning & Replication of Keys in Dynamo
Consider the above figure: if node A is temporarily unreachable during a WRITE operation, then the replica that would have lived on A is sent to D instead, to maintain the desired availability and durability guarantees. The replica sent to D carries a hint in its metadata identifying the intended recipient (in our case, A).
Hinted replicas are stored in a separate local database, which is scanned periodically to detect whether A has recovered. Upon detection, D sends the replica to A and may then delete the object from its local store without decreasing the total number of replicas. Using hinted handoff, DynamoDB ensures that WRITEs succeed even during temporary node or network failures.
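The handoff mechanism can be sketched as follows. This is a minimal illustration, not Dynamo's implementation; the class and method names (`HintedStore`, `handoff`, `send`) are invented for the example:

```python
# Hinted handoff sketch: replicas written on behalf of a down node carry
# a hint naming the intended recipient; a periodic scan hands them back
# once that node recovers.

class HintedStore:
    def __init__(self):
        self.local = {}    # key -> value (replicas this node owns)
        self.hinted = {}   # key -> (value, intended recipient)

    def write(self, key, value, hint=None):
        if hint is None:
            self.local[key] = value
        else:
            self.hinted[key] = (value, hint)

    def handoff(self, is_healthy, send):
        """Scan hinted replicas; deliver any whose owner has recovered."""
        for key, (value, owner) in list(self.hinted.items()):
            if is_healthy(owner) and send(owner, key, value):
                # Safe to drop locally: the owner now holds the replica,
                # so the total replica count is unchanged.
                del self.hinted[key]

# D stores a replica intended for A while A is down ...
d = HintedStore()
d.write("k1", "v1", hint="A")

# ... and hands it off once A becomes reachable again.
delivered = []
d.handoff(is_healthy=lambda node: node == "A",
          send=lambda node, k, v: delivered.append((node, k, v)) or True)
print(delivered)  # [('A', 'k1', 'v1')]
print(d.hinted)   # {}
```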
Highly available storage systems must also handle the failure of an entire data center. DynamoDB is configured so that each object is replicated across data centers: the preference list of a key is constructed such that its storage nodes are spread across multiple data centers, and these centers are connected via high-speed network links.
Handling Permanent Failures – Replica Synchronization
Hinted Handoff is used to handle transient failures. What if the hinted replica itself becomes unavailable? To handle such situations, DynamoDB implements an anti-entropy protocol to keep the replicas synchronized. A Merkle Tree is used for the purpose of inconsistency detection and minimizing the amount of data transferred.
According to Wikipedia, a "Merkle tree is a tree in which every non-leaf node is labeled with the hash of the labels of its children nodes." In other words, each parent node higher in the tree is a hash of its respective children.
The principal advantage of a Merkle tree is that each branch of the tree can be checked independently without requiring nodes to download the entire tree or the entire data set. Moreover, Merkle trees help in reducing the amount of data that needs to be transferred while checking for inconsistencies among replicas. For instance, if the hash values of the root of two trees are equal, then the values of the leaf nodes in the tree are equal and the nodes require no synchronization. If not, it implies that the values of some replicas are different. In such cases, the nodes may exchange the hash values of children and the process continues until it reaches the leaves of the trees, at which point the hosts can identify the keys that are “out of sync”.
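The build-and-compare process can be illustrated with a minimal sketch. This is not Dynamo's implementation; the function names (`build`, `out_of_sync`) and the choice of SHA-256 are assumptions, and for simplicity both replicas are assumed to hold the same set of keys:

```python
# Minimal Merkle-tree sketch for inconsistency detection. Leaves hash
# individual key/value pairs; each parent hashes the concatenation of
# its children. Equal roots imply the replicas need no synchronization.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build(items):
    """items: sorted (key, value) pairs -> list of levels,
    where levels[0] holds the leaves and levels[-1] holds [root]."""
    level = [h(f"{k}={v}".encode()) for k, v in items]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + (level[i + 1] if i + 1 < len(level) else b""))
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def out_of_sync(a_levels, b_levels):
    """Descend from the roots, exchanging hashes only for branches that
    differ, and return the indices of leaves that are out of sync."""
    suspects = [0] if a_levels[-1] != b_levels[-1] else []
    for depth in range(len(a_levels) - 2, -1, -1):
        next_suspects = []
        for parent in suspects:
            for child in (2 * parent, 2 * parent + 1):
                if (child < len(a_levels[depth])
                        and a_levels[depth][child] != b_levels[depth][child]):
                    next_suspects.append(child)
        suspects = next_suspects
    return suspects

# Two replicas that diverge only at "k2".
r1 = [("k1", "v1"), ("k2", "v2"), ("k3", "v3"), ("k4", "v4")]
r2 = [("k1", "v1"), ("k2", "XX"), ("k3", "v3"), ("k4", "v4")]
t1, t2 = build(r1), build(r2)
print(t1[-1] == t2[-1])     # False: roots differ, so some replica diverged
print(out_of_sync(t1, t2))  # [1] -> only "k2" needs synchronization
```

Only the hashes along the differing branch are exchanged, which is exactly why the scheme minimizes the data transferred during anti-entropy checks.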
DynamoDB uses Merkle trees for anti-entropy as follows: Each node maintains a separate Merkle tree for each key range (the set of keys covered by a virtual node) it hosts. This allows nodes to compare whether the keys within a key range are up-to-date. In this scheme, two nodes exchange the root of the Merkle tree corresponding to the key ranges that they host in common. Subsequently, using the tree traversal scheme described above the nodes determine if they have any differences and perform the appropriate synchronization action. The disadvantage with this scheme is that many key ranges change when a node joins or leaves the system thereby requiring the tree(s) to be recalculated.
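The per-key-range exchange can be sketched as well. Here a plain hash over each range stands in for the Merkle root (a real tree would additionally allow drilling into subranges, as described above); all names are illustrative:

```python
# Per-key-range comparison sketch: each node keeps one tree per key
# range it hosts, and two nodes exchange only the roots of the ranges
# they have in common. A matching root means no transfer is needed.
import hashlib

def range_root(items):
    """Stand-in for the Merkle root over one key range: a single hash
    of its sorted key/value pairs."""
    blob = ";".join(f"{k}={v}" for k, v in sorted(items)).encode()
    return hashlib.sha256(blob).hexdigest()

# Two nodes hosting the same two key ranges, one of which has diverged.
node1 = {"range_a": [("k1", "v1"), ("k2", "v2")],
         "range_b": [("k3", "v3")]}
node2 = {"range_a": [("k1", "v1"), ("k2", "v2")],
         "range_b": [("k3", "STALE")]}

# Exchange one root per common range; only mismatches need traversal.
stale = [r for r in node1.keys() & node2.keys()
         if range_root(node1[r]) != range_root(node2[r])]
print(stale)  # ['range_b']
```

The recalculation cost mentioned above follows directly from this layout: when a node joins or leaves, key ranges are reassigned, and every tree covering a changed range must be rebuilt.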
In the next and the final part of this series, we will discuss Membership and Failure Detection.