How to Use Consistency Models for Amazon Web Services

If you’re interested in learning how consistency models on AWS can help you write stable, reliable applications, then this is the article for you. By following a consistency model, your application’s memory will remain consistent and the results of any operations on its memory should be predictable. (Editor’s Note: This is complex material. If you’d like to brush up on your understanding of storage in AWS, check out this course.)

Consistency Models Create Structure and Rules around Memory to Ensure Application Reliability

In very simple terms, consistency models define rules for the order and visibility of reads and updates.

Distributed systems are large, are replicated across many servers, allow concurrent execution of components, are prone to failure, experience transaction delays, and have no global clock. Objects in a distributed storage system are replicated to avoid single points of failure, to improve both reliability and availability, to keep any single system from being overloaded with transactions, and to give faster access to local copies that avoid communication delays.

But all these virtues of a distributed system come at a price: multiple copies of the data need to be kept identical. This requirement is what makes a suitable consistency model necessary for different distributed services, whether storage, memory, or a NoSQL offering.

Broadly speaking, there are two types of consistency models: Data-centric and client-centric. Let’s take a look at both of them.

Data-Centric Consistency Models

Andrew Tanenbaum and Maarten Van Steen, two computer scientists who are experts in this field, define a consistency model as a contract between the software (processes) and the memory implementation (data store). The model guarantees that if the software follows certain rules, the memory works correctly. Since defining which write operation was the last one is difficult in a system without a global clock, some restrictions must be placed on the values that a read operation can return.

The following data-centric consistency models are listed in descending order of strictness, with the strictest models first:

  • Strict Consistency: Absolute time ordering of all shared accesses matters.
  • Linearizability: All processes must see all shared accesses in the same order. Accesses are furthermore ordered according to a (non-unique) global timestamp.
  • Sequential Consistency: All processes see all shared accesses in the same order. Accesses are not ordered in time.
  • Causal Consistency: All processes see causally-related shared accesses in the same order.
  • FIFO Consistency: All processes see writes from each other in the order they were issued. Writes from different processes may not always be seen in that order.
  • Weak Consistency: Shared data can be counted on to be consistent only after a synchronization is done.
  • Release Consistency: Shared data are made consistent when a critical region is exited.
  • Entry Consistency: Shared data pertaining to a critical region are made consistent when that critical region is entered.

Client-Centric Consistency Models

In a client-centric consistency model, the emphasis is put on how data is seen by the clients. The data seen may vary from client to client if replication is not yet complete. Faster data access is the primary concern, so we might opt for a less strict consistency model such as eventual consistency.

Eventual Consistency

In this approach, the system informally guarantees that, if no new updates are made to a particular piece of data, eventually all reads of that item will return the last updated value. The replica that receives an update sends update messages to all other replicas. While that propagation is in progress, different replicas could return different values if queried, but eventually all of the replicas receive the update and become consistent. This model suits workloads with hundreds of thousands of concurrent reads and writes per second, such as Twitter updates, Instagram photo uploads, Facebook status pages, and messaging systems, where data integrity is not the paramount concern.
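To make the behavior concrete, here is a minimal, purely illustrative Python sketch (not AWS code; the class, replica count, and propagation delay are invented for the example) of a store whose replicas receive a write only after a delay:

```python
import time
import threading

class Replica:
    """A single copy of the data item."""
    def __init__(self):
        self.value = "v1"

class EventuallyConsistentStore:
    """Toy store: a write lands on one replica and propagates to the rest
    asynchronously, so reads may briefly return stale values."""
    def __init__(self, replica_count=3, propagation_delay=0.5):
        self.replicas = [Replica() for _ in range(replica_count)]
        self.propagation_delay = propagation_delay

    def write(self, value):
        self.replicas[0].value = value  # primary replica updated immediately
        threading.Thread(target=self._propagate, args=(value,)).start()

    def _propagate(self, value):
        time.sleep(self.propagation_delay)  # simulated replication lag
        for replica in self.replicas[1:]:
            replica.value = value

    def read(self, replica_index):
        return self.replicas[replica_index].value

store = EventuallyConsistentStore()
store.write("v2")
print(store.read(2))   # likely still "v1" -- replica 2 has not caught up yet
time.sleep(1)
print(store.read(2))   # "v2" -- all replicas have converged
```

Until the propagation finishes, a read against a lagging replica returns the stale value; once it finishes, every replica returns the same value.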

Read-Your-Writes Consistency

RYW (Read-Your-Writes) consistency is achieved when the system guarantees that, once a record has been updated, any attempt by the same client to read the record will return the updated value. A traditional RDBMS generally provides read-your-writes consistency.
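As a quick illustration, here is a minimal sketch using SQLite as a stand-in for a single-node RDBMS (the table and values are invented for the example): once the UPDATE is committed, a subsequent SELECT returns the new value.

```python
import sqlite3

# In-memory SQLite database stands in for a single-node RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100)")
conn.commit()

# Update the record...
conn.execute("UPDATE accounts SET balance = 150 WHERE id = 1")
conn.commit()

# ...and a subsequent read by the same client sees the updated value:
# read-your-writes consistency.
balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(balance)  # 150
conn.close()
```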

Read-after-Write Consistency

Read-after-write consistency is stricter than eventual consistency: a newly inserted data item or record is immediately visible to all clients. Note that this applies only to new data; updates and deletions are not covered by this model.

Amazon S3 Consistency Models

Amazon S3 provides read-after-write consistency for PUTs of new objects in your S3 bucket and eventual consistency for overwrite PUTs and DELETEs in all regions. So, if you add a new object to your bucket, you and your clients will see it immediately. But if you overwrite an object, it might take some time for its replicas to be updated, which is why the eventual consistency model applies.

Amazon S3 provides high availability by replicating data across many servers and Availability Zones (AZs). Data integrity must be maintained whether a new object is added or an existing object is updated or deleted. The behavior in each case is as follows (a code sketch follows the list):

  • A new PUT request is made. The object might not appear in a listing if queried immediately, until the change has propagated to all of the servers and AZs. The read-after-write consistency model applies here.
  • An UPDATE request is made. Because the eventual consistency model applies to updates, a query to list the object might return an old value.
  • A DELETE request is made. Because the eventual consistency model applies to DELETEs, a query to list or read the object might still return the deleted object.
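Below is a minimal boto3 sketch of the first two scenarios; the bucket name and object key are placeholders, and the behavior noted in the comments is the consistency model described above:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"    # placeholder bucket name
key = "reports/new-object.txt"  # placeholder key for a brand-new object

# PUT a brand-new object...
s3.put_object(Bucket=bucket, Key=key, Body=b"hello, consistency")

# ...a GET of that new key benefits from read-after-write consistency,
# so the object body is returned right away.
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
print(body)

# Overwriting the same key is only eventually consistent under this model,
# so a GET issued immediately after the overwrite may still return the
# previous version for a short time.
s3.put_object(Bucket=bucket, Key=key, Body=b"hello again")
```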

Amazon DynamoDB Consistency Models

Amazon DynamoDB is one of the most popular NoSQL services from AWS. NoSQL storage is inherently distributed. To enable high availability and data durability, Amazon DynamoDB stores three geographically distributed replicas of each table. A write operation in DynamoDB adheres to eventual consistency. A read operation (GetItem, BatchGetItem, Query, or Scan) on a DynamoDB table is eventually consistent by default, but you can request a strongly consistent read to get the most recent data. Note that a strongly consistent read consumes twice the read capacity units of an eventually consistent read. In general, eventually consistent reads are recommended, because change propagation in DynamoDB is very fast (DynamoDB uses SSDs for low latency) and you will usually get the same result at half the cost of a strongly consistent read.
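As a sketch of how the two read modes are requested through boto3 (the table name and key below are placeholders), the ConsistentRead parameter on GetItem switches between them:

```python
import boto3

dynamodb = boto3.client("dynamodb")
table_name = "Orders"             # placeholder table name
key = {"OrderId": {"S": "1001"}}  # placeholder primary key

# Default behavior: eventually consistent read (cheaper, may be slightly stale).
eventual = dynamodb.get_item(TableName=table_name, Key=key)

# Strongly consistent read: returns the most recent committed value,
# but consumes twice the read capacity units.
strong = dynamodb.get_item(TableName=table_name, Key=key, ConsistentRead=True)

print(eventual.get("Item"))
print(strong.get("Item"))
```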

Conclusion

Phew! That was a lot of information. I hope you now have at least some idea of the different types of consistency models. AWS's distributed architecture means its services have to adopt the consistency models that best balance the performance and the consistency of their data or objects.

Want to learn more? Try Cloud Academy for free for 7 days. Here are a few courses and learning paths that might interest you:

You’ll learn everything you need to know to successfully develop reliable and dependable AWS applications – as well as pass AWS certification exams on the first try. We look forward to working together with you to upgrade your career!
