
Real-Time Messaging and Kinesis Data Streams

Let’s begin by considering the idea of real-time processing. Real-time describes a level of computing performance where data transfers and processing must complete within a very short time for the results to be meaningful. The stock market operates in real time when it publishes stock prices so that traders can act on them accordingly.

The wide variety of devices in the Internet of Things space constantly send data to be ingested and processed so that actionable results are possible. Think of a home surveillance system sending an alert that a motion detector was triggered inside the house while you are on vacation. The alert is only useful if you or the responding agency receives it as close to immediately as possible.

This type of real-time functionality has become more popular and more necessary as compute systems and solutions continue to evolve.

Streaming applications are a common use case where the speed of data transfer has a direct impact on the customer experience. It’s hard to imagine keeping a customer’s attention if the music or video being streamed does not play smoothly.

Both the velocity and the volume of data transfer become critical factors in real-time data processing applications. Traditional messaging, as implemented by queues, topics, or a combination of the two, can rarely move data in real time, much less handle messages larger than 256 KB.

For this type of application we need a different kind of messaging system, one that allows large data sets to be ingested and stored. That’s the purpose of Amazon Kinesis Data Streams.

Kinesis Data Streams is a real-time data collection messaging service that keeps a copy of all the data received, in the order received, for 24 hours by default and for up to 365 days (8,760 hours) if configured accordingly using the IncreaseStreamRetentionPeriod and DecreaseStreamRetentionPeriod operations. The retention period is specified in hours, and you can find the current retention period using the DescribeStream operation.
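As a rough illustration of those operations, here is how the retention period might be adjusted and inspected with the AWS SDK for Python (boto3); the stream name is a placeholder for this sketch.

    import boto3

    kinesis = boto3.client("kinesis")

    # Extend retention from the 24-hour default to 7 days (168 hours).
    kinesis.increase_stream_retention_period(
        StreamName="example-stream",   # hypothetical stream name
        RetentionPeriodHours=168,
    )

    # DescribeStream reports the current retention period, among other details.
    description = kinesis.describe_stream(StreamName="example-stream")
    print(description["StreamDescription"]["RetentionPeriodHours"])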

Kinesis Data Streams enables the processing of large data sets in real time and the ability to read and replay records to multiple consumer applications.

A stream is made of one or more “shards,” and each shard stores data records in sequence. A data record is built from a partition key, a sequence number, and the actual data, which can be up to 1 MB.

Producers put data into Kinesis Data Streams, and consumers process that data, typically by using the Kinesis Client Library. Consumers can be applications running on fleets of EC2 instances; in this case the Kinesis Client Library is compiled into your application and enables decoupled, fault-tolerant, and load-balanced consumption of records from the stream. The Kinesis Client Library is different from the Kinesis Data Streams API available in the AWS SDK. The Kinesis Client Library ensures that for every shard there is a record processor running and processing that shard. This simplifies reading data from a stream by decoupling your record-processing logic from the work of connecting to and reading from the stream.
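The Kinesis Client Library itself is a separate library (Java-first, with bindings for other languages), so as a rough sketch of what a consumer does under the hood, here is the equivalent read path using the low-level Kinesis Data Streams API in boto3; the stream name is a placeholder and error handling is omitted.

    import boto3

    kinesis = boto3.client("kinesis")

    # Find a shard and start reading from the oldest available record.
    stream = kinesis.describe_stream(StreamName="example-stream")
    shard_id = stream["StreamDescription"]["Shards"][0]["ShardId"]

    iterator = kinesis.get_shard_iterator(
        StreamName="example-stream",
        ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]

    # Each GetRecords call returns a batch of records plus the next iterator to use.
    batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in batch["Records"]:
        print(record["PartitionKey"], record["Data"])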

It is worth noting that the Kinesis Client Library uses a DynamoDB table to store control data, and it creates one table per application that is processing data from a stream. The library can be run on EC2 instances, on Elastic Beanstalk, and even on servers in your own data center.

Kinesis Data Streams can automatically encrypt data as a producer delivers it into a stream, using AWS KMS keys for server-side encryption.
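Server-side encryption can also be enabled on an existing stream. Here is a minimal boto3 sketch, assuming a hypothetical stream name and KMS key alias:

    import boto3

    kinesis = boto3.client("kinesis")

    # Enable server-side encryption on an existing stream with a KMS key.
    # Both the stream name and the key alias are placeholders.
    kinesis.start_stream_encryption(
        StreamName="example-stream",
        EncryptionType="KMS",
        KeyId="alias/example-kinesis-key",
    )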

Typical use cases include collecting data in real time, such as application log data, social media data, real-time game dashboards and leaderboards, stock market data feeds, clickstream data from websites, and many Internet of Things data processing applications.

The time it takes for a record to be put into a stream and become available for retrieval is called the put-to-get latency, and in Kinesis Data Streams it is less than one second. A Kinesis Data Streams application can start consuming data from the stream almost immediately after data starts arriving.

A Kinesis data stream is made up of one or more shards for writing and ingesting data from producers. A shard can handle 1,000 records per second. The data capacity of a stream is proportional to the number of shards it uses, and with enough shards you can collect gigabytes of data per second from tens of thousands of sources.

A shard is a sequence of records in a stream. A stream consists of one or more shards, each of which has a fixed capacity. Each shard can be written to at a rate of 1,000 records per second, up to a maximum write throughput of 1 MB per second. For reads, a shard can sustain 5 transactions per second, up to a maximum of 2 MB per second. Increasing the data rate of a stream is a matter of increasing the number of shards in it.
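To make that capacity math concrete, here is a small helper that estimates how many shards a provisioned stream would need for a given write workload; the per-shard limits are the ones quoted above, and the example numbers are purely illustrative.

    import math

    # Per-shard write limits for Kinesis Data Streams (provisioned mode).
    MAX_RECORDS_PER_SEC = 1_000
    MAX_BYTES_PER_SEC = 1_000_000  # 1 MB per second

    def shards_needed(records_per_sec: int, avg_record_bytes: int) -> int:
        """Estimate the shards required to absorb a given write throughput."""
        by_records = math.ceil(records_per_sec / MAX_RECORDS_PER_SEC)
        by_bytes = math.ceil(records_per_sec * avg_record_bytes / MAX_BYTES_PER_SEC)
        return max(by_records, by_bytes)

    # Example: 5,000 records per second at 2,000 bytes each is 10 MB/s, so 10 shards.
    print(shards_needed(5_000, 2_000))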

Inside a shard, data is written as records, and each record can be up to 1 MB. The anatomy of a record consists of three parts, illustrated in the short example after this list:

1) A partition key, which is used to group data by shard within a stream. The partition key determines which shard the record belongs to.

2) A sequence number, which is unique per partition key within a shard. Sequence numbers for the same partition key help maintain the order in which records arrived, and they increase over time for the same partition key.

3) The actual data for the record, up to 1 MB. It’s important to note that the data in a record is not inspected, interpreted, or changed in any way by Kinesis Data Streams; it’s up to the consumer applications to perform these operations if needed.
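As a hedged sketch of how those parts appear on the producer side with boto3: the producer supplies the partition key and the data blob, and Kinesis Data Streams assigns the sequence number and returns it in the response. The stream name and keys below are placeholders.

    import boto3
    import json

    kinesis = boto3.client("kinesis")

    # First record for this partition key.
    first = kinesis.put_record(
        StreamName="example-stream",
        PartitionKey="player-42",
        Data=json.dumps({"event": "score", "points": 100}).encode(),
    )

    # Second record for the same partition key; passing the previous sequence
    # number asks the service to assign a strictly higher one.
    second = kinesis.put_record(
        StreamName="example-stream",
        PartitionKey="player-42",
        Data=json.dumps({"event": "score", "points": 250}).encode(),
        SequenceNumberForOrdering=first["SequenceNumber"],
    )

    # Same shard, increasing sequence numbers.
    print(first["ShardId"], first["SequenceNumber"])
    print(second["ShardId"], second["SequenceNumber"])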

The capacity of a Kinesis data stream can be configured in “on-demand” capacity mode or “provisioned” capacity mode. Using on-demand capacity, Kinesis Data Streams automatically manages the number of shards in order to provide the throughput needed by your workload. Throughput is adjusted up or down according to the needs of your application, and you are billed for the throughput you actually use.

In provisioned capacity mode you specify the number of shards for the data stream. You can increase or decrease the number of shards as needed, and you are billed for the number of shards at an hourly rate. The re-sharding operations include splitting shards and merging shards to increase and decrease the shard count. Keep in mind that the total capacity of the stream is the sum of the capacities of all of its shards.
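As a rough boto3 sketch of how this looks in practice, a provisioned stream can be re-sharded to a target shard count, or switched to on-demand mode entirely; the stream name is a placeholder.

    import boto3

    kinesis = boto3.client("kinesis")

    # Provisioned mode: scale to 4 shards; uniform scaling splits or merges
    # shards behind the scenes to reach the target count.
    kinesis.update_shard_count(
        StreamName="example-stream",
        TargetShardCount=4,
        ScalingType="UNIFORM_SCALING",
    )

    # Alternatively, switch the stream to on-demand capacity mode.
    stream_arn = kinesis.describe_stream(
        StreamName="example-stream"
    )["StreamDescription"]["StreamARN"]
    kinesis.update_stream_mode(
        StreamARN=stream_arn,
        StreamModeDetails={"StreamMode": "ON_DEMAND"},
    )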

In short, Kinesis Data Streams allows real-time processing of streaming big data and the ability to read and replay records to multiple Amazon Kinesis applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications that read from the same Amazon Kinesis stream for purposes such as counting, aggregating, and filtering.

Difficulty: Intermediate
Duration: 1h
Description

This course covers the core learning objectives required to meet the 'Designing for Disaster Recovery & High Availability in AWS - Level 2' skill.

Learning Objectives:

  • Analyze the amount of resources required to implement a fault-tolerant architecture across multiple AWS Availability Zones
  • Evaluate an effective AWS disaster recovery strategy to meet specific business requirements
  • Understand the SLAs of AWS services to ensure the high availability of a given AWS solution
  • Analyze which AWS services can be leveraged to implement a decoupled solution
About the Author

Stuart has been working within the IT industry for two decades, covering a huge range of topic areas and technologies, from data center and network infrastructure design to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.