Amazon Kinesis: Managed Real-Time Event Processing

As Big Data evolves, more tools and technologies aimed at helping enterprises cope are coming online. Live data needs special attention, because delayed processing can affect its value: a Twitter trend will attract more attention if it is associated with something going on right now; a logging system alert is only useful while the error still exists. To tame huge volumes of time-sensitive streaming data, AWS created Amazon Kinesis.

Amazon Kinesis is a fully managed, real-time, event-driven processing system that offers highly elastic, scalable infrastructure. It is designed to process massive amounts of real-time data generated from social media, logging systems, click streams, IoT devices, and more.

The open source Apache Kafka project actually shares some functionality with Amazon Kinesis. While Kafka is very fast (and free), it is still a bundled tool that needs installation, management, and configuration. If you would prefer to avoid the extra administrative burden and already have some AWS Cloud investment, then Kinesis may just be your new best friend.

Amazon Kinesis Architecture

Amazon Kinesis architecture building blocks

Data Records

Data Records contain the information from a single event. A data record consists of a sequence number, a partition key, and a data blob.

  • Sequence numbers are assigned by Kinesis when a record is written to the stream. Event consumers process records in sequence number order.
  • A partition key is an identifier chosen by the event producer; Kinesis hashes it to generate the hash key that determines which shard a data record belongs to.
  • Data blobs are the actual payload objects, containing content like log records, tweets, and RFID records. A data blob has no particular format and can be as large as 50KB (see the sketch after this list).
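To make these fields concrete, here is a minimal sketch that inspects them on the Kinesis Java SDK’s Record model (the sequence number and partition key shown are purely illustrative):

```java
import com.amazonaws.services.kinesis.model.Record;

public class DataRecordFields {
    public static void main(String[] args) {
        // A Record as returned by the Kinesis API; the values are illustrative.
        Record record = new Record()
                .withSequenceNumber("49545115243490985018280067714973144582180062593244200961")
                .withPartitionKey("bigdata");

        System.out.println("Sequence number: " + record.getSequenceNumber());
        System.out.println("Partition key:   " + record.getPartitionKey());
        // record.getData() returns the payload as an opaque ByteBuffer
        // (e.g., the raw bytes of a tweet or a log line).
    }
}
```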

Streams

Streams are the core building block of the Amazon Kinesis service. Data records are written to streams by event producers and read from them by event consumers. A stream is composed of one or more shards, each holding a logical subset of the data within the stream. Events in a stream are stored for 24 hours.

Kinesis is meant for real-time data processing; in real-time events, a stale record possesses little value. Amazon Kinesis streams are identified by Amazon Resource Names (ARNs).
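A stream is created with a name and an initial shard count. Here is a minimal sketch using the AWS SDK for Java (the stream name and shard count are hypothetical):

```java
import com.amazonaws.services.kinesis.AmazonKinesisClient;
import com.amazonaws.services.kinesis.model.CreateStreamRequest;

public class CreateStreamExample {
    public static void main(String[] args) {
        // Uses the default AWS credential and region configuration.
        AmazonKinesisClient kinesis = new AmazonKinesisClient();

        CreateStreamRequest request = new CreateStreamRequest();
        request.setStreamName("tweet-stream"); // hypothetical stream name
        request.setShardCount(5);              // initial number of shards

        // The stream can be used once its status transitions to ACTIVE.
        kinesis.createStream(request);
    }
}
```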

Shards

Shards are the units within a stream to which event producers write data records and from which event consumers read them. Each shard receives data records according to a hash key range: Kinesis takes the partition key from each data record, hashes it into a 128-bit hash key, and routes the record to the shard whose range contains that key.
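The mapping can be pictured in a few lines of Java. Kinesis derives the 128-bit hash key from the partition key with MD5; the sketch below assumes a hypothetical stream with two shards splitting the hash space at its midpoint:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ShardRoutingSketch {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        String partitionKey = "bigdata"; // e.g., a Twitter topic name

        // Kinesis hashes the partition key into a 128-bit value using MD5.
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
        BigInteger hashKey = new BigInteger(1, digest);

        // Assume two shards splitting the 128-bit key space at its midpoint.
        BigInteger midpoint = BigInteger.ONE.shiftLeft(127);
        String shard = hashKey.compareTo(midpoint) < 0 ? "shard-0" : "shard-1";
        System.out.println(partitionKey + " -> " + shard);
    }
}
```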

The Kinesis user is responsible for shard allocation, and the number of shards determines the application throughput. According to AWS Kinesis documentation:

Each open shard can support up to 5 read transactions per second, up to a maximum total of 2 MB of data read per second. Each shard can support up to 1000 write transactions per second, up to a maximum total of 1 MB data written per second.

Shards are elastic in nature. You can increase or decrease the number of shards according to your load.

Kinesis Consumers

Kinesis consumers are typically Kinesis applications that run on clusters of EC2 instances. A Kinesis consumer uses the Amazon Kinesis Client Library to read data from a stream’s shards. From the application’s perspective, the library continuously fetches and delivers records as they arrive, so the stream effectively pushes data records to the Kinesis application.

When Kinesis applications are created, they are automatically assigned to a stream, and the stream, in turn, associates the consumers with one or more shards. Consumers typically perform only lightweight processing on data records before submitting them to Amazon DynamoDB, Amazon EMR, Amazon S3, or even a different Kinesis stream for further processing.

Consider a real-life Kinesis example involving a Twitter application: in a Twitter data analysis application, each tweet is a data record, and all the tweets together form the stream (i.e., the Twitter Firehose). Tweets are segregated by topic, so each topic name can serve as a partition key, and all the tweets whose topics hash into the same key range are grouped together in one shard.

Kinesis Operations

Amazon Kinesis currently supports a Java API only. The following operations are performed using the Kinesis client API:

Add Data Record to Stream

Producers call PutRecord to push data to a stream. Each record should be less than 50 KB. The producer creates a PutRecordRequest and passes {streamName, partitionKey, data} as input. You can also force strict ordering of records by calling setSequenceNumberForOrdering and passing the sequence number of the previous record.
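A minimal producer sketch using the Kinesis Java client (the stream name and partition key are hypothetical):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import com.amazonaws.services.kinesis.AmazonKinesisClient;
import com.amazonaws.services.kinesis.model.PutRecordRequest;
import com.amazonaws.services.kinesis.model.PutRecordResult;

public class ProducerExample {
    public static void main(String[] args) {
        AmazonKinesisClient kinesis = new AmazonKinesisClient();

        PutRecordRequest request = new PutRecordRequest();
        request.setStreamName("tweet-stream"); // hypothetical stream name
        request.setPartitionKey("bigdata");    // e.g., a Twitter topic name
        request.setData(ByteBuffer.wrap(
                "a tweet about #bigdata".getBytes(StandardCharsets.UTF_8)));

        // Kinesis assigns the sequence number and the target shard.
        PutRecordResult result = kinesis.putRecord(request);
        System.out.println("Stored in " + result.getShardId()
                + " with sequence number " + result.getSequenceNumber());
    }
}
```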

Get Records from Shards

Retrieving records (up to 1 MB per call) from a shard requires a shard iterator. Create a GetRecordsRequest object and pass it to the getRecords method; the returned GetRecordsResult contains the records along with the next shard iterator to use for the subsequent getRecords call.
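A consumer sketch under the same assumptions (hypothetical stream and shard IDs); note how each response hands back the iterator for the next call:

```java
import com.amazonaws.services.kinesis.AmazonKinesisClient;
import com.amazonaws.services.kinesis.model.GetRecordsRequest;
import com.amazonaws.services.kinesis.model.GetRecordsResult;
import com.amazonaws.services.kinesis.model.GetShardIteratorRequest;
import com.amazonaws.services.kinesis.model.Record;

public class ConsumerExample {
    public static void main(String[] args) throws InterruptedException {
        AmazonKinesisClient kinesis = new AmazonKinesisClient();

        // Start reading from the oldest available record in the shard.
        GetShardIteratorRequest iteratorRequest = new GetShardIteratorRequest();
        iteratorRequest.setStreamName("tweet-stream");      // hypothetical names
        iteratorRequest.setShardId("shardId-000000000000");
        iteratorRequest.setShardIteratorType("TRIM_HORIZON");
        String shardIterator = kinesis.getShardIterator(iteratorRequest).getShardIterator();

        while (shardIterator != null) {
            GetRecordsRequest recordsRequest = new GetRecordsRequest();
            recordsRequest.setShardIterator(shardIterator);
            recordsRequest.setLimit(100);

            GetRecordsResult result = kinesis.getRecords(recordsRequest);
            for (Record record : result.getRecords()) {
                System.out.println("Got record " + record.getSequenceNumber());
            }

            // Each response includes the iterator for the next getRecords call.
            shardIterator = result.getNextShardIterator();
            Thread.sleep(1000); // pace requests to stay under per-shard read limits
        }
    }
}
```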

Resharding Streams

Resharding a stream splits or merges shards to match the dynamic event flow into the Kinesis stream. A single resharding operation can only split one shard into two or merge two shards into one. As AWS bills Kinesis per shard, merging two shards cuts their cost in half (while splitting a shard doubles its cost). Resharding is an administrative process that can be triggered by CloudWatch monitoring metrics.
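Both operations go through the same client API. A sketch, again with hypothetical stream and shard IDs; the split point chosen here is the midpoint of the 128-bit hash key range:

```java
import com.amazonaws.services.kinesis.AmazonKinesisClient;
import com.amazonaws.services.kinesis.model.MergeShardsRequest;
import com.amazonaws.services.kinesis.model.SplitShardRequest;

public class ReshardingExample {
    public static void main(String[] args) {
        AmazonKinesisClient kinesis = new AmazonKinesisClient();

        // Split one shard into two (doubles that shard's capacity and cost).
        SplitShardRequest split = new SplitShardRequest();
        split.setStreamName("tweet-stream");           // hypothetical names
        split.setShardToSplit("shardId-000000000000");
        // New starting hash key: here, the midpoint of the 128-bit range (2^127).
        split.setNewStartingHashKey("170141183460469231731687303715884105728");
        kinesis.splitShard(split);

        // Merge two adjacent shards into one (halves their capacity and cost).
        MergeShardsRequest merge = new MergeShardsRequest();
        merge.setStreamName("tweet-stream");
        merge.setShardToMerge("shardId-000000000001");
        merge.setAdjacentShardToMerge("shardId-000000000002");
        kinesis.mergeShards(merge);
    }
}
```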

Kinesis Connectors

Amazon Kinesis offers three connector types: the S3 Connector, the Redshift Connector, and the DynamoDB Connector.

Kinesis Pricing Model

Amazon Kinesis uses a pay-as-you-go pricing model based on two factors: Shard Hours and PUT Payload Units.

  • Shard Hour. In Kinesis, a shard provides a capacity of 1MB/sec data input and 2MB/sec data output and can support up to 1000 records per second. Users are charged for each shard at an hourly rate, and the number of shards depends on your throughput requirements.
  • PUT Payload Unit. PUT Payload Units are billed at a rate per million units, where one PUT Payload Unit is a 25KB chunk of a record. So, for example, if your record size is 30KB, you are charged 2 PUT Payload Units; if your data record is 1 MB, you are charged for 40 PUT Payload Units.

In the US East (standard) AWS region, a shard hour currently costs $0.015. So, for example, let’s say that your producer produces 100 records per second and each data record is 50 KB. This translates to 5 MB/second of input to your Kinesis stream from the producer. As each shard supports 1 MB/second (1000 KB/second) of input, we need 5 shards to process 5000 KB/second. So our shard cost per hour will be $0.075 (0.015 × 5), and 24 hours of processing would therefore cost us $1.80.

Moreover, we need 2 PUT Payload Units for each data record (1 PUT Payload Unit = one 25 KB chunk). Again, we’re producing 100 records per second, so we are charged for 200 PUT Payload Units per second. In an hour we are charged for 720,000 PUT Payload Units, hence 17,280,000 PUT Payload Units per day. At $0.014 per million PUT Payload Units, the cost will be $0.242 (17,280,000/1,000,000 × 0.014).

So we will be charged a total of about $2.04 per day (1.80 + 0.24) for our data processing.
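To double-check the arithmetic, here is a small sketch that reproduces the calculation above (the $0.015 shard-hour and $0.014 per million PUT Payload Unit rates are the historical prices quoted in this example):

```java
public class KinesisCostEstimate {
    public static void main(String[] args) {
        // Historical example rates quoted above; check current AWS pricing.
        double shardHourRate = 0.015;  // USD per shard-hour
        double putUnitRate = 0.014;    // USD per million PUT Payload Units

        int recordsPerSecond = 100;
        int recordSizeKb = 50;

        // Shards needed: each shard accepts 1 MB/sec (1000 KB/sec) of input.
        int shards = (int) Math.ceil(recordsPerSecond * recordSizeKb / 1000.0);   // 5
        double shardCostPerDay = shards * shardHourRate * 24;                     // $1.80

        // PUT Payload Units: one unit per 25 KB chunk of each record.
        int unitsPerRecord = (int) Math.ceil(recordSizeKb / 25.0);                // 2
        long unitsPerDay = (long) unitsPerRecord * recordsPerSecond * 3600 * 24;  // 17,280,000
        double putCostPerDay = unitsPerDay / 1_000_000.0 * putUnitRate;           // ~$0.242

        System.out.printf("Total: $%.4f/day%n", shardCostPerDay + putCostPerDay);
    }
}
```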

A few Amazon Kinesis use cases:

  • Real-time data processing.
  • Application log processing.
  • Complex Directed Acyclic Graph (DAG) processing.

With the power of real-time data processing through a managed service from AWS, Amazon Kinesis is a perfect tool for storing and analyzing data from social media streams, website clickstreams, financial transaction logs, application or server logs, sensors, and much more.

To gain a better understanding of Amazon Kinesis and get started with building streamed solutions, take a look at Cloud Academy’s intermediate course on Amazon Kinesis.


Have you used Kinesis yet? Why not share your experience?


Written by

Chandan Patra

Cloud Computing and Big Data professional with 10 years of experience in pre-sales, architecture, design, build and troubleshooting with best engineering practices. Specialities: Cloud Computing - AWS, DevOps(Chef), Hadoop Ecosystem, Storm & Kafka, ELK Stack, NoSQL, Java, Spring, Hibernate, Web Service

