Amazon Kinesis: Managed Real-Time Event Processing

As Big Data evolves, more tools and technologies aimed at helping enterprises cope are coming online. Live data needs special attention, because delayed processing can affect its value: a Twitter trend will attract more attention if it is associated with something going on right now; a logging system alert is only useful while the error still exists. To tame huge volumes of time-sensitive streaming data, AWS created Amazon Kinesis.

Amazon Kinesis is a fully managed, real-time, event-driven processing system that offers highly elastic, scalable infrastructure. It is designed to process massive amounts of real-time data generated from social media, logging systems, click streams, IoT devices, and more.

The open source Apache Kafka project shares some functionality with Amazon Kinesis. While Kafka is very fast (and free), it is still a self-hosted tool that you must install, configure, and manage. If you would prefer to avoid that extra administrative burden and already have some AWS Cloud investment, then Kinesis may just be your new best friend.

Amazon Kinesis Architecture

Amazon Kinesis architecture building blocks

Data Records

Data Records contain the information from a single event. A data record consists of a sequence number, a partition key, and a data blob.

  • Sequence numbers are assigned by Kinesis when a record is written. Event consumers process records in sequence-number order.
  • A partition key is an identifier chosen by the event submitter; Kinesis hashes it to produce a hash key, which determines the shard a data record belongs to.
  • Data blobs are the actual payload objects, containing content such as log records, tweets, or RFID readings. Data blobs have no particular format and can be as large as 50 KB.
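The mapping from partition key to shard can be sketched in a few lines. Kinesis hashes each partition key into a 128-bit value (via MD5), and each shard owns a contiguous slice of that hash space. The shard count and key names below are illustrative, not part of any real stream:

```python
import hashlib

NUM_SHARDS = 4
HASH_SPACE = 2 ** 128  # MD5 produces a 128-bit hash key

def shard_for_key(partition_key: str, num_shards: int = NUM_SHARDS) -> int:
    """Map a partition key to a shard by hashing it into the 128-bit space."""
    hash_value = int.from_bytes(hashlib.md5(partition_key.encode()).digest(), "big")
    # Each shard owns an equal, contiguous range of the hash space.
    return hash_value * num_shards // HASH_SPACE

# Records sharing a partition key always land on the same shard.
assert shard_for_key("topic-aws") == shard_for_key("topic-aws")
```

This is why choosing a partition key with enough distinct values matters: a single hot key maps every record to one shard.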

Streams

Streams are the core building block of the Amazon Kinesis service. Event producers write data records to streams, and event consumers read them. A stream is composed of one or more shards, each of which holds a logical subset of the stream's data. Events in a stream are stored for 24 hours.

Kinesis is meant for real-time data processing – and in real-time events, a stale record possesses little value. Amazon Kinesis streams are identified by Amazon Resource Names (ARN).

Shards

Shards are the units to which event producers write data records and from which event consumers read them. Each shard receives the data records for a particular hash key range: Kinesis takes the partition key from each data record, hashes it into a 128-bit hash key, and routes the record to the shard that owns that range.

The Kinesis user is responsible for shard allocation, and the number of shards determines the application throughput. According to AWS Kinesis documentation:

Each open shard can support up to 5 read transactions per second, up to a maximum total of 2 MB of data read per second. Each shard can support up to 1000 write transactions per second, up to a maximum total of 1 MB data written per second.

Shards are elastic in nature. You can increase or decrease the number of shards according to your load.
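Given the per-shard limits quoted above (1 MB/sec write throughput and 1,000 write transactions per second), the number of shards an application needs can be estimated with a small helper (a sketch; function and variable names are my own):

```python
import math

def shards_needed(records_per_sec: float, record_size_kb: float) -> int:
    """Estimate the shard count from the two per-shard write limits."""
    write_kb_per_sec = records_per_sec * record_size_kb
    return max(
        math.ceil(write_kb_per_sec / 1000),   # 1 MB/sec (1000 KB/sec) write limit
        math.ceil(records_per_sec / 1000),    # 1000 write transactions/sec limit
    )

# 100 records/sec at 50 KB each is 5 MB/sec of input, so 5 shards.
assert shards_needed(100, 50) == 5
```

Whichever limit binds first (bytes per second or transactions per second) determines the shard count, which is why many small records can need as many shards as fewer large ones.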

Kinesis Consumers

Kinesis consumers are typically Kinesis applications that run on clusters of EC2 instances. A Kinesis consumer uses the Amazon Kinesis Client Library to read data from a stream's shards; the library continuously fetches data records from the shards and delivers them to the application's record processors.

When Kinesis applications are created, they are assigned to a stream, and the stream, in turn, associates the consumers with one or more shards. Consumers typically perform only lightweight processing on data records before submitting them to Amazon DynamoDB, Amazon EMR, Amazon S3, or even a different Kinesis stream for further processing.

Consider a real-life Kinesis example involving a Twitter application: in a Twitter data-analysis application, each tweet is a data record, and all the tweets together form the stream (i.e., the Twitter Firehose). If tweets are segregated by topic, each topic name can serve as a partition key, and the tweets for the set of topics whose hash keys fall in the same range are grouped together in one shard.

Kinesis Operations

Amazon Kinesis supports the Java API only. The following operations are performed using the Kinesis client API:

Add Data Record to Stream

Producers call PutRecord to push data to a stream (and, via the partition key, to a particular shard). Each record must be less than 50 KB. The producer creates a PutRecordRequest and passes {streamName, partitionKey, data} as input. You can also enforce strict ordering of records by calling setSequenceNumberForOrdering and passing the sequence number of the previous record.
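The put-side semantics can be sketched with a small in-memory toy model (this is not the AWS SDK; the class and its behavior are illustrative stand-ins for the service):

```python
import hashlib
import itertools

MAX_RECORD_BYTES = 50 * 1024  # the 50 KB record limit mentioned above

class ToyStream:
    """In-memory stand-in for a Kinesis stream with a fixed shard count."""

    def __init__(self, name: str, num_shards: int = 2):
        self.name = name
        self.shards = [[] for _ in range(num_shards)]
        self._seq = itertools.count()  # sequence numbers come from the service

    def put_record(self, partition_key: str, data: bytes) -> int:
        if len(data) > MAX_RECORD_BYTES:
            raise ValueError("record exceeds 50 KB limit")
        # Hash the partition key to pick the owning shard.
        h = int.from_bytes(hashlib.md5(partition_key.encode()).digest(), "big")
        shard = self.shards[h * len(self.shards) // 2 ** 128]
        sequence_number = next(self._seq)
        shard.append((sequence_number, partition_key, data))
        return sequence_number

stream = ToyStream("tweets")
seq = stream.put_record("topic-aws", b"hello kinesis")
```

Note that the producer supplies only the partition key and data blob; the sequence number is assigned on the service side, which is what makes per-shard ordering possible.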

Get Records from Shards

Retrieving records (up to 1 MB per call) from a shard requires a shard iterator. Create a GetRecordsRequest object, and call the getRecords method, passing in the request. Then obtain the next shard iterator from the GetRecordsResult to make the next getRecords call.
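The read loop follows an iterator-chaining pattern: each call returns a batch of records plus the iterator for the next call. A toy sketch of that loop (the function below is an illustrative stand-in, not the real GetRecords API):

```python
def get_records(shard, iterator, limit=2):
    """Toy GetRecords: return a batch and the iterator for the next call."""
    batch = shard[iterator:iterator + limit]
    next_iterator = iterator + len(batch)
    return batch, next_iterator

shard = ["rec-0", "rec-1", "rec-2", "rec-3", "rec-4"]
iterator = 0  # in the real API this initial value comes from GetShardIterator
seen = []
while True:
    batch, iterator = get_records(shard, iterator)
    if not batch:
        break
    seen.extend(batch)

assert seen == shard  # the loop drains the shard in sequence order
```

The important point is that you never construct the next iterator yourself; you always carry forward the one returned by the previous call.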

Resharding Streams

Resharding a stream splits or merges shards to match the shard count to the dynamic event flow into the stream. A single resharding operation either splits one shard into two or merges two shards into one. As AWS bills you per shard, merging shards cuts your shard cost in half (while splitting doubles it). Resharding is an administrative process that can be triggered by CloudWatch monitoring metrics.
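Split and merge can be pictured as operations on shard hash key ranges (a sketch only; real shards also carry IDs and parent/child relationships):

```python
def split_shard(shard):
    """Split one shard's (start, end) hash range into two adjacent halves."""
    start, end = shard
    mid = (start + end) // 2
    return (start, mid), (mid + 1, end)

def merge_shards(a, b):
    """Merge two adjacent shards back into one shard covering both ranges."""
    if a[1] + 1 != b[0]:
        raise ValueError("only adjacent shards can be merged")
    return (a[0], b[1])

parent = (0, 2 ** 128 - 1)          # one shard owning the whole hash space
left, right = split_shard(parent)   # doubles capacity (and cost)
assert merge_shards(left, right) == parent  # halves it again
```

This is also why only adjacent shards can be merged: the result must still be a single contiguous hash key range.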

Kinesis Connectors

Amazon Kinesis offers three connector types: an S3 connector, a Redshift connector, and a DynamoDB connector.

Kinesis Pricing Model

Amazon Kinesis uses a pay-as-you-go pricing model based on two factors: Shard Hours and PUT Payload Units.

  • Shard Hour. In Kinesis, a shard provides a capacity of 1 MB/sec data input and 2 MB/sec data output and can support up to 1,000 write transactions per second. Users are charged an hourly rate for each shard; the number of shards depends on their throughput requirements.
  • PUT Payload Unit. PUT Payload Units are billed at a rate per million units. In Kinesis, one PUT payload unit is a 25 KB chunk, rounded up per record. So, for example, a 30 KB record is charged as 2 PUT payload units, and a 1 MB record as 40 PUT payload units.

In the AWS standard region, a shard hour currently costs $0.015. So, for example, let’s say that your producer produces 100 records per second and each data record is 50 KB. This translates to a 5 MB/second input to your Kinesis stream. As each shard supports 1 MB/sec (1000 KB/sec) of input, we need 5 shards. Our shard cost will be $0.075 per hour (0.015 × 5), so 24 hours of processing costs us $1.80.

Moreover, we need 2 PUT Payload Units for each data record (1 PUT Payload Unit = one 25 KB chunk). Again, we’re producing 100 records per second, so we are charged 200 PUT Payload Units per second, which is 720,000 per hour and 17,280,000 per day. At $0.014 per million units, the cost will be $0.24192 per day (17,280,000 / 1,000,000 × 0.014).

So we will be charged a total of roughly $2.04 per day (1.80 + 0.24192) for our data processing.
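The arithmetic above can be reproduced in a few lines, using the rates quoted in this example ($0.015 per shard hour, $0.014 per million PUT payload units):

```python
import math

SHARD_HOUR_USD = 0.015
PUT_UNIT_USD_PER_MILLION = 0.014
PUT_UNIT_KB = 25

records_per_sec = 100
record_kb = 50

# Shard cost: input rate divided by the 1000 KB/sec per-shard limit.
shards = math.ceil(records_per_sec * record_kb / 1000)
shard_cost_per_day = shards * SHARD_HOUR_USD * 24

# PUT payload cost: each record is billed in 25 KB chunks, rounded up.
units_per_record = math.ceil(record_kb / PUT_UNIT_KB)
units_per_day = records_per_sec * units_per_record * 3600 * 24
put_cost_per_day = units_per_day / 1_000_000 * PUT_UNIT_USD_PER_MILLION

total = shard_cost_per_day + put_cost_per_day
# shards = 5, shard cost = $1.80/day, PUT cost ≈ $0.24/day
```

Plugging in your own record rate and size makes it easy to see which component, shard hours or PUT payload units, dominates your bill.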

A few Amazon Kinesis use cases:

  • Real-time data processing.
  • Application log processing.
  • Complex Directed Acyclic Graph (DAG) processing.

With the power of real-time data processing through a managed service from AWS, Amazon Kinesis is a perfect tool for storing and analyzing data from social media streams, website clickstreams, financial transactions logs, application or server logs, sensors, and much more.

To gain a better understanding of Amazon Kinesis and get started with building streamed solutions, take a look at Cloud Academy’s intermediate course on Amazon Kinesis.


Have you used Kinesis yet? Why not share your experience?


Written by

Chandan Patra

Cloud Computing and Big Data professional with 10 years of experience in pre-sales, architecture, design, build and troubleshooting with best engineering practices. Specialities: Cloud Computing - AWS, DevOps(Chef), Hadoop Ecosystem, Storm & Kafka, ELK Stack, NoSQL, Java, Spring, Hibernate, Web Service

