This course provides an introduction to Amazon Kinesis. It covers data streaming concepts, the business problems streaming addresses, and how to stream data into the AWS cloud.
Amazon Kinesis is a collection of four services and related features: Kinesis Data Streams, Kinesis Data Firehose, Kinesis Video Streams, and Kinesis Data Analytics. This course provides a high-level overview of all of them.
Learning Objectives
When finished with this course, you will have a solid understanding of Amazon Kinesis, know use cases for each of its components, and be aware of the types of business problems each component addresses.
Intended Audience
This course is intended for people who want to learn about Amazon Kinesis, what it does, and why it's important.
Prerequisites
No prior knowledge of Amazon Kinesis or streaming data is required for this course, but you should have some basic experience with the AWS platform.
Amazon Kinesis was designed to address the complexity and costs of streaming data into the AWS cloud.
Kinesis makes it easy to collect, process, and analyze various types of data streams such as event logs, social media feeds, clickstream data, application data, and IoT sensor data in real time or near real-time.
Access to Kinesis is controlled using AWS Identity and Access Management, IAM.
With the AWS Key Management Service, KMS, data is automatically protected in the stream as well as when it is placed inside an AWS storage service such as Amazon S3 or Amazon Redshift.
Data in transit is protected using TLS, the Transport Layer Security protocol.
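To make this concrete, here's a minimal sketch, using the AWS SDK for Python (boto3), of turning on server-side encryption for an existing stream. The stream name is a placeholder, and the key shown is the AWS-managed KMS key for Kinesis.

```python
# A minimal sketch: enable server-side encryption on an existing stream.
import boto3

kinesis = boto3.client("kinesis")

kinesis.start_stream_encryption(
    StreamName="example-stream",   # placeholder stream name
    EncryptionType="KMS",
    KeyId="alias/aws/kinesis",     # AWS-managed key; a customer key ARN also works
)
```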
Amazon Kinesis is composed of four services: Kinesis Video Streams, Kinesis Data Streams, Kinesis Data Firehose, and Kinesis Data Analytics.
Kinesis Video Streams is used to do stream processing on binary-encoded data, such as audio and video.
Kinesis Data Streams, Kinesis Data Firehose, and Kinesis Data Analytics are used to stream base64 text-encoded data. This text-based information includes sources such as logs, clickstream data, social media feeds, financial transactions, in-game player activity, geospatial services, and telemetry from IoT devices.
Please note that, at the time this content was written, the course information was accurate. AWS implements hundreds of updates every month as part of its ongoing drive to innovate and enhance its services.
As a result, minor discrepancies may appear in the course content over time. Here at Cloud Academy, we strive to keep our courses up to date in order to provide the best training available.
If you notice any information that is outdated, please contact us at support@cloudacademy.com.
This will allow us to update the course during the next release cycle. I would love to hear from you.
Tell me what you learned, what could use some attention, or what worked really well for you. If you discover a new and better way to do something, I'd love to learn about it. Especially if it makes life easier. I'm not lazy, I'm efficient. They look the same but we both know they're different. Right?
Generally speaking, streaming data frameworks are described as having five layers: Source, Stream Ingestion, Stream Storage, Stream Processing, and Destination.
Using Kinesis Data Streams, the process looks like this.
Data is generated by one or more sources including mobile devices, meters in smart homes, click streams, IoT sensors, or logs.
At the Stream Ingestion Layer data is collected by one or more Producers, formatted as Data Records, and put into a stream.
The Kinesis Data Stream is the Stream Storage Layer, a high-speed buffer that stores data for a minimum of 24 hours and, as of November 2020, up to 365 days; 24 hours is the default.
Inside Kinesis Data Streams, the Data Records are immutable. Once stored, they cannot be modified. Updates to data require a new record to be put into the stream. Data is also not removed from the stream; it can only expire.
The Stream Processing Layer is managed by Consumers. Consumers are also known as Amazon Kinesis Data Streams Applications and process data contained inside a stream.
Consumers send Data Records to the Destination Layer. This can be something like a Data Lake, a Data Warehouse, durable storage, or even another stream.
I'd like to take a few minutes to describe each of the four Amazon Kinesis Streaming services.
Amazon Kinesis Video Streams is designed to stream binary-encoded data into AWS from millions of sources. Traditionally, this is audio and video data, but it can be any type of binary-encoded time-series data. It has video in its name because video is the primary use case.
The AWS SDKs make it possible to securely stream data to AWS for playback, storage, analytics, machine learning and other processing.
Data can be ingested from devices such as smartphones, security cameras, edge devices, RADAR, LIDAR, drones, satellites, and dash cams.
Kinesis Video Streams supports the open-source project WebRTC. This allows for two-way, real-time media streaming between web browsers, mobile applications, and connected devices.
Amazon Kinesis Data Streams is a highly customizable streaming solution available from AWS.
Highly customizable means that all parts involved with stream processing--data ingestion, monitoring, scaling, elasticity, and consumption--are done programmatically when creating a stream. AWS will provision resources only when requested.
One important takeaway here is that Kinesis Data Streams does not have the ability to do Auto Scaling. If you need to scale your streams, it is on you to build it into your solution.
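For example, one piece of a do-it-yourself scaling solution might be a call to the UpdateShardCount API. Here is a minimal boto3 sketch; the stream name and target shard count are placeholders.

```python
# A minimal sketch: resize a stream yourself by calling UpdateShardCount.
import boto3

kinesis = boto3.client("kinesis")

kinesis.update_shard_count(
    StreamName="example-stream",    # placeholder stream name
    TargetShardCount=4,             # e.g., doubling from 2 shards to 4
    ScalingType="UNIFORM_SCALING",  # currently the only supported scaling type
)
```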
To facilitate the development, management, and usage of Kinesis Data Streams, AWS provides APIs, the AWS SDKs, the AWS CLI, the Kinesis Agent for Linux, and the Kinesis Agent for Windows.
Producers put Data Records into a Data Stream.
Kinesis Producers can be created using the AWS SDKs, the Kinesis Agent, the Kinesis APIs, or the Kinesis Producer Library, KPL.
Originally, the Kinesis Agent was only for Linux. However, AWS has released the Kinesis Agent for Windows.
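Here is a minimal sketch of a producer built with the AWS SDK for Python (boto3). The stream name and the record's contents are placeholders for illustration.

```python
# A minimal producer sketch: put one record into a stream.
import json
import boto3

kinesis = boto3.client("kinesis")

response = kinesis.put_record(
    StreamName="example-stream",   # placeholder stream name
    Data=json.dumps({"sensor": "thermostat-1", "temperature": 21.5}).encode(),
    PartitionKey="thermostat-1",   # determines which shard receives the record
)

# Kinesis picks the shard from the partition key and assigns the sequence number.
print(response["ShardId"], response["SequenceNumber"])
```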
A Kinesis Data Stream is a set of Shards. A shard contains a sequence of Data Records. Data Records are composed of a Sequence Number, a Partition Key, and a Data Blob, and they are stored as an immutable sequence of bytes.
In Amazon Kinesis, Kinesis Data Streams is a stream storage layer.
Data Records in a Kinesis Data Stream are immutable--they cannot be updated or deleted--and are available in the stream for a finite amount of time ranging between 24 hours and, as of November 2020, 8,760 hours. This translates to 365 days.
Originally, the default expiration was 24 hours, and retention could be extended up to 7 days for an additional charge.
Data Records stored beyond 24 hours and up to 7 days are billed at an additional rate for each shard hour. With the November 2020 update, data stored beyond 7 days is billed per gigabyte per month.
The retention period is configured when creating a stream and can be updated using the IncreaseStreamRetentionPeriod() and DecreaseStreamRetentionPeriod() API calls.
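As a sketch, extending retention with boto3 might look like this; the stream name is a placeholder, and 168 hours is just an example value.

```python
# A minimal sketch: extend retention from the default 24 hours to 7 days.
import boto3

kinesis = boto3.client("kinesis")

kinesis.increase_stream_retention_period(
    StreamName="example-stream",   # placeholder stream name
    RetentionPeriodHours=168,      # any value from 24 up to 8,760 hours
)

# Shortening retention works the same way in the other direction:
# kinesis.decrease_stream_retention_period(
#     StreamName="example-stream", RetentionPeriodHours=24)
```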
There is also a charge for retrieving data older than 7 days from a Kinesis Data Stream using the GetRecords() API call.
There is no charge for long-term data retrieval when using an Enhanced Fan Out consumer with the SubscribeToShard() API.
Consumers--Amazon Kinesis Data Streams Applications--get records from Kinesis Data Streams and process them. Custom applications can be created using the AWS SDKs, the Kinesis API, or the KCL, the Kinesis Client Library.
There are two types of consumers that can get data from a Kinesis Data Stream. The Classic Consumer will Pull data from the Stream. I have seen this also described as a Polling mechanism.
There's a limit to the number of read requests per second and to the amount of data consumers can pull out of a shard each second. Adding consumer applications to a shard means dividing that fixed throughput between them.
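To illustrate the pull model, here is a minimal boto3 sketch of a classic consumer polling a single shard. The stream name is a placeholder; a production consumer would typically use the KCL, which handles checkpointing and multiple shards for you.

```python
# A sketch of a classic, polling consumer reading one shard.
import time
import boto3

kinesis = boto3.client("kinesis")

# Grab the first shard in the stream for this single-shard example.
shard_id = kinesis.list_shards(StreamName="example-stream")["Shards"][0]["ShardId"]

iterator = kinesis.get_shard_iterator(
    StreamName="example-stream",       # placeholder stream name
    ShardId=shard_id,
    ShardIteratorType="TRIM_HORIZON",  # start from the oldest available record
)["ShardIterator"]

while iterator:
    result = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in result["Records"]:
        print(record["SequenceNumber"], record["Data"])
    iterator = result.get("NextShardIterator")
    time.sleep(1)  # pull politely: per-shard read limits are shared by all pollers
```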
As of version 2 of the KCL, there's now a Push method called Enhanced Fan Out. With Enhanced Fan Out, consumers can subscribe to a shard. This results in data being pushed automatically from the shard into a consumer application. Because consumers are not polling the shard for data, these shared limits are removed and every consumer gets 2 megabytes per second of provisioned throughput per shard.
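For comparison, here is a sketch of the Enhanced Fan Out flow with boto3: register a consumer, wait for it to become active, then subscribe to a shard so records are pushed over HTTP/2. The stream ARN, consumer name, and shard ID are placeholders.

```python
# A sketch of an Enhanced Fan Out consumer: register, then subscribe.
import time
import boto3

kinesis = boto3.client("kinesis")

consumer = kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/example-stream",
    ConsumerName="example-consumer",
)["Consumer"]

# The consumer starts out CREATING; subscribe only once it is ACTIVE.
while kinesis.describe_stream_consumer(ConsumerARN=consumer["ConsumerARN"])[
        "ConsumerDescription"]["ConsumerStatus"] != "ACTIVE":
    time.sleep(1)

subscription = kinesis.subscribe_to_shard(
    ConsumerARN=consumer["ConsumerARN"],
    ShardId="shardId-000000000000",       # placeholder shard ID
    StartingPosition={"Type": "LATEST"},
)

# Records now arrive as a push-based event stream (re-subscribe every 5 minutes).
for event in subscription["EventStream"]:
    for record in event["SubscribeToShardEvent"]["Records"]:
        print(record["SequenceNumber"])
```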
It's time to talk a little about Firehose.
Amazon Kinesis Data Firehose is a data streaming service from AWS like Kinesis Data Streams. However, while Kinesis Data Streams is highly customizable, Kinesis Data Firehose is fully managed and is really a streaming delivery service for data.
Ingested data can be dynamically transformed and is automatically delivered to a data store, with the service scaling automatically as needed.
In my courseware, I prefer to state what things are instead of saying what they're not. To me, it's important to think of what is possible with the tool (or tools) I'm using.
However, sometimes, I need to explain what something isn't. In this case, Kinesis Data Firehose is not a streaming storage layer in the way that Kinesis Data Streams is.
Kinesis Data Firehose uses Producers to load data into streams in batches and, once inside the stream, the data is delivered to a data store. There is no need to develop Consumer applications and have custom code process data in the Data Firehose stream.
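A minimal boto3 sketch of a Firehose producer looks like this; the delivery stream name and payload are placeholders, and note there is no consumer code to write.

```python
# A minimal Firehose producer sketch: put one record into a delivery stream.
import json
import boto3

firehose = boto3.client("firehose")

firehose.put_record(
    DeliveryStreamName="example-delivery-stream",   # placeholder name
    Record={"Data": json.dumps({"event": "page_view", "page": "/home"}).encode()},
)
```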
Unlike Kinesis Data Streams, Amazon Kinesis Data Firehose buffers incoming streaming data before delivering it to its destination. The buffer size and buffer interval are chosen when creating a delivery stream.
The buffer size is in megabytes and has different ranges depending on the destination. The buffer interval can range from 60 seconds to 900 seconds.
Essentially, data buffers inside the stream and leaves the buffer either when the buffer is full or when the buffer interval expires. For this reason, Kinesis Data Firehose is considered a near real-time streaming solution.
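As an illustration, here is a sketch of creating a delivery stream with explicit buffering hints using boto3 and an S3 destination. The role ARN, bucket ARN, and names are placeholders, and 5 megabytes or 300 seconds, whichever fills first, is just an example configuration.

```python
# A sketch: create a delivery stream with explicit buffering hints.
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="example-delivery-stream",   # placeholder name
    DeliveryStreamType="DirectPut",                 # producers write directly
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/example-firehose-role",
        "BucketARN": "arn:aws:s3:::example-bucket",
        "BufferingHints": {
            "SizeInMBs": 5,             # deliver once 5 MB accumulates...
            "IntervalInSeconds": 300,   # ...or every 5 minutes, whichever is first
        },
    },
)
```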
Originally, Kinesis Data Firehose could deliver data to four data stores: Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, or Splunk.
In 2020, this was expanded to include generic HTTP endpoints as well as HTTP endpoints for the 3rd-party providers Datadog, MongoDB Cloud, and New Relic.
Another difference between Kinesis Data Streams and Kinesis Data Firehose is that Kinesis Data Firehose will automatically scale as needed.
Kinesis Data Firehose can convert the format of your input data from JSON to Apache Parquet or Apache ORC before storing the data in Amazon S3.
Parquet and ORC are columnar data formats that save space and enable faster queries compared to row-oriented formats like JSON.
Kinesis Data Firehose can also invoke Lambda functions to transform incoming source-data and deliver the transformed data to its destination.
For example, if data is in a format other than JSON--such as comma-separated values--use AWS Lambda to transform it to JSON first.
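As a sketch, a transformation function for that scenario might look like the following. The two-column CSV layout and field names are invented for illustration; the recordId, result, and base64-encoded data in the response are the shape Kinesis Data Firehose expects back from a transformation Lambda.

```python
# A sketch of a Firehose transformation Lambda: convert CSV lines to JSON.
import base64
import json

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        # Firehose hands each record to Lambda as base64-encoded bytes.
        csv_line = base64.b64decode(record["data"]).decode("utf-8").strip()
        device_id, temperature = csv_line.split(",")  # assumed layout: id,temp

        transformed = json.dumps(
            {"device_id": device_id, "temperature": float(temperature)}
        ) + "\n"

        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```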
There is no free tier for using Kinesis Data Firehose. However, costs are only incurred when data is inside a Firehose stream. There is no bill for provisioned capacity, only used capacity.
The fourth, and final, primary service inside Amazon Kinesis is Data Analytics.
Kinesis Data Analytics can read from a stream in real time and perform aggregation and analysis on data while it is in motion.
It does this using SQL queries or Apache Flink applications written in Java or Scala to perform time-series analytics, feed real-time dashboards, and create real-time metrics.
When using Kinesis Data Firehose with Kinesis Data Analytics, data records can only be queried using SQL.
Apache Flink applications written in Java or Scala are only available for Kinesis Data Streams.
Kinesis Data Analytics has built-in templates and operators for common processing functions to organize, transform, aggregate, and analyze data at scale.
Use cases include ETL, the generation of continuous metrics, and doing responsive real-time analytics.
If you're new to ETL, it stands for Extract, Transform, Load. One of the primary purposes of ETL is to enrich, organize, and transform data to match the schema of a Data Lake or a Data Warehouse.
Continuous metric generation applications monitor and report how data is trending over time.
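To make the idea concrete, here is an illustrative plain-Python sketch of a one-minute tumbling-window count, the kind of continuous metric a Kinesis Data Analytics application would express in SQL or Flink. It does not use any Kinesis API; it just shows the windowing logic.

```python
# Illustrative only: a one-minute tumbling-window count in plain Python.
from collections import Counter

WINDOW_SECONDS = 60

def count_per_window(events):
    """events: iterable of (epoch_seconds, page) tuples, e.g. clickstream data."""
    windows = Counter()
    for timestamp, page in events:
        window_start = timestamp - (timestamp % WINDOW_SECONDS)  # align to window
        windows[(window_start, page)] += 1
    return windows

# Three clicks; the first two land in the same one-minute window.
clicks = [(1600000005, "/home"), (1600000015, "/home"), (1600000075, "/cart")]
for (window_start, page), hits in sorted(count_per_window(clicks).items()):
    print(window_start, page, hits)
```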
Real-time analytics applications trigger alarms or send notifications when certain metrics reach predefined thresholds, or--in more advanced cases--when an application detects anomalies using machine learning algorithms.
I'm going to conclude this overview with some pricing considerations.
Please be aware that there is no free tier with Amazon Kinesis.
Kinesis Video Streams pricing is based on the volume of data ingested, the volume of data consumed, and data stored across all the video streams in an account.
Kinesis Data Streams pricing is a little more complicated. There is an hourly cost based on the number of shards in a Kinesis Data Stream.
This charge is incurred whether or not data is actually in the stream. There is a separate charge when producers put data into the stream.
When the optional extended data retention is enabled, there's an hourly charge per shard for data stored in a stream.
For consumers, charges are dependent on whether or not Enhanced Fan Out is being used. If it is, charges are based on the amount of data and the number of consumers.
Firehose charges are based on the amount of data put into a delivery stream, the amount of data converted by Data Firehose, and, if data is sent to a VPC, the amount of data delivered plus an hourly charge per Availability Zone.
Amazon Kinesis Data Analytics charges an hourly rate based on the number of Amazon Kinesis Processing Units--or KPUs--used to run a streaming application.
A KPU is a unit of stream processing capacity. It consists of 1 virtual CPU and 4 gigabytes of memory.
This brings me to the end of this lecture. Thank you for watching and letting me be part of your cloud journey. If you have any feedback, positive or negative, please contact us at support@cloudacademy.com. Your input on our content is greatly appreciated. I’m Stephen Cole for Cloud Academy. Thank you for watching.
Stephen is the AWS Certification Specialist at Cloud Academy. His content focuses heavily on topics related to certification on Amazon Web Services technologies. He loves teaching and believes that there are no shortcuts to certification, but it is possible to find the right path and course of study.
Stephen has worked in IT for over 25 years in roles ranging from tech support to systems engineering. At one point, he taught computer network technology at a community college in Washington state.
Before coming to Cloud Academy, Stephen worked as a trainer and curriculum developer at AWS and brings a wealth of knowledge and experience in cloud technologies.
In his spare time, Stephen enjoys reading, sudoku, gaming, and modern square dancing.