Everything You Ever Wanted to Know about Amazon Kinesis Firehose

Amazon Kinesis Firehose makes it easy to load streaming data into AWS. Here’s what you need to know.

In an earlier blog post, I introduced you to Amazon Kinesis, the real-time streaming data service from Amazon. Now we will discuss the equally important Amazon Kinesis Firehose service and how you can leverage it to easily load streaming data into AWS. We'll cover an overview of the service, the key concepts you need to know, and how to get started today, with step-by-step instructions to make your life easier. Read on to learn more about Amazon Kinesis Firehose!

Introducing Amazon Kinesis Firehose

Amazon Kinesis Firehose is simple to use, and the process of analyzing massive volumes of streaming data requires only five easy steps:

  1. Capture and submit streaming data to Amazon Kinesis Firehose.
  2. Amazon Kinesis Firehose loads streaming data to Amazon S3.
  3. Analyze streaming data using any BI tools of your choice.
  4. ???
  5. Profit!

Amazon Kinesis Firehose is a managed service, so you won’t have to wrestle with any obtuse administrative tasks. It automatically scales according to your data throughput and needs. To minimize storage use and for security purposes, Amazon Kinesis Firehose also batches, compresses, and encrypts data before loading it. The service manages all the underlying infrastructure, storage, networking, and configuration required for capturing and loading data into Amazon S3 or Amazon Redshift. Apart from the elastic nature of its underlying infrastructure, Amazon Kinesis Firehose synchronously replicates data across three facilities in an AWS Region, providing high availability and durability for the data as it is transported to its destinations.

The Key Concepts of Amazon Kinesis Firehose You Need to Know

You need to be familiar with a few terms before we look at how to use Firehose. Let’s take a closer look at each of them.

Delivery Stream: Users submit data to Amazon Kinesis Firehose by creating a Delivery Stream.
Records: A Record is the data that a user submits to the delivery stream. Each record can be up to 1,000 KB.
Data Producers: In Amazon Kinesis parlance, producers are the entities that generate streaming data. A producer can be a web server emitting log data, a Twitter stream submitting data about a particular trend, a clickstream, a log shipper like Logstash, or anything else that fits your requirements.
Buffer Size and Buffer Interval: Amazon Kinesis Firehose buffers incoming streaming data to a certain size or for a certain period of time before delivering it to destinations. Buffer Size is specified in MBs (1 to 128) and Buffer Interval in seconds (60 to 900); as soon as either condition is satisfied, the system triggers data delivery to your S3 bucket. In circumstances where data delivery to the destination falls behind data writing to the delivery stream, Firehose raises the buffer size dynamically to catch up and make sure that all data is delivered to the destination. (The CLI sketch after this list shows how these settings map onto a delivery stream definition.)
Amazon Kinesis Agents: The Amazon Kinesis Agent is a pre-built Java application for Linux-based servers (only Amazon Linux and RHEL at the time of writing) that monitors files such as log files and continuously collects and sends data to your delivery stream.
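
To make these concepts concrete, here is a minimal sketch of how they map onto a delivery stream created from the AWS CLI instead of the console. The stream name, bucket, role ARN, and buffering values are hypothetical placeholders, and exact option names may vary with your CLI version:

aws firehose create-delivery-stream \
    --delivery-stream-name my-test-stream \
    --s3-destination-configuration 'RoleARN=arn:aws:iam::123456789012:role/firehose_delivery_role,BucketARN=arn:aws:s3:::my-firehose-bucket,Prefix=logs/,BufferingHints={SizeInMBs=5,IntervalInSeconds=300},CompressionFormat=GZIP'

With these buffering hints, records accumulate until 5 MB of data or 300 seconds have elapsed, whichever comes first, and are then delivered as GZIP-compressed objects under the logs/ prefix.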

Getting Started with Kinesis Firehose

Note: To use Kinesis Firehose, you should already have an AWS account.

Log in to the AWS Console and go to the Kinesis service.


Go to Kinesis Firehose to create a Delivery Stream.


The next step is to create a delivery stream. Choose either S3 or Redshift as your destination.


You have to provide a stream name, select a pre-existing S3 bucket or create a new one in the region of your choice, and add a prefix that will act as a folder within that bucket. Note that Firehose also adds a UTC time prefix in the format YYYY/MM/DD/HH when writing objects to S3, so your prefix becomes the top level of a date-based folder hierarchy.


The next step lets you configure various parameters such as buffer size, buffer interval, compression, encryption, and the IAM role.


To create or update an existing IAM role, click Firehose Delivery IAM Role. This takes you to a new page; here we accepted the default policy and role name and clicked Allow.

You will be taken back to the previous configuration page, where you click Next.
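
For reference, the generated role grants Firehose the S3 permissions it needs to write into your bucket. The sketch below is an approximation of that permissions policy, assuming a hypothetical bucket named my-firehose-bucket; the console generates the real one for you:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-firehose-bucket",
        "arn:aws:s3:::my-firehose-bucket/*"
      ]
    }
  ]
}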

The next page is a review page where you get a chance to check your configuration.


If everything looks OK, click “Create Delivery Stream”.

The stream will be created, and you then have options to Edit, Delete, Monitor, and Send Data to the stream.

  • Your stream is ready, and you can send data to it using the aforementioned Amazon Kinesis Agent, a Java program, or the AWS CLI.
  • Here we use the AWS CLI and a small Java program to keep things simple; in the real world, data comes from various sources at high velocity and volume.
  • AWS CLI for generating logs:
    aws firehose put-record --delivery-stream-name <stream-name> --record Data="<log string>\n"
  • Sample Java code to submit a record to the stream:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClient;
import com.amazonaws.services.kinesisfirehose.model.PutRecordRequest;
import com.amazonaws.services.kinesisfirehose.model.PutRecordResult;
import com.amazonaws.services.kinesisfirehose.model.Record;

public class KinesisFireHoseSampleApp {

    private static AmazonKinesisFirehoseClient firehoseClient;

    private static void init() throws Exception {
        // Replace the placeholders with your own access key and secret key.
        AWSCredentials credentials = new BasicAWSCredentials("xxxxxxxxxxxxxx", "yyyyyyyyyyyyyyyyyyyy");
        firehoseClient = new AmazonKinesisFirehoseClient(credentials);
    }

    public static void main(String[] args) throws Exception {
        init();
        String data = "my log data" + "\n"; // add \n as a record separator

        // Wrap the log line in a Record and point the request at the delivery stream.
        Record record = new Record();
        record.setData(ByteBuffer.wrap(data.getBytes(StandardCharsets.UTF_8)));
        PutRecordRequest putRecordRequest = new PutRecordRequest()
                .withDeliveryStreamName("test-stream")
                .withRecord(record);

        PutRecordResult result = firehoseClient.putRecord(putRecordRequest);
        System.out.println("Record inserted with ID: " + result.getRecordId());
    }
}
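
PutRecord submits one record per call. If you generate records faster than that round trip allows, the same SDK also provides a PutRecordBatch operation for submitting several records per call. Below is a minimal sketch of batching, reusing the same hypothetical stream name and credential placeholders as above:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClient;
import com.amazonaws.services.kinesisfirehose.model.PutRecordBatchRequest;
import com.amazonaws.services.kinesisfirehose.model.PutRecordBatchResult;
import com.amazonaws.services.kinesisfirehose.model.Record;

public class KinesisFireHoseBatchSketch {
    public static void main(String[] args) {
        // Same credential placeholders as the single-record example above.
        AmazonKinesisFirehoseClient firehoseClient = new AmazonKinesisFirehoseClient(
                new BasicAWSCredentials("xxxxxxxxxxxxxx", "yyyyyyyyyyyyyyyyyyyy"));

        // Collect several log lines into one request instead of one call per record.
        List<Record> records = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            String data = "my log data " + i + "\n"; // \n as a record separator
            records.add(new Record().withData(
                    ByteBuffer.wrap(data.getBytes(StandardCharsets.UTF_8))));
        }

        PutRecordBatchRequest batchRequest = new PutRecordBatchRequest()
                .withDeliveryStreamName("test-stream")
                .withRecords(records);

        // A batch call can partially fail, so always check the failed count.
        PutRecordBatchResult result = firehoseClient.putRecordBatch(batchRequest);
        System.out.println("Failed records: " + result.getFailedPutCount());
    }
}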

You can see the metrics associated with the stream in the CloudWatch dashboard.
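
If you prefer the CLI to the dashboard, the same numbers can be pulled from CloudWatch. Here is a minimal sketch, assuming the hypothetical test-stream name from above and a date range you care about:

aws cloudwatch get-metric-statistics \
    --namespace AWS/Firehose \
    --metric-name IncomingRecords \
    --dimensions Name=DeliveryStreamName,Value=test-stream \
    --start-time 2015-12-01T00:00:00Z \
    --end-time 2015-12-02T00:00:00Z \
    --period 300 \
    --statistics Sum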


The above example is a very basic one: the Java client sends a single log record each time the program is run. Please note that you need aws-java-sdk-1.10.43 and amazon-kinesis-client-1.6.1 in the project library to run the application. After submitting the requests, you can see the graphs plotted against the requested records. However, this is not the ideal scenario for Kinesis Firehose: the example only shows how a record is submitted to a Firehose stream, and you need a high-volume or high-velocity data stream to realize the benefits of the service. One of the best ways to send your logs to Firehose is to install the Amazon Kinesis Agent.

Amazon Kinesis Agent Setup Process 

The Amazon Kinesis Agent is a stand-alone Java software application that offers an easy way to collect and ingest data into Firehose: it continuously monitors a set of files and sends new data records to your delivery stream. You can install the agent on Linux-based server environments such as web servers, front ends, log servers, and database servers, and configure it by specifying the log files to monitor and the delivery stream names. Afterwards, the agent continuously collects data from the log files and submits it to the delivery stream.

  • The supported Linux versions are Amazon Linux AMI 2015.09 or later, and Red Hat Enterprise Linux 7 or later.
  • Run the following command to install the agent:
    sudo yum install -y aws-kinesis-agent
  • Alternatively, on RHEL you can download and install the agent directly from S3:
    sudo yum install -y https://s3.amazonaws.com/streaming-data-agent/aws-kinesis-agent-1.0-1.amzn1.noarch.rpm
  • Start the agent manually:
    sudo service aws-kinesis-agent start
  • Optionally configure the agent to start on system startup:
    sudo chkconfig aws-kinesis-agent on
  • The agent is now running as a system service in the background. It continuously monitors the location specified in the configuration file and emits the data into Firehose, logging agent activity in /var/log/aws-kinesis-agent/aws-kinesis-agent.log.
  • You can configure the agent with optional settings, which it loads from the file /etc/aws-kinesis/agent.json by default (see the sketch after this list).
  • Any changes to the configuration file require stopping and restarting the agent, as shown in the following commands:
    sudo service aws-kinesis-agent stop
    sudo service aws-kinesis-agent start

or sudo service aws-kinesis-agent restart
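
For reference, here is a minimal sketch of what /etc/aws-kinesis/agent.json might look like for the hypothetical test-stream and a hypothetical log path; check the agent documentation for the full set of options:

{
  "firehose.endpoint": "firehose.us-east-1.amazonaws.com",
  "flows": [
    {
      "filePattern": "/var/log/myapp/*.log",
      "deliveryStream": "test-stream"
    }
  ]
}

Each entry in flows pairs a file pattern to watch with the delivery stream that should receive new lines from matching files.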

You can learn more about the Kinesis Agent in the AWS Kinesis documentation.

Next Steps

Kinesis Firehose is an efficient, must-try service from AWS that handles both high data volumes and high velocities. With Firehose, gigabytes of streaming data can be captured with ease and made very simple to work with and analyze.

To get a solid foundation in Amazon analytics, check out our Analytics Fundamentals for AWS course. You’ll learn everything you need to know about the entire family of AWS analytics services, as well as develop the critical skills you need to bolster your career. Get started today with a free 7-day trial and start learning right away!

Written by

Cloud Computing and Big Data professional with 10 years of experience in pre-sales, architecture, design, build and troubleshooting with best engineering practices. Specialities: Cloud Computing - AWS, DevOps (Chef), Hadoop Ecosystem, Storm & Kafka, ELK Stack, NoSQL, Java, Spring, Hibernate, Web Services
