In an earlier blog post, I introduced you to Amazon Kinesis, the real-time streaming data service from Amazon. Now we will discuss the equally-important Amazon Kinesis Firehose service and how you can leverage it to easily load streaming data into AWS. We’ll discuss an overview of the service, the key concepts you need to know, and how to get started today. To make your life easier, I’ve included screenshots to help describe the individual steps. Read on to learn more about Amazon Kinesis Firehose!
Amazon Kinesis Firehose is simple to use: capturing, loading, and analyzing massive volumes of streaming data takes only a few easy steps.
Amazon Kinesis Firehose is a managed service, so you won’t have to wrestle with tedious administrative tasks. It scales automatically according to your data throughput and needs. To minimize storage use and for security purposes, Amazon Kinesis Firehose also batches, compresses, and encrypts data before loading it. The service manages all the underlying infrastructure, storage, networking, and configuration required for capturing and loading data into Amazon S3 or Amazon Redshift. Beyond the elastic nature of its underlying infrastructure, Amazon Kinesis Firehose synchronously replicates data across three facilities in an AWS Region, providing high availability and durability for the data as it is transported to its destinations.
You need to be familiar with a few terms before we look at how to use Firehose. Let’s take a closer look at each of them.
Delivery Stream: Users submit the data to Amazon Kinesis Firehose by creating a Delivery Stream.
Records: A Record is the data that a user submits to the delivery stream. Each record can be up to 1,000 KB.
Data Producers: In Amazon Kinesis parlance, producers are the entities that generate streaming data: a web server emitting log data, a Twitter stream submitting data about a particular trend, and so on. A producer can be anything that fits your requirements, such as a clickstream or a log shipper like Logstash.
Buffer Size and Buffer Interval: Amazon Kinesis Firehose buffers incoming streaming data up to a certain size or for a certain period of time before delivering it to the destination. Buffer Size is specified in MB (1 to 128) and Buffer Interval in seconds (60 to 900); as soon as either condition is satisfied, Firehose delivers the buffered data to your S3 bucket. When data delivery to the destination falls behind data writes to the delivery stream, Firehose raises the buffer size dynamically to catch up and ensure that all data is delivered.
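The "size or time, whichever comes first" trigger can be sketched as a simple predicate. This is an illustration only, not Firehose's internal code; the 5 MB / 300 s limits below are example values you could configure, not required defaults.

```java
// Sketch of the Firehose buffer-flush condition: delivery is triggered as soon
// as EITHER the size limit or the time limit is reached, whichever comes first.
public class BufferFlushSketch {
    static final int SIZE_LIMIT_MB = 5;       // configurable: 1 to 128 MB
    static final int INTERVAL_SECONDS = 300;  // configurable: 60 to 900 s

    static boolean shouldFlush(double bufferedMb, int secondsSinceLastFlush) {
        return bufferedMb >= SIZE_LIMIT_MB || secondsSinceLastFlush >= INTERVAL_SECONDS;
    }

    public static void main(String[] args) {
        System.out.println(shouldFlush(6.0, 30));  // size limit hit -> true
        System.out.println(shouldFlush(0.5, 300)); // interval hit -> true
        System.out.println(shouldFlush(0.5, 30));  // neither -> false
    }
}
```

This is also why low-throughput streams see delivery only every Buffer Interval seconds: the size condition is never reached first.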
Amazon Kinesis Agents: The Amazon Kinesis Agent is a pre-built Java application for Linux-based servers (only Amazon Linux and RHEL at the time of writing) that monitors files, such as log files, and continuously collects and sends data to your delivery stream.
Note: To use Kinesis Firehose, you should already have an AWS account.
aws firehose put-record --delivery-stream-name <stream-name> --record Data="<log string>\n"
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClient;
import com.amazonaws.services.kinesisfirehose.model.PutRecordRequest;
import com.amazonaws.services.kinesisfirehose.model.PutRecordResult;
import com.amazonaws.services.kinesisfirehose.model.Record;

public class KinesisFireHoseSampleApp {

    private static AmazonKinesisFirehoseClient firehoseClient;

    private static void init() throws Exception {
        // Replace the placeholders with your own AWS access key and secret key
        AWSCredentials credentials = new BasicAWSCredentials("xxxxxxxxxxxxxx", "yyyyyyyyyyyyyyyyyyyy");
        firehoseClient = new AmazonKinesisFirehoseClient(credentials);
    }

    public static void main(String[] args) throws Exception {
        init();
        String data = "my log data" + "\n"; // add \n as a record separator
        Record record = new Record();
        record.setData(ByteBuffer.wrap(data.getBytes(StandardCharsets.UTF_8)));
        PutRecordRequest putRecordRequest = new PutRecordRequest()
                .withDeliveryStreamName("test-stream")
                .withRecord(record);
        PutRecordResult result = firehoseClient.putRecord(putRecordRequest);
        System.out.println("Record inserted with ID: " + result.getRecordId());
    }
}
You can see the metrics associated with the stream in the CloudWatch dashboard. Here are a few samples:
The above example is a very basic one: the Java client sends a single log record each time the program is run. Note that you need aws-java-sdk-1.10.43 and amazon-kinesis-client-1.6.1 in the project library to run the application. After submitting the requests, you can see the graphs plotted against the requested records. However, a single record at a time is not the ideal scenario for Kinesis Firehose; you need a high-volume or high-velocity data stream to realize its benefits. One of the best ways to send your logs to Firehose is to install the Amazon Kinesis Agent.
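At higher volumes you would not call PutRecord once per line; the Firehose API also offers PutRecordBatch, which accepts up to 500 records per call. The helper below is a minimal sketch (not part of the original example) of grouping log lines into batch-sized chunks before sending each chunk in a single request:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {
    // Firehose PutRecordBatch accepts at most 500 records per request
    static final int MAX_RECORDS_PER_BATCH = 500;

    // Split the incoming log lines into chunks of at most 500 lines each
    static List<List<String>> toBatches(List<String> logLines) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < logLines.size(); i += MAX_RECORDS_PER_BATCH) {
            int end = Math.min(i + MAX_RECORDS_PER_BATCH, logLines.size());
            batches.add(new ArrayList<>(logLines.subList(i, end)));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> lines = new ArrayList<>();
        for (int i = 0; i < 1200; i++) lines.add("log line " + i);
        List<List<String>> batches = toBatches(lines);
        System.out.println(batches.size()); // 3 batches: 500 + 500 + 200
    }
}
```

Each resulting chunk would then be mapped to Record objects and sent with a single PutRecordBatch call, cutting the number of API round-trips by up to 500x.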
The Amazon Kinesis Agent is a stand-alone Java application that offers an easy way to collect and ingest data into Firehose: it continuously monitors a set of files and sends new data records to your delivery stream. You can install the agent on Linux-based server environments such as web servers, front ends, log servers, and database servers, and configure it by specifying the log files to monitor and the delivery stream names. The agent then continuously collects data from the log files and submits it to the delivery stream.
sudo yum install -y aws-kinesis-agent
sudo yum install -y https://s3.amazonaws.com/streaming-data-agent/aws-kinesis-agent-1.0-1.amzn1.noarch.rpm
sudo service aws-kinesis-agent start
sudo chkconfig aws-kinesis-agent on
sudo service aws-kinesis-agent stop
sudo service aws-kinesis-agent start
or simply:
sudo service aws-kinesis-agent restart
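Once installed, the agent reads its configuration from /etc/aws-kinesis/agent.json. A minimal example is shown below; the file pattern and stream name are placeholders you would replace with your own:

```json
{
  "flows": [
    {
      "filePattern": "/var/log/myapp/*.log",
      "deliveryStream": "test-stream"
    }
  ]
}
```

After editing the configuration, restart the agent so it picks up the new flows.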
You can learn more about the Kinesis Agent in the AWS documentation.
Kinesis Firehose is an efficient, must-try AWS service that handles both high data volumes and high velocities. With Firehose, gigabytes of streaming data can be captured, loaded, and analyzed with ease.
To get a solid foundation in Amazon analytics, check out our Analytics Fundamentals for AWS course. You’ll learn everything you need to know about the entire family of AWS analytics services, as well as develop the critical skills you need to bolster your career. Get started today with a free 7-day trial and start learning right away!