Design a Multi-Tier Solution
Domain One of the AWS Solutions Architect Associate exam guide (SAA-C03) requires us to be able to design a multi-tier architecture solution, so that is our topic for this section.
We cover the need-to-know aspects of designing multi-tier solutions using AWS services.
- Learn some of the essential services for creating multi-tier architectures on AWS, including the Simple Queue Service (SQS) and the Simple Notification Service (SNS)
- Understand data streaming and how Amazon Kinesis can be used to stream data
- Learn how to design a multi-tier solution on AWS, and the important aspects to take into consideration when doing so
- Learn how to design cost-optimized AWS architectures
- Understand how to leverage AWS services to migrate applications and databases to the AWS Cloud
Let's begin by considering the idea of real-time processing. Real time describes a performance level in computing where data transfer and processing must complete in a very short time for the results to be meaningful. The stock market, for example, operates in real time when publishing stock prices so that traders can act accordingly. The wide variety of devices in the Internet of Things constantly send data to be ingested and processed in order for actionable results to be possible. Think of a home surveillance system sending an alert that a motion detector was activated inside the house while you were on vacation. The alert is useless unless you or the responding agency receives it as close to immediately as possible. This type of real-time functionality has become more popular and necessary as compute systems and solutions continue to evolve. Streaming applications are a common use case where the speed of data transfer has a direct impact on the customer experience.
It's hard to imagine keeping a customer's attention if the music or video being streamed does not play smoothly. Both the velocity and volume of data transfer become critical factors in real-time data processing applications. Traditional messaging as implemented by queues, topics, or a combination of the two can rarely transfer data in real time, much less at message sizes larger than 256 KB. For this type of application we need a different kind of messaging system that allows large data sets to be ingested and stored; that is the purpose of Amazon Kinesis Data Streams. Kinesis Data Streams is a real-time data collection messaging service that keeps a copy of all data received, in the order received, for 24 hours by default and for up to 365 days if configured accordingly, using the IncreaseStreamRetentionPeriod and DecreaseStreamRetentionPeriod operations. The retention period is specified in hours, and you can find the current retention period with the DescribeStream operation. Kinesis Data Streams enables the processing of large data sets in real time and the ability to read and replay records to multiple consumer applications.
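The retention operations above map directly onto the AWS SDK. Below is a minimal sketch using boto3: `clamp_retention` and `set_retention` are hypothetical helper names, but `describe_stream`, `increase_stream_retention_period`, and `decrease_stream_retention_period` are the real boto3 Kinesis client methods.

```python
# Sketch: adjusting a stream's retention period with boto3.
# Helper names are illustrative, not part of any AWS API.

# Valid retention bounds for Kinesis Data Streams, in hours
# (24 hours by default, configurable up to 365 days).
MIN_RETENTION_HOURS = 24
MAX_RETENTION_HOURS = 365 * 24  # 8760


def clamp_retention(hours: int) -> int:
    """Clamp a requested retention period to the valid Kinesis range."""
    return max(MIN_RETENTION_HOURS, min(MAX_RETENTION_HOURS, hours))


def set_retention(stream_name: str, target_hours: int) -> None:
    """Raise or lower a stream's retention period to target_hours.

    Reads the current value with DescribeStream, then calls
    IncreaseStreamRetentionPeriod or DecreaseStreamRetentionPeriod
    as appropriate.
    """
    import boto3  # imported lazily so the pure helper works without it

    kinesis = boto3.client("kinesis")
    target = clamp_retention(target_hours)
    current = kinesis.describe_stream(StreamName=stream_name)[
        "StreamDescription"
    ]["RetentionPeriodHours"]
    if target > current:
        kinesis.increase_stream_retention_period(
            StreamName=stream_name, RetentionPeriodHours=target
        )
    elif target < current:
        kinesis.decrease_stream_retention_period(
            StreamName=stream_name, RetentionPeriodHours=target
        )
```

Clamping first avoids a `ValidationException` from the service when a caller asks for fewer than 24 hours or more than 365 days.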
A stream is made of one or more shards, and each shard stores data records in sequence. A data record consists of a partition key, a sequence number, and the actual data, which can be up to one megabyte. Producers put data into Kinesis Data Streams, and consumers process the data, typically using the Kinesis Client Library. Consumers can be applications running on a fleet of EC2 instances; in this case the Kinesis Client Library is compiled into your application and enables decoupled, fault-tolerant, load-balanced consumption of records from the stream. The Kinesis Client Library is different from the Kinesis Data Streams API available in the AWS SDK: the library ensures that for every shard there is a record processor running and processing that shard, which simplifies reading from a stream by decoupling your record processing logic from connecting to and reading from the stream. It is worth noting that the Kinesis Client Library uses a DynamoDB table to store control data, and it creates one table per application that is processing data from a stream. The library can run on EC2 instances, Elastic Beanstalk, and even in your own data center if needed.
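To make the contrast with the Kinesis Client Library concrete, here is a hedged sketch of the lower-level SDK path it abstracts away: fetching a shard iterator and polling GetRecords yourself. The `read_shard` helper is illustrative; `get_shard_iterator` and `get_records` are the real boto3 Kinesis client methods, and the client is passed in as a parameter so the function can be exercised without live AWS credentials.

```python
# Sketch: reading one shard with the low-level Kinesis Data Streams API.
# The KCL does this (plus checkpointing in DynamoDB and load balancing
# across workers) for you; this shows what the raw loop looks like.

def read_shard(kinesis, stream_name: str, shard_id: str, max_batches: int = 10):
    """Yield records from one shard using GetShardIterator/GetRecords.

    `kinesis` is a boto3 Kinesis client, or any object exposing the
    same two methods.
    """
    iterator = kinesis.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON",  # start from the oldest record
    )["ShardIterator"]
    for _ in range(max_batches):
        resp = kinesis.get_records(ShardIterator=iterator, Limit=100)
        yield from resp["Records"]
        iterator = resp.get("NextShardIterator")
        if iterator is None:  # shard was closed, e.g. after resharding
            break
```

Notice that this loop handles exactly one shard; it is the per-shard record processor, checkpointing, and worker coordination on top of this loop that the Kinesis Client Library provides.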
Kinesis Data Streams can automatically encrypt data as a producer delivers it onto a stream, using AWS KMS keys for encryption. The service is well suited to collecting and gathering data in real time: application log data, social media data, real-time game dashboards and leaderboards, stock market data feeds, clickstream data from websites, and many Internet of Things data processing applications. The time between a record being put into a stream and becoming available for pickup is called the put-to-get latency, and in Kinesis Data Streams it is less than one second, meaning a consumer application can start reading data from the stream almost immediately after data starts arriving. A stream is made of one or more shards, each of which has a fixed capacity. Each shard can be written to at a rate of up to 1,000 records per second, up to a maximum of one megabyte per second; for reading, a shard can sustain a rate of up to two megabytes per second. The data capacity of a stream is proportional to the number of shards it uses, and with enough shards you can collect gigabytes of data per second from tens of thousands of sources.
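Those per-shard write limits make sizing a stream a small arithmetic exercise: take the larger of the shard counts implied by your record rate and your byte rate. A minimal sketch, with a hypothetical `shards_needed` helper:

```python
# Sketch: sizing a stream from the per-shard write limits quoted above.
import math

# Write-side limits per shard.
MAX_RECORDS_PER_SEC_PER_SHARD = 1_000
MAX_WRITE_MB_PER_SEC_PER_SHARD = 1.0


def shards_needed(records_per_sec: float, write_mb_per_sec: float) -> int:
    """Minimum shard count that satisfies both write limits at once."""
    by_records = math.ceil(records_per_sec / MAX_RECORDS_PER_SEC_PER_SHARD)
    by_bytes = math.ceil(write_mb_per_sec / MAX_WRITE_MB_PER_SEC_PER_SHARD)
    return max(1, by_records, by_bytes)
```

For example, a workload writing 3,000 records per second totalling 5 MB per second is constrained by bytes, not record count, and needs 5 shards.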
Inside a shard, data is written as records, and each record can be up to one megabyte. The anatomy of a record consists of three parts. The first is a partition key, used to group data within the stream; the partition key determines which shard a record belongs to. The second is a sequence number, which is unique per partition key within a shard; sequence numbers for the same partition key increase over time and help maintain the order of arrival for records. The third is the actual data for the record, up to one megabyte. It's important to note that the data in a record is not inspected, interpreted, or changed in any way by the Kinesis Data Streams service; it is up to the consumer application to perform any such operations if required. The capacity of a Kinesis Data Stream can be configured in on-demand or provisioned capacity mode. In on-demand mode, Kinesis Data Streams automatically manages the number of shards in order to provide the throughput needed by your workload.
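The routing from partition key to shard works by hashing: Kinesis takes the MD5 hash of the partition key as a 128-bit integer, and each shard owns a contiguous range of that hash-key space. The sketch below mirrors this, assuming (for illustration) shards that split the space evenly; the helper names are hypothetical.

```python
# Sketch: how a partition key maps to a shard. Kinesis hashes the
# partition key with MD5 into a 128-bit hash key; each shard owns a
# contiguous hash-key range. We assume evenly split ranges here.
import hashlib

HASH_KEY_SPACE = 2 ** 128


def hash_key_for(partition_key: str) -> int:
    """MD5 of the partition key, read as a 128-bit integer."""
    digest = hashlib.md5(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest, "big")


def shard_for(partition_key: str, num_shards: int) -> int:
    """Index of the shard whose hash-key range contains this key."""
    range_size = HASH_KEY_SPACE // num_shards
    return min(hash_key_for(partition_key) // range_size, num_shards - 1)
```

Because the mapping is deterministic, all records with the same partition key land in the same shard, which is what preserves per-key ordering and lets sequence numbers for a key increase over time.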
Throughput is adjusted up and down according to the needs of your application, and you are billed for the throughput you actually use. In provisioned capacity mode, you specify the number of shards for the data stream. You can increase or decrease the number of shards in a data stream as needed, and you are billed for the number of shards at an hourly rate. The operations for resharding include splitting shards and merging shards to increase and decrease the number of shards in a stream. Keep in mind that the total capacity of a stream is the sum of the capacities of all its shards. In short, Kinesis Data Streams allows for real-time processing of streaming big data and the ability to replay records to multiple Amazon Kinesis applications. The Amazon Kinesis Client Library delivers all records for a given partition key to the same record processor, making it easier to build multiple applications that read from the same Amazon Kinesis stream for the purpose of counting, aggregating, and filtering.
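In provisioned mode, splitting a shard means telling the service where to cut its hash-key range; an even split uses the midpoint. A minimal sketch: `split_midpoint` and `split_shard_evenly` are hypothetical helper names, while `split_shard` and its `StreamName`/`ShardToSplit`/`NewStartingHashKey` parameters are the real boto3 Kinesis API.

```python
# Sketch: splitting one shard in two at the midpoint of its
# hash-key range (the SplitShard resharding operation).

def split_midpoint(starting_hash_key: str, ending_hash_key: str) -> str:
    """Midpoint of a shard's hash-key range, the usual
    NewStartingHashKey for an even split."""
    start, end = int(starting_hash_key), int(ending_hash_key)
    return str(start + (end - start) // 2)


def split_shard_evenly(stream_name: str, shard: dict) -> None:
    """Split a shard in two halves of equal hash-key range.

    `shard` is a shard description as returned by DescribeStream,
    e.g. {"ShardId": ..., "HashKeyRange": {"StartingHashKey": ...,
    "EndingHashKey": ...}}.
    """
    import boto3  # imported lazily so the pure helper works without it

    kinesis = boto3.client("kinesis")
    hash_range = shard["HashKeyRange"]
    kinesis.split_shard(
        StreamName=stream_name,
        ShardToSplit=shard["ShardId"],
        NewStartingHashKey=split_midpoint(
            hash_range["StartingHashKey"], hash_range["EndingHashKey"]
        ),
    )
```

The inverse operation, `merge_shards`, takes two adjacent shards (`ShardToMerge` and `AdjacentShardToMerge`) and combines their hash-key ranges, halving the capacity you are billed for.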
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.