Domain 1 of the AWS Solutions Architect Associate exam guide (SAA-C03) requires us to be able to design a multi-tier architecture solution, so that is our topic for this section.
We cover the need-to-know aspects of how to design multi-tier solutions using AWS services.
Learning Objectives
- Learn some of the essential services for creating multi-tier architectures on AWS, including the Simple Queue Service (SQS) and the Simple Notification Service (SNS)
- Understand data streaming and how Amazon Kinesis can be used to stream data
- Learn how to design a multi-tier solution on AWS, and the important aspects to take into consideration when doing so
- Learn how to design cost-optimized AWS architectures
- Understand how to leverage AWS services to migrate applications and databases to the AWS Cloud
Hello, Andy here to help you get ready for acing questions on design and architecture. Most of the questions you're going to face around architecture are about choosing the right service or combination of services to meet the set of requirements you're given. And you're going to find your new knowledge of compute, storage, networking, and security all plays a big part in making the right decision.
Let's talk design patterns. The benefit of multi-tier architecture is the ability to decouple your layers so they can be independently scaled to meet demand, thereby making the system more resilient and more highly available. So if you're asked to design a high-performance service that's going to be recording a lot of events or transactions as fast as possible, then you want to consider implementing a multi-tier design, right? And if it's bursty, unpredictable traffic where you'll also need to be able to, say, look up transactions or events using an ID, then most likely you should consider using DynamoDB with global secondary indexes. That's going to give you the best performance, especially if you consider adding auto scaling on the tables and global secondary indexes. That's likely to give you the best response for that type of data need.
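As a rough sketch of what that looks like in practice, here's how you might define that kind of events table with boto3. The table name, attribute names, and index name are placeholders I've made up for illustration:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical "Events" table: partition on device, sort on timestamp, plus a global
# secondary index so individual events can also be looked up by transaction ID.
dynamodb.create_table(
    TableName="Events",
    AttributeDefinitions=[
        {"AttributeName": "DeviceId", "AttributeType": "S"},
        {"AttributeName": "Timestamp", "AttributeType": "S"},
        {"AttributeName": "TransactionId", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "DeviceId", "KeyType": "HASH"},
        {"AttributeName": "Timestamp", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[
        {
            "IndexName": "TransactionIdIndex",
            "KeySchema": [{"AttributeName": "TransactionId", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    # On-demand capacity absorbs bursty, unpredictable traffic without capacity planning;
    # with provisioned capacity you would add auto scaling to the table and the index instead.
    BillingMode="PAY_PER_REQUEST",
)
```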
Now, if you're designing a high-performance system, say for machine learning or for data crunching, and you need to access files internally, then think about Amazon FSx for Lustre. It's likely to provide the best performance for an internal high-performance file share. You can't readily share files from EBS, so Lustre is a really good, fast solution for making large volumes available to more than one instance. And remember, FSx for Lustre only runs on Linux.
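For reference, provisioning a Lustre file system is a single API call. This is only a minimal sketch, and the subnet ID is a placeholder:

```python
import boto3

fsx = boto3.client("fsx")

# Minimal scratch file system for short-lived, high-throughput workloads.
# Instances in the (placeholder) subnet mount the share via the file system's DNS name.
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={"DeploymentType": "SCRATCH_2"},
)
print(response["FileSystem"]["DNSName"])
```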
So if you're designing a solution for a Windows environment, you need to use Amazon FSx for Windows File Server instead. It can be a really easy way to keep the same user permissions, say if you're using Active Directory or accessing files on a Microsoft Windows platform. If you need your system to communicate on a specific port or in a specific way, say UDP for example, then think Network Load Balancer. The Network Load Balancer works at layer 4. Remember, the Application Load Balancer has more features and works at layer 7. So if your solution needs to be highly available and cost efficient, then consider the Network Load Balancer for that need.
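Here's a hedged boto3 sketch of that layer 4 setup: a Network Load Balancer with a UDP listener forwarding to a target group of instances. The names, subnet IDs, VPC ID, and port are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Network Load Balancer spread across two AZs (subnet IDs are placeholders).
nlb = elbv2.create_load_balancer(
    Name="udp-ingest-nlb",
    Type="network",          # layer 4: TCP, UDP, TLS
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

# Target group of EC2 instances receiving the UDP traffic (VPC ID is a placeholder).
tg = elbv2.create_target_group(
    Name="udp-ingest-targets",
    Protocol="UDP",
    Port=5000,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

# UDP listener: something an Application Load Balancer (HTTP/HTTPS only) can't do.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="UDP",
    Port=5000,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```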
Put the Network Load Balancer in front of EC2 instances in multiple AZs, and remember to set up your Auto Scaling group to add or remove instances automatically, because that's going to keep your costs down. Now, a few common re-architecting scenarios. Credentials stored in code are a common problem. If you're doing a code review and you find database or instance credentials in the code, you need to get them out and put them into something more secure. The best solution for that is AWS Secrets Manager. You'll perhaps need to use a Lambda function or a similar process to retrieve the credentials from AWS Secrets Manager, and if you do it that way, remember that function will need to run using an IAM role so it's running securely.
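A minimal sketch of that retrieval inside a Lambda handler might look like this, assuming a hypothetical secret named app/db-credentials and an execution role that allows secretsmanager:GetSecretValue on it:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def lambda_handler(event, context):
    # The function's IAM execution role, not hard-coded keys, authorizes this call.
    secret = secrets.get_secret_value(SecretId="app/db-credentials")  # hypothetical secret name
    creds = json.loads(secret["SecretString"])  # e.g. {"username": "...", "password": "..."}
    # Open the database connection with creds["username"] / creds["password"] here,
    # instead of leaving the credentials sitting in the source code.
    return {"statusCode": 200}
```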
Now, Secrets Manager is the best fit here. AWS KMS wouldn't be such a good option for credentials; it's a service designed for storing and managing encryption keys. And you wouldn't consider CloudHSM, because that's a hardware appliance, so it attracts quite a high cost. It's well suited to hybrid environments where you need to manage keys across on-premises and the cloud, so it would be overkill for managing secrets.
If you need to improve performance in delivering an application or content globally, say your service is growing and you're reaching new markets, then you really need to consider Amazon CloudFront. That's likely to be the best option for delivering to a wider audience and improving performance. Certainly if you need to geo-block or restrict access to some content, then think CloudFront. If you need to share content with a small group of people, say another team or an office, then you can use CloudFront for that too: using CloudFront signed URLs, essentially one-time tokens, is a really good way to provide access to resources in a managed way, and certainly better than trying to set up IAM policies on S3 buckets, for example.
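Just to make the signed URL idea concrete, here's a sketch using botocore's CloudFrontSigner together with the cryptography package. The key pair ID, private key path, and distribution domain are all placeholders, and the key has to belong to a key group your distribution trusts:

```python
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign with the private half of a CloudFront key pair (the path is a placeholder).
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2EXAMPLEKEYID", rsa_signer)  # placeholder public key ID

# The URL stops working after one hour, so access stays time-limited and managed.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/report.pdf",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)
```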
You can also block bad actors or IP addresses using CloudFront, although you're better off using AWS WAF to manage that type of granular access control. But if you don't have WAF or CloudFront as options to restrict access, then you can block access to resources from a specific CIDR range using network access control lists (NACLs). A NACL can block a range of IP addresses in the same CIDR range, so it gives you some control, just not as much as you would get from WAF or from CloudFront. If you use a NACL as a deny method, then you need to add a deny rule to the inbound table of your network access control list, and ideally give it a lower number than any other rules you might have in there.
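That deny rule is a single call. In this sketch the NACL ID and CIDR range are placeholders, and the low rule number makes sure the deny is evaluated before any higher-numbered allow rules:

```python
import boto3

ec2 = boto3.client("ec2")

# Inbound deny for one CIDR range (NACL ID and CIDR are placeholders).
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=90,          # lower than the allow rules, so it's evaluated first
    Protocol="-1",          # all protocols
    RuleAction="deny",
    Egress=False,           # inbound rule table
    CidrBlock="203.0.113.0/24",
)
```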
As architects, we often get asked to meet compliance or auditing requirements. So remember that AWS CloudTrail is usually the best way to record API calls, and AWS Config is often the best way to track configuration changes inside your VPC and across your account.
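The CloudTrail side of that is straightforward. Here's a minimal sketch, with a placeholder S3 bucket that would need a bucket policy allowing CloudTrail to write to it:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Multi-region trail recording API calls into a placeholder audit bucket.
cloudtrail.create_trail(
    Name="account-audit-trail",
    S3BucketName="my-audit-log-bucket",
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,   # capture IAM, STS and other global-service calls too
)
cloudtrail.start_logging(Name="account-audit-trail")
```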
As architects, we also often get asked to increase or design for performance, and decoupling architectures is the best way to start with that. SQS and SNS are perfect for decoupling layers in multi-tier architectures. If you need to process tasks like accepting a customer order and perhaps sending a confirmation email, then using SQS and SNS is a very good way to manage that. If you need to support parallel asynchronous processing, so for example you need to have that order processed and that email sent at the very same time, then the SNS fan-out pattern is a good way of doing that. That's when you have steps that you want processed simultaneously, i.e. at the same time, in parallel. Fan-out means having the message published to an SNS topic, which is then replicated and pushed to multiple Amazon SQS queues. But it could be any endpoint; it doesn't have to be SQS. It could be a Lambda function, a Kinesis Data Firehose delivery stream, basically any HTTPS endpoint.
So if you design a customer ordering application to publish a message to an SNS topic whenever an order is placed for a product, then any SQS queues subscribed to that SNS topic will receive identical notifications for that order, which is perfect for this scenario. You could have an EC2 instance attached to one of the SQS queues to handle the actual processing of the order, another SQS queue to handle notifying the customer that the order has commenced, and another EC2 instance analyzing the order for patterns and behaviors and triggering activities based on that. That's a really scalable way of handling parallel asynchronous processing.
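Here's a minimal fan-out sketch for that ordering scenario. The topic and queue names are made up, and in practice each queue also needs an access policy that lets the topic deliver to it:

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# One topic for new orders, fanned out to three independent consumers.
topic_arn = sns.create_topic(Name="new-orders")["TopicArn"]

for queue_name in ["process-order", "notify-customer", "analyze-order"]:
    queue_url = sqs.create_queue(QueueName=queue_name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    # Each subscribed queue receives an identical copy of every message.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Publishing once pushes the order to all three queues in parallel.
sns.publish(TopicArn=topic_arn, Message='{"orderId": "12345", "product": "widget"}')
```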
Even with decoupled multi-tier web applications that use all the best practices, like DynamoDB, SQS, and EC2 to decouple layers, there are always small improvements that can be made. It can be common when you're decoupling to see slight delays in processing, which may not be optimal for something like our customer web orders. One option to improve performance immediately is to use EC2 Auto Scaling to scale out the middle-tier instances based on the SQS queue depth.
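One way to wire that up is a step scaling policy on the Auto Scaling group driven by a CloudWatch alarm on the queue's ApproximateNumberOfMessagesVisible metric. This is only a sketch; the group name, queue name, and threshold are placeholders carried over from the fan-out example:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Step scaling policy on the middle-tier group (group name is a placeholder).
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="order-workers",
    PolicyName="scale-out-on-queue-depth",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0, "ScalingAdjustment": 2}],  # add 2 instances
)

# Fire the policy when the backlog of visible messages stays above the threshold.
cloudwatch.put_metric_alarm(
    AlarmName="order-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "process-order"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```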
Now, if you're experiencing a very high volume of incoming messages, let's say 100,000 a second or more, and there are multiple microservices consuming the messages, then it could be an option to implement Kinesis Data Streams, with enough shards provisioned for that throughput. It may be more performant to have the microservices read and process messages from the Kinesis stream. If your system is suffering performance degradation due to heavy read traffic, say people running reports, then migrating the database to Amazon RDS is always an option and is generally going to give you a performance increase. And especially if you're experiencing a lot of read-only SQL queries, adding a read replica can improve performance with minimal changes to your existing applications.
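Adding that read replica is a one-liner against a hypothetical source instance; reporting and other read-only queries then point at the replica's endpoint instead of the primary:

```python
import boto3

rds = boto3.client("rds")

# "orders-db" is a placeholder source instance; the replica gets its own read-only endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",
    SourceDBInstanceIdentifier="orders-db",
    DBInstanceClass="db.r6g.large",
)
```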
If you've implemented a NoSQL database like DynamoDB, then read performance can be increased by adding auto scaling to the table and adding global secondary indexes. You can also consider adding ElastiCache as a caching layer, which can also improve read performance. Okay, not write, just read performance. And if you need features in that cache, then think Redis. If you just need speed, think Memcached. All right, so those are a few design patterns to remember and keep in mind.
We're getting better and better prepared, we're getting closer to passing this exam, let's move on.
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.