EXAM PREP - Architecture
Difficulty
Beginner
Duration
1h 53m
Students
4731
Ratings
4.6/5
Description

Please note that this course has been replaced with a new version that can be found here: https://cloudacademy.com/course/architecture-saa-c03/architecture-saa-c03-introduction/ 


Domain One of the AWS Solutions Architect Associate exam guide (SAA-C02) requires us to be able to design a multi-tier architecture solution, so that is our topic for this section.
We cover the need-to-know aspects of how to design multi-tier solutions using AWS services.

Want more? Try a lab playground or do a Lab Challenge!

Learning Objectives

  • Learn some of the essential services for creating multi-tier architectures on AWS, including the Simple Queue Service (SQS) and the Simple Notification Service (SNS)
  • Understand data streaming and how Amazon Kinesis can be used to stream data
  • Learn how to design a multi-tier solution on AWS, and the important aspects to take into consideration when doing so
 
Transcript

Hello, Andy here to help you get ready for acing questions on design and architecture. Most of the questions you're going to face around architecture are about choosing the right service or combination of services to meet the set of requirements that you're given. And you're gonna find your new knowledge of compute, storage, networking, and security all plays a big part in making the right decision.

Let's talk design patterns. The benefit of multi-layer architecture is the ability to decouple your layers so they can be independently scaled to meet demand, thereby making the system more resilient and more highly available. So if you're asked to design a high-performance service that's gonna be recording a lot of events or transactions as fast as possible, then you want to consider implementing a multi-tier design, right? And if it's bursty, unpredictable traffic where you'll also need to be able to, say, look up transactions or events using an ID, then most likely you should consider using DynamoDB with global secondary indexes. That's gonna give you the best performance, especially if you consider adding auto scaling on the tables and global secondary indexes. That's likely to give you the best response for that type of data need.
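As a concrete illustration, here's a minimal sketch (Python with boto3) of looking up events by ID through a global secondary index. The table name "events" and index name "TransactionIdIndex" are hypothetical.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("events")  # hypothetical table name

# Query the GSI by transaction ID instead of scanning the whole table.
response = table.query(
    IndexName="TransactionIdIndex",  # hypothetical GSI keyed on transaction_id
    KeyConditionExpression=Key("transaction_id").eq("txn-12345"),
)
for item in response["Items"]:
    print(item)
```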

Now, if you're designing a high-performance system, say for machine learning or for data crunching, and you need to access files internally, then think about Amazon FSx for Lustre. It's likely to provide the best performance for an internal high-performance file share. You can't share files from EBS, so Lustre is a really good, fast solution for making large volumes available to more than one instance. And remember, FSx for Lustre only runs on Linux. So if you're designing a solution for a Windows environment, you need to use Amazon FSx for Windows File Server.

Now, Amazon FSx for Windows File Server can be a really easy way to keep the same user permissions, say if you're using Active Directory or accessing files on a Microsoft Windows platform. If you need your system to communicate on a specific port or in a specific way, say UDP, for example, then think Network Load Balancer. The Network Load Balancer works at layer 4. Remember, the Application Load Balancer has more features and works at layer 7. So if your solution needs to be highly available and cost efficient, then consider the Network Load Balancer for that need.

Put the Network Load Balancer in front of EC2 instances in multiple AZs. And remember to set up your auto scaling group to add or remove instances automatically, 'cause that's going to keep your costs down. Now, a few common re-architecting scenarios. Credentials stored in code: a common problem. So if you're doing a code review and you find database or instance credentials in the code, you need to get them out and put them into something more secure. The best solution for that is AWS Secrets Manager. You'll need to use perhaps a Lambda function or a similar process to retrieve the credentials from AWS Secrets Manager. And if you do it that way, remember that function will need to run using an IAM role, so it's running securely.
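Here's a minimal sketch of what that retrieval might look like in a Lambda handler (Python with boto3). The secret name "prod/db-credentials" is hypothetical, and the function's execution role would need the secretsmanager:GetSecretValue permission.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    # Fetch the credentials at runtime instead of hard-coding them in source.
    response = secrets.get_secret_value(SecretId="prod/db-credentials")  # hypothetical secret name
    creds = json.loads(response["SecretString"])
    # creds["username"] / creds["password"] can now be used to open the
    # database connection.
    return {"statusCode": 200}
```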

Now, Secrets Manager is the best fit here. AWS KMS wouldn't be such a good option for credentials; it's a service designed for storing encryption keys. And you wouldn't consider CloudHSM 'cause that's a hardware appliance, so it actually attracts quite a high cost. It's well suited to hybrid environments where you need to store keys across on-premises and in the cloud. So that would be overkill for managing secrets.

If you need to improve performance in delivering an application or content globally, say your service is growing and you're reaching new markets, then you really need to consider Amazon CloudFront. That's likely to be the best option for delivering to a wider audience and improving performance. Certainly if you need to geoblock or restrict access to some content, then think CloudFront. If you need to share content with a small group of people, say another team or an office, then you can use CloudFront for that too. Using CloudFront signed URLs, i.e. a one-time token, is a really good way to provide access to resources in a managed way. Certainly better than trying to set up IAM policies on S3 buckets, for example. But you can also block bad actors or IP addresses using CloudFront.
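For a sense of the mechanics, here's a minimal sketch of generating a CloudFront signed URL with botocore's CloudFrontSigner. The key pair ID, private key file, distribution domain, and object path are all hypothetical, and the signing key must be registered with the distribution (for example, via a trusted key group).

```python
import datetime
import rsa  # pip install rsa
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # Sign the CloudFront policy with the private key matching the public
    # key registered with the distribution.
    with open("private_key.pem", "rb") as f:  # hypothetical key file
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # hypothetical key pair ID
expires = datetime.datetime.utcnow() + datetime.timedelta(hours=1)
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/report.pdf",  # hypothetical object
    date_less_than=expires,
)
print(url)  # shareable link that stops working after one hour
```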

You're better off using AWS WAF to manage that type of granular access control. But if you don't have WAF or CloudFront as options to restrict access, then you can block access to resources from a specific CIDR range using Network Access Control Lists. A NACL can block a range of IP addresses in the same CIDR range, so it gives you some control, though not as much as you would get from WAF or from CloudFront. If you use NACLs as the deny method, then you need to add a deny rule to the inbound table of your Network Access Control List, and ideally give it a lower number than any other rules that you might have in there.
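As a minimal sketch (boto3), adding such an inbound deny rule might look like this; the ACL ID and CIDR range are hypothetical, and the low rule number means it's evaluated before higher-numbered allow rules.

```python
import boto3

ec2 = boto3.client("ec2")
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # hypothetical NACL ID
    RuleNumber=50,               # lower than the existing allow rules
    Protocol="-1",               # all protocols
    RuleAction="deny",
    Egress=False,                # inbound rule
    CidrBlock="203.0.113.0/24",  # the range to block
)
```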

As architects, we often get asked to meet compliance or auditing requirements. So remember that AWS CloudTrail is usually the best way to record API calls, and AWS Config is often the best way to track configuration changes inside your VPC and your account. As architects, we also often get asked to increase or design for performance, and decoupling architectures is the best way to start with that. SQS and SNS are perfect for decoupling layers in multi-tiered architectures.

If you need to process tasks like accepting a customer order and perhaps sending a confirmation email, then using SQS and SNS is a very good way to manage that. If you need to support parallel asynchronous processing, so for example, you need to have that order processed and that email sent at the very same time, then using the SNS fanout method is a good way of doing it. That's when you have steps that you want to process simultaneously, i.e. at the same time, in parallel. Fanout means having the message published to an SNS topic, which is then replicated and pushed to multiple Amazon SQS queues. But it could be any endpoint: it doesn't have to be SQS, it could be a Lambda function or a Kinesis Data Firehose delivery stream, basically any HTTPS endpoint.
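A minimal sketch of the fanout wiring in boto3, assuming a hypothetical orders topic and two hypothetical queues whose access policies already allow the topic to send to them:

```python
import json
import boto3

sns = boto3.client("sns")
topic_arn = "arn:aws:sns:us-east-1:123456789012:orders"  # hypothetical topic

# Each subscribed queue gets its own copy of every published message,
# so the subscribers can work in parallel.
for queue_arn in [
    "arn:aws:sqs:us-east-1:123456789012:process-order",            # hypothetical
    "arn:aws:sqs:us-east-1:123456789012:send-confirmation-email",  # hypothetical
]:
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# One publish call fans out to all subscribers at once.
sns.publish(TopicArn=topic_arn, Message=json.dumps({"orderId": "12345"}))
```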

So if you need to design a customer ordering application to publish a message to an SNS topic whenever an order is placed for a product, then any SQS queues that are subscribed to that SNS topic will receive identical notifications for the order. That's perfect for that scenario. You could have an EC2 server instance attached to one of the SQS queues, which will handle the actual processing of the order. And you can also have an SQS queue to handle notifying the customer that the order has commenced. You could attach another EC2 server instance to analyze the order for patterns and behaviors, and trigger activities based on that. So that's a really scalable way of handling parallel asynchronous processing. Even with decoupled multi-tiered web applications that use all the best practices, like DynamoDB, SQS, and EC2 to decouple layers, there are always small improvements that can be made.

So it can be common when you're decoupling to see slight delays in processing, which may not be optimal for processing something like our customer web orders. One option to improve performance immediately is to use EC2 auto scaling to scale out the middle tier instances based on the SQS queue depth. That can improve performance immediately.
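One way that could look (a boto3 sketch, with hypothetical group, queue, and policy names) is a target-tracking policy driven by the queue's ApproximateNumberOfMessagesVisible metric. Note that AWS often recommends tracking a backlog-per-instance metric you publish yourself, so treat this as a simplified illustration.

```python
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="middle-tier-asg",  # hypothetical group name
    PolicyName="scale-on-queue-depth",       # hypothetical policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        # Scale the middle tier based on how deep the SQS queue is.
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "orders-queue"}],  # hypothetical
            "Statistic": "Average",
        },
        "TargetValue": 100.0,  # aim for roughly 100 visible messages
    },
)
```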

Now, if you are experiencing a very high volume of incoming messages, let's say 100,000 a second or more, and there are multiple microservices consuming the messages, then it could be an option to implement Kinesis Data Streams, since multiple consumers can read from the same stream. It may be more performant to have the microservices read and process messages from the Kinesis stream. If your system is suffering performance degradation due to heavy read traffic, or say people running reports, then migrating the database to RDS is always an option. It's always gonna give you a performance increase, and especially if you're experiencing a lot of read-only SQL queries, then adding a read replica can improve performance with minimal changes to your existing applications.

If you've implemented a NoSQL database like DynamoDB, then read performance can be increased by adding auto scaling to the table and adding global secondary indexes. You can also consider adding ElastiCache as another layer, as a cache. That can also improve read performance. Okay, not write performance, just read. And if you need features in that cache, then think Redis. If you just need speed, think Memcached. All right, so those are a few design patterns to remember and keep in mind.
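To make the caching layer concrete, here's a minimal cache-aside sketch against an ElastiCache for Redis endpoint; the hostname, key scheme, and query_database helper are all hypothetical.

```python
import json
import redis  # pip install redis

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)  # hypothetical endpoint

def get_product(product_id):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)             # cache hit: no database read
    product = query_database(product_id)      # hypothetical database lookup
    cache.setex(key, 300, json.dumps(product))  # keep it for 5 minutes
    return product
```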

Let's check our progress. So let's tackle a few question scenarios to get you ready for passing architecture-based questions. First question: you are in a company meeting, serving as a lead architect. Your company wants to move into the cloud by creating an environment with low latency and high throughput of up to 10 gigabits per second, so you recommend using a cluster placement group. Now, what features in your design should be in place to take full advantage of a cluster placement group? And we need to choose two options. Now, we can discount a few of these options straight away. Starting at the bottom: a VPN provides a connection between two or more points; it doesn't improve throughput or reduce latency, so we can get rid of a VPN as an option straight away. Now, Reserved Instances, as you know, are EC2 instances that we commit to for a term in exchange for a discount, so Reserved Instances do not improve network performance either. So we get rid of both of those two. We do, however, want to consider EC2 instance types that support enhanced networking, so that option looks good. That leaves the first two options, which present single or multi-AZ placement. Okay, so remember, a placement group is a logical grouping of instances within a single Availability Zone, and cluster placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. So to provide the lowest latency and the highest packets-per-second network performance for your cluster placement group in this scenario, we should be choosing an instance type that supports enhanced networking and grouping the instances in a single Availability Zone. So the correct options for this question are option A and option C.
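In code, the winning combination might look like this minimal boto3 sketch; the AMI ID and group name are hypothetical, and c5n.9xlarge stands in for any instance type with enhanced networking support.

```python
import boto3

ec2 = boto3.client("ec2")

# A cluster placement group packs instances close together in one AZ.
ec2.create_placement_group(GroupName="low-latency-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="c5n.9xlarge",       # supports enhanced networking
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "low-latency-cluster"},  # single-AZ cluster group
)
```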

Okay, onto the next question. A client needs you to design a content media server that requires low latency, high availability, durability, and access control. All right, so those are all keywords that we need to take note of. Once launched, the customer base for the server is expected to grow and become more geographically distributed. Which of the following options would be a good solution for content distribution? Here's where we need to keep thinking about the purpose of each AWS service and applying it to the correct requirement. Now, Amazon S3 is a resilient object store, so it's perfect as an origin server, right? It's not designed to cache content, so we can discount option A straight away. Using a high-performance edge cache can provide substantial improvements in latency, fault tolerance, and costs, so CloudFront should definitely be part of the solution. By using Amazon S3 as the origin server for the CloudFront distribution, you gain the advantage of fast in-network data transfer rates. So we wanna use those two together, and that discounts option B: we don't wanna use CloudFront as both the origin and the cache, that's not how it works, right? We certainly don't wanna consider option C, 'cause AWS Storage Gateway is a backup and retrieval service; it's not going to work as an origin server. And we certainly wouldn't use Amazon EC2 for caching, because Amazon EC2 is instances, right? So the best option here is option D.

Okay, next question. You are designing the compute resources for a cloud-native application that you want to host on AWS. You want to configure auto scaling groups, good, to increase the compute layer's resilience and responsiveness. However, your IT management team has one concern: they want the ability to optimize the Amazon EC2 instances within an auto scaling group as easily and with as little downtime as possible after it has been launched. Okay, so this may sound a little crazy, but often people want to do this. This is basically asking: can we change the instances at will once auto scaling has started? So what is the best way for you to meet this requirement?

Okay, now's the best time to remember that a launch configuration is a template that the auto scaling group uses to launch Amazon EC2 instances. You create the launch configuration by including information such as the Amazon Machine Image ID to use for launching the EC2 instances, the instance type, key pairs, security groups, and block device mappings, among other configuration settings. So that's what the template does for you. When you create your auto scaling group, you must associate it with a launch configuration, and you can attach only one launch configuration to an auto scaling group at any time.

Launch configurations cannot be modified; they're immutable. So if you want to change the launch configuration of your auto scaling group, you have to first create a new launch configuration and then update your auto scaling group by attaching the new launch configuration. When you attach a new launch configuration to your auto scaling group, any new instances are launched using the new configuration parameters; existing instances are not affected. Launch templates work very differently. You can have multiple versions of a launch template saved, and you can disassociate one version from an auto scaling group and replace it with another without having to delete any existing templates or stop or restart your auto scaling group. So straight away, option A won't work. Option B is close but not correct, as it talks about a launch configuration; that's a red herring. Option C is closer, but it talks about deleting the template and doesn't actually attach the template to the group, so it doesn't quite match what we want. Option D is the best option, as the template is assigned to the auto scaling group. So basically, the flow is: associate the auto scaling group with a launch template, and when you need to update instances within the auto scaling group, create a new version of the launch template and assign it to the auto scaling group. That's the correct option, option D.
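That flow, as a minimal boto3 sketch with hypothetical IDs and names:

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Create a new template version based on version 1, changing the instance type.
ec2.create_launch_template_version(
    LaunchTemplateId="lt-0123456789abcdef0",  # hypothetical template ID
    SourceVersion="1",
    LaunchTemplateData={"InstanceType": "m5.xlarge"},
)

# Point the auto scaling group at the new version; no restart needed, and
# existing instances keep running until they are replaced.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    LaunchTemplate={"LaunchTemplateId": "lt-0123456789abcdef0", "Version": "2"},
)
```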

Okay, next question. The company you work for has a PHP application which is growing quickly and also needs to be highly available. For now, there are small burstable general-purpose instances in the VPC, and you are asked to make the application elastic as well. So you implemented an auto scaling group, and you have instances in multiple Availability Zones for high availability. The application is requiring unexpected amounts of compute resources. What other steps could you take to optimize this environment? Choose two answers. The best option for this would be to decouple the application into microservices that can be scaled independently, but that's not an option in this scenario, so we have to work with what we've been given.

We are dealing with a monolithic app here, most likely a single- or two-tier design. That isn't really helping us; as far as we're told, the tiers are not decoupled to enable more granular scaling. We've already added the ability to scale out by implementing auto scaling, and we've made it more resilient by using multiple AZs. So this really comes down to vertical scaling: getting the most from the underlying hardware as best we can. We can discount a few options. Creating a second auto scaling group doesn't help performance; we already have multi-AZ working for us with the auto scaling implemented, so there's no real benefit in creating another group. Offloading read traffic is only going to work if we know read requests are the performance bottleneck. We don't know that, so that's not a good option out of the gate.

Now, a monolithic design is not wrong by any means, and easy resizing is actually one of the fundamental benefits of AWS. As your needs change, you might find that your instance is over-utilized (the instance type is too small) or under-utilized (the instance type is too large). If this is the case, you can change the size of your instance; this is known as resizing. So that's what we want to do here. Of the two options we wanna choose, the first is to launch larger instances and remove some of the smaller ones. That is definitely something we should consider. Creating a second auto scaling group for high availability won't do anything in terms of improving our performance. Creating a multi-AZ environment to offload read traffic isn't going to work either. It could if we had been given more detail about the type of performance bottleneck; if it was read requests, then adding some sort of cache or providing a read replica would be an option. Given the information that we have, option four, launch some compute-optimized instances, is the next best option we have. This is vertical scaling; we just wanna get as much out of the underlying hardware as we can.

Okay, next question. A solutions architect is designing the AWS resources for a wedding cake website that includes static elements for its cake designs, menu, and pricing, as well as dynamic elements for interactive design-your-cake tools. This website will be hosted on a two-tier application with a web tier hosted on Amazon EC2 instances and a Postgres database tier. The solutions architect needs to ensure the web tier, which requires two EC2 instances to provide service, will remain available in the event of a zonal failure. The solutions architect must also select the best storage solutions for the website's dynamic and static elements. How should the solutions architect cost-effectively design the solution to offer both high availability and optimized storage performance?

Now, we're getting into one of these kinds of questions where there's a lot of information to process, and I'm building you up slowly to get ready for these types of questions, okay? There are two things that are important here: high availability and optimized storage performance. So those two things, in the most cost-effective way. The first option: deploy two EC2 instances into separate Availability Zones behind an Application Load Balancer. All right, so that's one option. Store the static web content in Elastic Block Store. No. Well, that's the first thing that tells us this solution is wrong, because we don't wanna store static web content in EBS; that's not a good solution for what we need. The second option: deploy four EC2 instances evenly across two separate Availability Zones, okay, and attach them to an auto scaling group behind an Application Load Balancer. Okay, that's quite resilient, probably a little bit over-provisioned. And does that really give us the most cost-effective compute?

For the storage options: store the dynamic web content in Amazon RDS Postgres with a multi-AZ configuration, and store the static web content in Amazon EFS with separate mount targets in each Availability Zone. That's not a good use case for EFS, so it's not gonna serve very well in that situation. Third option: deploy three EC2 instances evenly across three separate Availability Zones, okay, I like that, and attach them to an auto scaling group behind the Application Load Balancer. All right, so of the three options so far, that looks the most cost-effective in my view, and certainly the most resilient, having three Availability Zones.

Okay, so we're using all of the resilience and high availability available to us in the AWS Cloud. And then for our storage option: store the dynamic web content in an Amazon RDS Postgres instance with a multi-AZ configuration. Yep, that's gonna be highly resilient, and cost-effective as well. Configure a CloudFront distribution, yes. CloudFront is the way we should be caching and distributing this, especially for the type of static content we have. So, configure a CloudFront distribution with an S3 bucket origin that contains the static content. Yep, that's the first real static content solution that I like out of all of these.

Okay, so that's looking pretty good. Option D: assign four EC2 instances evenly across two separate regions. Well, straight away that's giving me less availability than my last option. Attach them to an auto scaling group behind the Application Load Balancer. So that is not as highly available as option C. In terms of storage, store the dynamic web content in an Amazon Aurora database cluster. Yeah, so that means actually doing a little bit of redevelopment, so I don't know whether that's gonna be the most cost-effective design. There's nothing wrong with Postgres; there's no reason for us to move off it.

Configure a CloudFront distribution with an S3 bucket origin that contains the static content. Hmm, okay, so it's basically an Amazon Aurora database cluster versus an Amazon RDS Postgres instance with a multi-AZ configuration. Both of those database options are good, but I like the three EC2 instances spread evenly across three Availability Zones versus assigning four instances over two separate regions. In fact, that's not a very good option at all, because a load balancer doesn't work like that: we have to go across multiple Availability Zones, not multi-region. So while all four of these are testing you, option C is the correct one in my view. These are good practice questions for getting yourself in the mode for these kinds of responses.

Okay, a solutions architect is designing the network, using the IPv4 protocol, for a new three-tier application with a web tier of EC2 instances in a public subnet, an application tier on EC2 instances in a private subnet, and a large RDS MySQL database instance in a second private subnet behind an internal load balancer. The web tier will allow inbound requests using the HTTPS protocol, and the application tier should receive requests using the HTTP protocol from the local network, but must communicate with public endpoints on the internet without exposing public IP addresses. The RDS database should specifically allow both inbound and outbound traffic requests made on port 3306 from the web and application tiers, but explicitly deny all inbound and outbound traffic over other protocols and ports. So the real question here is: what VPC network components should the solutions architect configure to protect the application tier? All right, the application tier.

Okay, so our options are: first, deploy an internal load balancer between the web tier and the application tier subnets; second, deploy a NAT gateway in the public subnet containing the web tier instances; third, deploy an egress-only internet gateway associated with your VPC; and fourth, deploy an internet gateway associated with your VPC. Right, so this question requires familiarity, which you now have, with the variety of VPC network components; think back to our networking section. A NAT gateway is the right choice for an application tier using the IPv4 protocol, because it can forward messages from instances with private IP addresses and then receive the responses and direct them back to those same private IP addresses.

One other key point to mention is that NAT gateways are deployed into public subnets and then route requests from private IP addresses in private subnets. You might expect them to be deployed into a private subnet given their role with private IP addresses, but that is not the case. So the best option here is to deploy a NAT gateway in the public subnet containing the web tier instances. An internal load balancer is deployed to help direct local traffic between resources within the VPC; it cannot assist with network address translation like the NAT gateway or a NAT instance, so it wouldn't be of any use in this scenario. The best option here is option B. Now, this is quite an interesting scenario, because if this was using IPv6 instead of IPv4, you would use an egress-only internet gateway. That is deployed in front of a VPC in the place of the internet gateway, not in a public subnet like the NAT gateway. And remember, an internet gateway assigned to a subnet route table is what makes the subnet public and allows traffic to reach the public internet outside of the VPC. So NAT gateways are deployed to public subnets; egress-only internet gateways are deployed in front of a VPC, just like the internet gateway, but for IPv6, not IPv4. So look out for that.
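As a minimal boto3 sketch of option B, with all IDs hypothetical: create the NAT gateway in the public subnet, then route the private application tier's internet-bound traffic through it (the gateway takes a minute or two to become available before the route passes traffic).

```python
import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in the PUBLIC subnet, with an Elastic IP.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111",        # hypothetical public (web tier) subnet
    AllocationId="eipalloc-0bbb2222",  # hypothetical Elastic IP allocation
)

# The PRIVATE app-tier route table sends internet-bound traffic to the NAT
# gateway, so instances reach public endpoints without public IP addresses.
ec2.create_route(
    RouteTableId="rtb-0ccc3333",       # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```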

Okay, next question. A company is using Amazon Aurora MySQL as the database for a read-intensive workload, right? So we've suddenly been given a little bit more detail here: read-intensive. Great, good to know. Users in a particular region have started to report high latency, and the solutions architect has been asked to improve the database performance. Which of the following steps can the solutions architect take to improve response times in the region? Option one, decrease the max_connections parameter for each Aurora DB instance. Increase the storage of the Aurora cluster. Add an Aurora read replica in the designated region; well, straight away that's leaping out at me. Configure a multi-AZ deployment. So the best solution in this scenario is to create an Aurora read replica in the designated region to improve read scalability. Remember, the keyword here is read-intensive workload, and using read replicas is an appropriate choice for read-intensive workloads. So keep that in mind.

Aurora manages the read replication using the MySQL binlog-based replication engine, so it's all done as part of the service. Very, very easy to set up. Configuring a multi-AZ deployment would not help with this read performance, but it is something to consider to provide high availability, right? In the context of this question, though, we know that it's read-intensive, so it's not really gonna solve this particular scenario. Decreasing max_connections for each Aurora DB instance can influence performance, but the default max_connections for an instance is tuned to work well with the default settings for the buffer pool and query cache memory sizes. Since nothing like that has been indicated in this problem scenario, changing the parameter is not the appropriate action in my view. There'd be no need to increase the size of the Aurora cluster storage either: as the database grows, Aurora automatically expands the cluster volume size. That's why it's such a great service. Yeah, okay, so the best option here is option C, add an Aurora read replica in the designated region.
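For a same-region reader, the sketch below (boto3, hypothetical identifiers) adds a replica instance to an existing Aurora MySQL cluster; a cross-region replica involves creating a replica cluster instead, but the idea is the same.

```python
import boto3

rds = boto3.client("rds")

# Adding an instance to an existing Aurora cluster creates a reader that
# serves read-only queries, taking load off the writer.
rds.create_db_instance(
    DBInstanceIdentifier="orders-aurora-reader-1",  # hypothetical instance name
    DBClusterIdentifier="orders-aurora-cluster",    # hypothetical existing cluster
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
)
```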

Okay, last question. A company's container applications run in CRI-O containers hosted on on-premises virtual servers. The engineering team wants to migrate these applications to an AWS service, underline that, that supports these container types and provides open source container orchestration, so that developers can migrate applications back to the on-premises data center if necessary. Okay. The development team needs to be able to customize the host instances and gain SSH access to them as well, but it would be beneficial if the service removed some of the administration workload, such as patching and updating host instance operating systems. Okay, so we wanna get rid of some of the administrative overhead; we want a managed service, basically, something that can take over some of this provisioning. So which of the following options would a solutions architect recommend to meet these requirements? This is about choosing the right service for this particular requirement, okay? It's quite granular at this point, 'cause there are three options there that really could solve this: ECS, EKS, and AWS Fargate. The idea of moving to plain EC2 instances and running your own open source container orchestration is fine, but it doesn't remove any of the administrative workload, right? So we can discount that option. The best choice for this company, I think, is to migrate their container applications to Amazon EKS, which supports CRI-O containers with an open source container orchestration service, Kubernetes, and provides the right balance of flexibility and AWS management. So we get all the benefits of the managed service, plus we get support for the type of environment that we have. Now, AWS Fargate would not allow the same level of customization and flexibility, so while it's an option, it's not the best option. Amazon ECS offers a proprietary container orchestration service, not an open source one, and plain Amazon EC2 instances would not provide any managed reduction in host administration. Yeah, okay, so a good granular question there; again, it's just remembering the use cases for these services. We're getting better and better prepared, we're getting closer to passing this exam, let's move on.

About the Author
Students
167965
Courses
72
Learning Paths
172

Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.