
EXAM PREP - Architecture

Overview

Difficulty: Beginner
Duration: 1h 55m
Students: 18
Ratings: 5/5
Description

Domain One of the AWS Solutions Architect Associate exam guide (SAA-C02) requires us to be able to design a multi-tier architecture solution, so that is our topic for this section.
We cover the need-to-know aspects of designing multi-tier solutions using AWS services.

Want more? Try a lab playground or do a Lab Challenge!

Learning Objectives

  • Learn some of the essential services for creating multi-tier architectures on AWS, including the Simple Queue Service (SQS) and the Simple Notification Service (SNS)
  • Understand data streaming and how Amazon Kinesis can be used to stream data
  • Learn how to design a multi-tier solution on AWS, and the important aspects to take into consideration when doing so
 
Transcript

Hello, Andy here to help you get ready for acing questions on design and architecture. Let's talk design patterns. First, migration. If you're migrating from VMware, then AWS DataSync is probably your best option. If you're migrating large numbers of objects, say to S3, or EFS, or Amazon FSx for Windows File Server, then consider AWS DataSync. AWS DataSync has encryption built in, and it also gives you IAM support. You do need really good network performance to use DataSync, so you should probably consider using it with AWS Direct Connect so you've got that throughput. And if you don't have the throughput, and it's a really large volume, say over 50 terabytes, then you're better off using the AWS Snowball device.

So if you're migrating data from an Oracle or Microsoft SQL Server database, then it's a good option to consider the AWS Database Migration Service. Frankly, it doesn't tend to crop up much as an exam scenario. But just remember that the Database Migration Service helps you migrate data from any of the common database platforms, e.g. Oracle, Microsoft SQL Server, or MySQL, to Amazon RDS. So it's a very simple way to transfer schemas and to migrate the data itself; it can migrate, say, an Oracle database to Oracle on Amazon RDS in a multi-AZ deployment.

Also, always remember your storage options. Often the design requirements specify the need for maximum I/O performance. If a solution needs storage, I always say interpret that to mean persistent storage unless it specifically says the storage does not need to be kept, all right. If it doesn't need to be kept, then you can consider ephemeral storage, which is part of the EC2 instance, as that gives you a very high I/O option. Otherwise, think EBS on an enhanced networking, compute-optimized, or memory-optimized instance; that's going to be your best bet for maximum performance. And for object storage, Amazon S3 is the best durable data storage, with Glacier for your archives.

Now let's think about design patterns a bit. And remember, the benefit of multi-tiered architecture is the ability to decouple your layers so they can be independently scaled to meet demand, thereby making the system more resilient and more highly available. So if you're asked to design a high-performance service that's going to be recording a lot of events or transactions as fast as possible, then you want to consider implementing a multi-tiered design, right? If it's bursty, unpredictable traffic, where you will also need to be able to, say, look up transactions or events using an ID, then most likely you should consider using DynamoDB with global secondary indexes. That's going to give you the best performance, especially if you consider adding auto-scaling on the tables and global secondary indexes. That's likely to give you the best response for that type of data need.
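
To make that concrete, here's a minimal boto3 sketch of such a table: keyed for fast event writes, with a global secondary index so records can also be looked up by an event ID. All names are hypothetical, and it uses on-demand capacity rather than provisioned capacity with auto-scaling, just to keep the sketch short.

```python
import boto3

dynamodb = boto3.client('dynamodb')

# Hypothetical events table: partition key plus sort key for fast writes,
# and a GSI so individual events can be fetched by their ID
dynamodb.create_table(
    TableName='events',
    AttributeDefinitions=[
        {'AttributeName': 'deviceId', 'AttributeType': 'S'},
        {'AttributeName': 'timestamp', 'AttributeType': 'N'},
        {'AttributeName': 'eventId', 'AttributeType': 'S'},
    ],
    KeySchema=[
        {'AttributeName': 'deviceId', 'KeyType': 'HASH'},
        {'AttributeName': 'timestamp', 'KeyType': 'RANGE'},
    ],
    GlobalSecondaryIndexes=[{
        'IndexName': 'eventId-index',
        'KeySchema': [{'AttributeName': 'eventId', 'KeyType': 'HASH'}],
        'Projection': {'ProjectionType': 'ALL'},
    }],
    BillingMode='PAY_PER_REQUEST',  # on-demand; use provisioned + auto-scaling if preferred
)
```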

Now if you're designing a high-performance system, say for machine learning or for data crunching, and you need to access files internally, then think about Amazon FSx for Lustre. It's likely to provide the best performance for an internal high-performance file share. You can't share files from EBS, right? So Lustre's a really good, fast solution for making large volumes available to more than one instance. And remember, FSx for Lustre only runs on Linux. So if you're designing a solution for a Windows environment, you need to use Amazon FSx for Windows File Server. Amazon FSx for Windows File Server can be a really easy way to keep the same user permissions, say if you're using Active Directory or accessing files on a Microsoft Windows platform.

If you need your system to communicate on a specific port or in a specific way, say UDP for example, then think Network Load Balancer. The Network Load Balancer works at layer four. Remember, the Application Load Balancer has more features and works at layer seven. So if your solution needs to be highly available and cost efficient, then consider the Network Load Balancer for that need. Put the Network Load Balancer in front of EC2 instances in multiple AZs, and remember to set up your auto-scaling group to add or remove instances automatically, 'cause that's going to keep your costs down.
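
As a rough sketch of that layout, assuming hypothetical subnet and VPC IDs, here's a Network Load Balancer with a UDP listener forwarding to a target group; UDP is only possible at layer four, which is exactly why the NLB is the right choice.

```python
import boto3

elbv2 = boto3.client('elbv2')

# Hypothetical subnets in two different AZs
nlb = elbv2.create_load_balancer(
    Name='udp-service-nlb',
    Type='network',
    Scheme='internet-facing',
    Subnets=['subnet-0aaa1111bbb22233a', 'subnet-0ccc4444ddd55566b'],
)

tg = elbv2.create_target_group(
    Name='udp-service-targets',
    Protocol='UDP',            # layer four only; an ALB could not do this
    Port=5000,
    VpcId='vpc-0123456789abcdef0',
    TargetType='instance',
)

elbv2.create_listener(
    LoadBalancerArn=nlb['LoadBalancers'][0]['LoadBalancerArn'],
    Protocol='UDP',
    Port=5000,
    DefaultActions=[{'Type': 'forward',
                     'TargetGroupArn': tg['TargetGroups'][0]['TargetGroupArn']}],
)
```

The auto-scaling group would then register its instances with that target group, so capacity tracks demand.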

Now, a few common rearchitecting scenarios that I always come across in day-to-day architecture. Credentials stored in code, a common problem. So if you're doing a code review and you find database or instance credentials in the code, you need to get them out and put them into something more secure. The best solution for that is AWS Secrets Manager. You'll need to use perhaps a Lambda function or a similar process to retrieve the credentials from AWS Secrets Manager. If you do it that way, remember that function will need to run using an IAM role so it's running securely. So Secrets Manager is the best fit. AWS KMS wouldn't be such a good option for credentials; it's a service designed for storing encryption keys. And you wouldn't consider CloudHSM, 'cause that's a hardware appliance, so it actually attracts quite a high cost. It's well suited to hybrid environments where you need to store keys across on-prem and the cloud, so it would be overkill for managing secrets.
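
Here's a minimal sketch of that retrieval pattern, assuming a hypothetical secret name; the code would run inside a Lambda function whose execution role grants secretsmanager:GetSecretValue, so no credentials ever live in the code itself.

```python
import json

import boto3

secrets = boto3.client('secretsmanager')

def get_db_credentials():
    # The secret name is hypothetical; the Lambda execution role,
    # not a stored key, authorizes this call
    resp = secrets.get_secret_value(SecretId='prod/orders/mysql')
    creds = json.loads(resp['SecretString'])
    return creds['username'], creds['password']
```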

If you need to improve performance in delivering an application or content globally, say your service is growing and you're reaching new markets, then you really need to consider Amazon CloudFront. That's likely to be your best option for delivering to a wider audience and improving performance. Certainly, if you need to geoblock or restrict access to some content, then think CloudFront. If you need to share content with a small group of people, say another team or an office, then you can use CloudFront for that too. Using CloudFront signed URLs with a one-time token is a really good way to provide access to resources in a managed way. Certainly better than trying to set up IAM policies on S3 buckets, for example. You can also block bad actors or IP addresses using CloudFront, though you're better off using AWS WAF to manage that type of granular access control. But if you don't have WAF or CloudFront as options to restrict access, then you can block access to resources from a specific CIDR range using network access control lists. A NACL can block a range of IP addresses in the same CIDR range, so it gives you some control, just not as much as you would get from WAF or from CloudFront. If you use NACLs as a deny method, then you need to add a deny rule to the inbound table of your network access control list, and ideally give it a lower number than any other rules that you might have in there.
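
A minimal sketch of that deny rule, with a hypothetical NACL ID and an example CIDR range; the low rule number means it's evaluated before typical allow rules, which usually start at 100.

```python
import boto3

ec2 = boto3.client('ec2')

# Deny all inbound traffic from one CIDR range before any allow rules apply
ec2.create_network_acl_entry(
    NetworkAclId='acl-0123456789abcdef0',  # hypothetical
    RuleNumber=50,            # lower than the usual 100+ allow rules
    Protocol='-1',            # all protocols
    RuleAction='deny',
    Egress=False,             # this goes in the inbound table
    CidrBlock='203.0.113.0/24',
)
```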

As architects, we often get asked to meet compliance or auditing requirements. So remember that AWS CloudTrail is usually the best way to record API calls, and AWS Config is often the best way to track configuration changes inside your VPC and to your account. As architects, we also often get asked to increase or design for performance. Decoupling architectures is the best way to start with that. SQS and SNS are perfect for decoupling layers in multi-tiered architectures. If you need to process tasks, like accepting a customer order and perhaps sending a confirmation email, then using SQS and SNS is a very good way to manage that.

If you need to support parallel asynchronous processing, so for example, you need to have that order processed and that email sent at the very same time, then the SNS fan-out method is a good way of doing that. That's when you have steps that you want to have processed simultaneously, i.e. at the same time, in parallel. Fan-out means having the message published to an SNS topic, which is then replicated and pushed to multiple Amazon SQS queues. But it could be any endpoint; it doesn't have to be SQS. It could be a Lambda function or a Kinesis Data Firehose delivery stream, basically any HTTPS endpoint. So if you design a customer ordering application to publish a message to an SNS topic whenever an order is placed for a product, then any SQS queues that are subscribed to that SNS topic will receive identical notifications for that order. That's perfect for that scenario. You could have an EC2 server instance attached to one of the SQS queues which handles the actual processing of the order. You could also have an SQS queue to handle notifying the customer that the order has commenced. And you could attach another EC2 server instance to analyze the order for patterns and behaviors and trigger activities based on that. So that's a really scalable way of handling parallel asynchronous processing.
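
Here's a minimal fan-out sketch with hypothetical topic and queue names. One publish to the topic lands an identical copy of the message in every subscribed queue. (In practice each queue also needs an access policy allowing SNS to deliver to it, omitted here for brevity.)

```python
import boto3

sns = boto3.client('sns')
sqs = boto3.client('sqs')

topic_arn = sns.create_topic(Name='order-placed')['TopicArn']

# Two consumers: one processes the order, one analyzes it
for name in ('order-processing', 'order-analytics'):
    queue_url = sqs.create_queue(QueueName=name)['QueueUrl']
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=['QueueArn'])['Attributes']['QueueArn']
    sns.subscribe(TopicArn=topic_arn, Protocol='sqs', Endpoint=queue_arn)

# One publish; every subscribed queue receives the same notification
sns.publish(TopicArn=topic_arn, Message='{"orderId": "1234", "product": "cake"}')
```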

Even with decoupled multi-tier web applications, if you're using all the best practices like DynamoDB, SQS, and EC2 to decouple layers, there are always small improvements that can be made. It can be common when you're decoupling to see slight delays in processing, which may not be optimal for something like our customer web orders. One option to improve performance immediately is to use EC2 auto-scaling to scale out the middle-tier instances based on the SQS queue depth.
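
AWS's documented pattern for this is target tracking on a backlog-per-instance metric, i.e. queue depth divided by instance count, which you publish yourself as a custom CloudWatch metric. A minimal sketch of the scaling policy, with hypothetical group, namespace, and queue names:

```python
import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.put_scaling_policy(
    AutoScalingGroupName='order-workers',        # hypothetical ASG
    PolicyName='scale-on-queue-backlog',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'CustomizedMetricSpecification': {
            'Namespace': 'OrderApp',             # hypothetical custom namespace
            'MetricName': 'BacklogPerInstance',  # published separately by a scheduled job
            'Dimensions': [{'Name': 'QueueName', 'Value': 'orders'}],
            'Statistic': 'Average',
        },
        # Target of 10 messages per instance; tune to your per-message latency budget
        'TargetValue': 10.0,
    },
)
```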

Now if you are experiencing a very high volume of incoming messages, let's say a hundred thousand a second or more, and there are multiple microservices consuming the messages, then it could be an option to implement Kinesis Data Streams with enough shards for that throughput. It may be more performant to have the microservices read and process messages from the Kinesis stream. If your system is suffering performance degradation due to heavy read traffic, or say people running reports, then migrating the database to RDS is always an option. That's always going to give you a performance increase. And especially if you're experiencing a lot of read-only SQL queries, adding a read replica can improve performance with minimal changes to your existing applications.
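
Adding a read replica really is a one-call change. A sketch with hypothetical identifiers; the application then points its reporting and read-only queries at the replica's endpoint.

```python
import boto3

rds = boto3.client('rds')

# Offload heavy read traffic, e.g. reports, to a replica of the primary
rds.create_db_instance_read_replica(
    DBInstanceIdentifier='orders-db-reports',   # new replica (hypothetical name)
    SourceDBInstanceIdentifier='orders-db',     # existing primary (hypothetical name)
    DBInstanceClass='db.r5.large',
)
```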

If you've implemented a NoSQL database like DynamoDB, then read performance can be increased by adding auto-scaling to the table and adding global secondary indexes. You can also consider adding ElastiCache as a caching layer. That can also improve read performance. Not write, just read performance. And if you need features in that cache, think Redis. If you just need speed, think Memcached.
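
To make the caching layer concrete, here's a minimal read-through cache sketch using the redis-py client against a hypothetical ElastiCache Redis endpoint and a hypothetical DynamoDB table. Only reads are cached, which matches the point above.

```python
import json

import boto3
import redis

# Hypothetical ElastiCache Redis endpoint and DynamoDB table name
cache = redis.Redis(host='my-cache.abc123.0001.use1.cache.amazonaws.com', port=6379)
table = boto3.resource('dynamodb').Table('orders')

def get_order(order_id):
    # Serve from the cache when we can...
    cached = cache.get(f'order:{order_id}')
    if cached:
        return json.loads(cached)
    # ...otherwise read from DynamoDB and cache the result for five minutes
    item = table.get_item(Key={'orderId': order_id}).get('Item')
    if item:
        cache.setex(f'order:{order_id}', 300, json.dumps(item, default=str))
    return item
```

All right, so those are a few design patterns to remember and keep in mind. Let's progress.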

Most of the questions you're going to face around architecture are about choosing the right service, or combination of services, to meet the set of requirements that you're given. And you're going to find your new knowledge of compute, storage, networking, and security all plays a big part in making the right decision. So let's tackle a few question scenarios to get you ready for passing architecture-based questions.

So, first question. You are in a company meeting, serving as a lead architect. Your company wants to expand into the cloud by creating an area for low latency and high throughput of up to 10 gigabits per second, so you recommend using a cluster placement group. Now, what features in your design should be in place to take full advantage of a cluster placement group? And we need to choose two options. We can discount a few of these options straight away, starting at the bottom. A VPN provides a connection between two or more points; it doesn't improve throughput or reduce latency. So we can get rid of VPN as an option straight away. Now, reserved instances, as you know, are EC2 instances that we commit to for a term in exchange for a discount, so reserved instances do not improve network performance either. We get rid of both of those two. We do, however, want to consider EC2 instance types that support enhanced networking. So that option looks good. That leaves the first two options, which present single or multi-AZ placement. Okay, so remember, a placement group is a logical grouping of instances within a single availability zone. Cluster placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. So to provide the lowest latency and the highest packet-per-second network performance for your cluster placement group in this scenario, we should be choosing an instance type that supports enhanced networking and grouping the instances in a single availability zone. So the correct options for this question are option A and option C.
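
In boto3 terms, taking advantage of those two answers might look like this sketch, with a hypothetical AMI ID: create the cluster placement group, then launch an enhanced-networking instance type into it.

```python
import boto3

ec2 = boto3.client('ec2')

# A cluster placement group packs instances close together in one AZ
ec2.create_placement_group(GroupName='hpc-cluster', Strategy='cluster')

ec2.run_instances(
    ImageId='ami-0123456789abcdef0',  # hypothetical AMI
    InstanceType='c5n.18xlarge',      # an instance type with enhanced networking (ENA)
    MinCount=2,
    MaxCount=2,
    Placement={'GroupName': 'hpc-cluster'},
)
```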

Okay, on to the next question. A client needs you to design a content and media server that requires low latency, high availability, durability, and access control, all right? Those are all keywords that we need to take note of. Once launched, the customer base for the server is expected to grow and become more geographically distributed. Which of the following options would be a good solution for content distribution? Here's where we need to keep thinking about the purpose of each AWS service and applying it to the correct requirement. Now, Amazon S3 is a resilient object store, so it's perfect as an origin server, right? It's not designed to cache content, so we can discount option A straight away. Using a high-performance edge cache can provide substantial improvements in latency, fault tolerance, and cost. So CloudFront should definitely be part of the solution. By using Amazon S3 as the origin server for the CloudFront distribution, you gain the advantage of fast in-network data transfer rates. So we want to use those two together. That discounts option B; we don't want to use CloudFront as both the origin and the cache. That's not how it works, right? We certainly don't want to consider option C, 'cause AWS Storage Gateway is a hybrid storage service for connecting on-premises environments to AWS storage; it's not going to work as an origin server. And we certainly wouldn't use Amazon EC2 for caching, because Amazon EC2 gives you instances, not a managed cache, right? So the best option here is option D.
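
A sketch of option D in boto3, with a hypothetical bucket name: S3 holds the content as the origin, and CloudFront caches it at the edge.

```python
import boto3

cloudfront = boto3.client('cloudfront')

cloudfront.create_distribution(DistributionConfig={
    'CallerReference': 'media-server-2021-06-01',  # any unique string
    'Comment': 'Media distribution with an S3 origin',
    'Enabled': True,
    'Origins': {'Quantity': 1, 'Items': [{
        'Id': 's3-media-origin',
        'DomainName': 'media-content-example.s3.amazonaws.com',  # hypothetical bucket
        'S3OriginConfig': {'OriginAccessIdentity': ''},
    }]},
    'DefaultCacheBehavior': {
        'TargetOriginId': 's3-media-origin',
        'ViewerProtocolPolicy': 'redirect-to-https',
        'ForwardedValues': {'QueryString': False, 'Cookies': {'Forward': 'none'}},
        'MinTTL': 0,
    },
})
```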

Okay, next question. You are designing the compute resources for a cloud-native application that you want to host on AWS. You want to configure auto-scaling groups. Good. To increase the compute layer's resilience and responsiveness. However, your IT management team has one concern: they want the ability to optimize the Amazon EC2 instances within an auto-scaling group, as easily and with as little downtime as possible, after it has been launched. Okay, so this may sound a little crazy, but often people want to do this. This is basically saying, can we change the auto-scaling at will, if we want, once auto-scaling has started. So what is the best way for you to meet this requirement? Now's the best time to remember that a launch configuration is a template that the auto-scaling group uses to launch Amazon EC2 instances. You create the launch configuration by including information such as the Amazon Machine Image ID to use for launching the EC2 instances, the instance type, key pairs, security groups, and block device mappings, among other configuration settings. So that's what the template does for you. When you create your auto-scaling group, you must associate it with a launch configuration, and you can attach only one launch configuration to an auto-scaling group at any time. Launch configurations cannot be modified; they're immutable. So if you want to change the launch configuration of your auto-scaling group, you have to first create a new launch configuration and then update your auto-scaling group by attaching the new launch configuration. When you attach a new launch configuration to your auto-scaling group, any new instances are launched using the new configuration parameters; existing instances are not affected. Launch templates work very differently. You can have multiple versions of a launch template saved, and you can disassociate one version from an auto-scaling group and replace it with another without having to delete any existing templates, or stop or restart your auto-scaling group. So straight away, option A won't work. Option B is close but not correct, as it talks about a launch configuration. That's a red herring. Option C is closer, but it talks about deleting the template and doesn't actually attach the template to the group, so it doesn't quite match what we want. Option D is the best option, as the new template version is assigned to the auto-scaling group. So basically the flow is: having associated the auto-scaling group with a launch template, when you need to update instances within the auto-scaling group, you create a new version of the launch template and assign it to the auto-scaling group. That's the correct option, option D.
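
The flow for option D looks roughly like this in boto3, with hypothetical template and group names: publish a new template version, then point the auto-scaling group at it. Nothing is deleted and nothing restarts.

```python
import boto3

ec2 = boto3.client('ec2')
autoscaling = boto3.client('autoscaling')

# Create a new version of the existing launch template, changing only what we need
resp = ec2.create_launch_template_version(
    LaunchTemplateName='web-tier',        # hypothetical template
    SourceVersion='1',                    # base the new version on version 1
    LaunchTemplateData={'InstanceType': 'c5.large'},
)
new_version = str(resp['LaunchTemplateVersion']['VersionNumber'])

# Assign the new version to the auto-scaling group; existing instances keep running
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName='web-asg',       # hypothetical group
    LaunchTemplate={'LaunchTemplateName': 'web-tier', 'Version': new_version},
)
```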

Okay, next question. The company you work for has a PHP application which is growing quickly and also needs to be highly available. For now, there are small burstable general-purpose instances in the VPC. You are asked to make it elastic as well, so you implemented an auto-scaling group, and you have instances in multiple availability zones for high availability. The application is requiring unexpected amounts of compute resources. So what other steps could you take to optimize this environment? Choose two answers. The best option for this would be to decouple this application into microservices that can be scaled independently, but that's not an option in the scenario. So we have to work with what we've been given.

We are dealing with a monolithic app here that is most likely a single- or two-tier design. That isn't really helping us; as far as we're told, the tiers are not decoupled to enable more granular scaling. We've already added the ability to scale out by implementing auto-scaling, and made it more resilient by using multiple AZs. This is now really down to vertically scaling the underlying hardware as best we can.

We can discount a few options. Creating a second auto-scaling group doesn't help performance; we already have multi-AZ working for us with the auto-scaling implemented, so there's no real benefit in adding another group. Offloading read traffic is only going to work if we know read requests are the performance bottleneck. We don't know that, so that's not a good option out of the gate. Now, being a monolithic design is not wrong by any means, and the ability to resize is actually one of the fundamental benefits of AWS. As your needs change, you might find that your instance is over-utilized (the instance type is too small) or under-utilized (the instance type is too large). If this is the case, you can change the size of your instance. This is known as resizing, and that's what we want to do here. So one of the two options we want to choose is to launch larger instances and remove some of the smaller ones. That is definitely something we should consider.

Creating a second auto-scaling group for high availability won't do anything in terms of improving our performance. Creating a multi-AZ environment to offload read traffic isn't going to work either. It could, if we had been given more detail about the type of performance bottleneck; if it was read requests, then adding some sort of cache, either ElastiCache or a read replica, would be an option. Given the information that we have, the option to launch some compute-optimized instances is the next best option we have. This is vertical scaling; we just want to get as much out of the underlying hardware as we can.
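
Resizing a single instance is a stop, modify, start sequence. A sketch with a hypothetical instance ID (this is for a standalone instance; in the auto-scaling scenario above you'd change the instance type in the launch template or configuration instead):

```python
import boto3

ec2 = boto3.client('ec2')
instance_id = 'i-0123456789abcdef0'  # hypothetical

# An EBS-backed instance must be stopped before its type can change
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter('instance_stopped').wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={'Value': 'c5.xlarge'},  # the larger, compute-optimized size
)
ec2.start_instances(InstanceIds=[instance_id])
```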

Okay, next question. A solutions architect is designing the AWS resources for a wedding cake website. That includes static elements for its cake designs, menu, and pricing, as well as dynamic elements for interactive "design your cake" tools. This website will be hosted on a two-tier application, with the web tier hosted on Amazon EC2 instances and a Postgres database tier. The solutions architect needs to ensure the web tier, which requires two EC2 instances to provide service, will remain available in the event of a zonal failure. The solutions architect must also select the best storage solutions for the website's dynamic and static elements. How should the solutions architect cost-effectively design the solution to offer both high availability and optimized storage performance?

Now we're getting into the kind of question where there's a lot of information to process, and I'm building you up slowly to get ready for these types of questions, okay. There are two things that are important here: high availability and optimized storage performance. We want those two things in the most cost-effective way. So, the first option: deploying two EC2 instances into separate Availability Zones behind an Application Load Balancer. All right, so that's one option. Store the static web content in an Elastic Block Store volume. No. Well, that's the first thing that tells us this solution is wrong, because we don't want to store static web content in EBS. That's not a good solution for what we need.

Second option: deploy four EC2 instances evenly in two separate availability zones. Okay. Attach them to an auto-scaling group behind an Application Load Balancer. Okay, that's quite resilient, but probably a little bit over-provisioned. And does that really give us the most cost-effective compute? For our storage options: store the dynamic web content in Amazon RDS Postgres with a multi-AZ configuration, and store the static web content in Amazon EFS with separate mount targets in each Availability Zone. That's not a good use case for EFS, so it's not going to serve very well in that situation. Third option: deploy three EC2 instances evenly in three separate Availability Zones. Okay, I like that. Attach them to an auto-scaling group behind an Application Load Balancer. All right, so of the three options so far, that looks the most cost-effective in my view. Certainly the most resilient, having three availability zones.

Okay, so we're using all of the resilience and high availability available to us in the AWS cloud. And then for our storage option: store the dynamic web content in an Amazon RDS Postgres instance with a multi-AZ configuration. Yep, that's going to be highly resilient, and cost-effective as well. Configure a CloudFront distribution. Yes, CloudFront is the way we should be caching and distributing this, especially for the type of static content we have. So, configure a CloudFront distribution with an S3 bucket origin that contains the static content. Yep. That's the first real static content solution that I like out of all of these. Okay, so that's looking pretty good.

Option D: assign four EC2 instances evenly in two separate regions. Well, straight away that's giving me less availability than my last option. Attach them to an auto-scaling group behind an Application Load Balancer. So that is not as highly available as option C. In terms of storage: store the dynamic web content in an Amazon Aurora database cluster. That means actually doing a little bit of redevelopment, so I don't know whether that's actually going to be the most cost-effective design. There's nothing wrong with Postgres; there's no reason for us to move off it.

Configure a CloudFront distribution with an S3 bucket origin that contains the static content. Okay, so it basically comes down to an Amazon Aurora database cluster versus an Amazon RDS Postgres instance with a multi-AZ configuration. Both of those database options are good. But I like the three EC2 instances spread evenly, versus assigning four instances over two separate regions. That is not, in fact, a very good option at all; an auto-scaling group behind a load balancer works across multiple availability zones within a region, not across multiple regions. So while all four of these options are worth working through, option C is the correct one in my view. These are good practice questions for getting yourself in the mode for these kinds of responses.

Okay. A solutions architect is designing the network, using the IPv4 protocol, for a new three-tier application with a web tier of EC2 instances in a public subnet, an application tier on EC2 instances in a private subnet, and a large RDS MySQL database instance in a second private subnet behind an internal load balancer. The web tier will allow inbound requests using the HTTPS protocol. The application tier should receive requests using the HTTP protocol from the local network, but must communicate with public endpoints on the internet without exposing public IP addresses. The RDS database should specifically allow both inbound and outbound requests made on port 3306 from the web and application tiers, but explicitly deny all inbound and outbound traffic over other protocols and ports.

So the real question here is: what stateless network VPC security components should the solutions architect configure to protect the application tier, all right? Application tier. Okay, so our options are: first, deploy an internal load balancer between the web tier and the application tier subnets. Deploy a NAT Gateway in the public subnet containing the web tier instances. Deploy an Egress-Only Internet Gateway associated with your VPC. And deploy an Internet Gateway associated with your VPC. Right. So this question requires familiarity, which you now have, with the variety of VPC network components. Think back to our networking section. A NAT Gateway is the right choice for an application tier using the IPv4 protocol, because it can forward messages from the instances with private IP addresses, receive the responses, and direct them back to the same private IP addresses.

One other key point to mention is that NAT Gateways are deployed into public subnets and then route requests from private IP addresses in private subnets. You might expect them to be deployed into a private subnet given their role with private IP addresses, but that is not the case. So the best option here is to deploy a NAT Gateway in the public subnet containing the web tier instances. An internal load balancer is deployed to help direct local traffic between resources within the VPC. It cannot assist in network address translation, like the NAT Gateway or a NAT instance can, so it wouldn't be of any use in this scenario. So the best option here is option B.
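
A sketch of option B with hypothetical subnet and route table IDs: the NAT Gateway goes in the public subnet, and the private subnet's route table sends internet-bound traffic through it.

```python
import boto3

ec2 = boto3.client('ec2')

# The NAT Gateway lives in the *public* subnet and needs an Elastic IP
eip = ec2.allocate_address(Domain='vpc')
natgw = ec2.create_nat_gateway(
    SubnetId='subnet-0aaa1111bbb22233a',   # hypothetical public subnet
    AllocationId=eip['AllocationId'],
)
natgw_id = natgw['NatGateway']['NatGatewayId']
ec2.get_waiter('nat_gateway_available').wait(NatGatewayIds=[natgw_id])

# The application tier's *private* route table sends 0.0.0.0/0 via the NAT Gateway
ec2.create_route(
    RouteTableId='rtb-0ccc4444ddd55566b',  # hypothetical private route table
    DestinationCidrBlock='0.0.0.0/0',
    NatGatewayId=natgw_id,
)
```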

Now, this is quite an interesting scenario, because if this was using IPv6 instead of IPv4, you would use an Egress-Only Internet Gateway. It is deployed in front of a VPC in the place of the Internet Gateway, not in a public subnet like the NAT Gateway. An internet gateway assigned to a subnet's route table is what makes the subnet public and allows traffic to reach the public internet outside of the VPC. So, NAT Gateways are deployed to public subnets; Egress-Only Internet Gateways are deployed in front of a VPC, just like the Internet Gateway, for IPv6, okay, not IPv4. So look out for that.
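
For comparison, the IPv6 equivalent is just as short. A sketch with hypothetical IDs: the egress-only internet gateway attaches to the VPC as a whole, and the private route table sends ::/0 through it.

```python
import boto3

ec2 = boto3.client('ec2')

# Attached to the VPC itself, not placed in a subnet
eigw = ec2.create_egress_only_internet_gateway(VpcId='vpc-0123456789abcdef0')
eigw_id = eigw['EgressOnlyInternetGateway']['EgressOnlyInternetGatewayId']

# Outbound-only IPv6 route for the private subnet
ec2.create_route(
    RouteTableId='rtb-0ccc4444ddd55566b',  # hypothetical private route table
    DestinationIpv6CidrBlock='::/0',
    EgressOnlyInternetGatewayId=eigw_id,
)
```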

Okay, next question. A company is using Amazon Aurora MySQL as the database for a read-intensive workload. Right, so we've suddenly been given a little bit more detail here. Read-intensive, great, good to know. Users in a particular region have started to report high latency. The solutions architect has been asked to improve the database performance. Which of the following steps can the solutions architect take to improve response times in the region?

Option one: decrease the max_connections parameter for each Aurora DB instance. Increase the storage of the Aurora cluster. Add an Aurora read replica in the designated region. Well, straight away that's leaping out at me. Configure a multi-AZ deployment. So the best solution in this scenario is to create an Aurora read replica in the designated region to improve read scalability. Remember, the keyword here is read-intensive workload, and using read replicas is an appropriate choice for read-intensive workloads. So keep that in mind. Aurora manages the read replication using MySQL's binlog-based replication engine, so it's all done as part of the service. Very, very easy to set up.

Configuring a multi-AZ deployment would not help with read performance, but it is something to consider for high availability, right. In the context of this question, though, we know that it's read-intensive, so it's not really going to solve this particular scenario. Decreasing max_connections for each Aurora DB instance can influence performance, but the default max_connections for an instance is tuned to work well with the default settings for the buffer pool and query cache memory sizes. Since nothing in the problem scenario indicates otherwise, changing this parameter is not the appropriate action, in my view. And there's no need to increase the size of the Aurora cluster storage; as the database grows, Aurora automatically expands the cluster volume size. That's why it's such a great service. Yeah, okay. So the best option here is option C: add an Aurora read replica in the designated region.
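
Adding a reader to an Aurora cluster is a single call: you create a DB instance inside the existing cluster and Aurora wires up the replication. A sketch with hypothetical identifiers (a cross-region replica follows the same idea against a cluster in the target region):

```python
import boto3

rds = boto3.client('rds')

# A new Aurora MySQL reader joins the existing cluster and serves read traffic
rds.create_db_instance(
    DBInstanceIdentifier='orders-reader-1',   # hypothetical reader name
    DBClusterIdentifier='orders-cluster',     # hypothetical existing cluster
    Engine='aurora-mysql',
    DBInstanceClass='db.r5.large',
)
```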

Okay, last question. A company's container applications run on CRI-O containers hosted on on-premises virtual servers. The engineering team wants to migrate these applications to an AWS service, underline that, that supports these container types and provides open source container orchestration, so that developers can migrate applications back to the on-premises data center if necessary. Okay. The development team needs to be able to customize the host instances and gain SSH access to them as well, but it would be beneficial if the service removes some of the administration workload, such as patching and updating host instance operating systems.

Okay, so we want to get rid of some of the administrative overhead. We want a managed service, basically, something that can take over some of this provisioning. So which of the following options would a solutions architect recommend to meet these requirements? This is about choosing the right service for this particular requirement, okay. It's quite granular at this point, 'cause there are three options there that really could solve this: ECS, EKS, and AWS Fargate. The idea of moving to plain EC2 instances and running open source container orchestration yourself, that's fine, but it doesn't remove any of the administrative workload. So we can discount option A. The best choice for this company, I think, is to migrate their container applications to Amazon EKS, which supports CRI-O containers with an open source container orchestration service, Kubernetes, and provides the right balance of flexibility and AWS management. So we get all the benefits of the managed service, plus we get support for the type of environment that we have. Now, AWS Fargate would not allow the same level of customization and flexibility, so while it's an option, it's not the best option. Amazon ECS uses a proprietary container orchestration service, so it doesn't meet the open source requirement. And plain Amazon EC2 instances would not remove any of the burden of managing the host instances. Yeah, okay. So, a good granular question there. Again, it's just remembering the use cases for these services. We're getting better and better prepared; we're getting closer to passing this exam. Let's move on.

About the Author

Andrew Larkin
Head of Content

Students: 108,809
Courses: 99
Learning Paths: 94

Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Passions outside of work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.