
Applying Our Knowledge - Sample Questions

Overview
Difficulty: Intermediate
Duration: 53m
Students: 386
Rating: 4.6/5

Description

Course Introduction
Domain One of the AWS Solutions Architect Associate exam guide (SAA-C02) requires us to be able to design a multi-tier architecture solution, so that is our topic for this course.
The objective of this course is to prepare you to answer questions related to this domain. We'll cover the need-to-know aspects of designing multi-tier solutions using AWS services.

Learning Objectives
By the end of this course, you will be well prepared to answer questions related to Domain One of the Solutions Architect Associate exam.

Architecture Basics 
You need to be familiar with a number of technology stacks that are common to multi-tier solution design for the Associate certification: LAMP, MEAN, serverless, and microservices are all relevant patterns to know for the exam.

What is Multi-Tier Architecture?
A business application generally needs three things: something to interact with users, often called the presentation tier; something to process those interactions, often called the logic or application tier; and somewhere to store the data from those interactions, commonly called the data tier.
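As an illustration only (not from the course), the three tiers can be sketched as separate layers in a few lines of Python. The function and variable names here are hypothetical; in a real multi-tier deployment each tier would run on separate infrastructure, with a real database and a web front end in place of plain functions:

```python
# A toy three-tier separation: each tier only talks to the tier below it.

# Data tier: somewhere to store data (an in-memory dict stands in for a database).
_db = {}

def data_save(key, value):
    _db[key] = value

def data_load(key):
    return _db.get(key)

# Logic/application tier: processes interactions, using the data tier for storage.
def register_user(username):
    if data_load(username) is not None:
        return "already registered"
    data_save(username, {"name": username})
    return "registered"

# Presentation tier: interacts with the user, delegating work to the logic tier.
def handle_request(username):
    result = register_user(username)
    return f"Hello {username}: {result}"

print(handle_request("alice"))  # first request registers the user
print(handle_request("alice"))  # second request finds the existing record
```

Because each tier only depends on the one beneath it, any tier can be swapped out or scaled independently, which is the decoupling benefit discussed next.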

When Should You Consider a Multi-Tier Design?
The key thing to remember is that the benefit of multi-tier architecture is that the tiers are decoupled, which enables each tier to be scaled up or down independently to meet demand. This burst activity is a major benefit of building applications in the cloud.

When Should We Consider Single-Tier Design?
Single tier generally implies that all your application services run on one machine or instance. A single-tier deployment is generally cost-effective and easy to manage, but speed and cost are about all it offers. Single-tier designs suit development or test environments where small teams need to work and test quickly.

Design a Multi-Tier Solution 
First we review the design of a multi-tier architecture pattern using instances and Elastic Load Balancers. Then we'll review how we could create a similar solution using serverless services or a full microservices design.

AWS Services We Use

The Virtual Private Cloud
Subnets and Availability Zones 
Auto Scaling 
Elastic Load Balancers 
Security groups and NACLs
Amazon CloudFront
AWS WAF and AWS Shield 

Serverless Design 
AWS Lambda 
Amazon API Gateway 

Microservices Design 
AWS Secrets Manager 
AWS KMS 

Sample Questions
We review sample exam questions to apply and solidify our knowledge. 

Course Summary 
Review of the content covered to help you prepare for the exam. 

Transcript

Okay, so let's tackle the question we had right at the beginning of this course and see how our understanding is now. So here's the question: a company runs a public-facing three-tier web application, right? So straight away, we're talking about a multi-tier, three-tier web application, in a VPC, great, across multiple Availability Zones, right? So it's built for high availability and resilience. Amazon EC2 instances for the application tier, running in private subnets, need to download software patches from the internet.

Okay, so that's outbound egress, right? We want to go outbound to the internet. However, the instances cannot be directly accessible from the internet, right? So straight away, what are we talking about here? We're talking about a NAT gateway, aren't we? We're not talking about the internet gateway directly, because these are instances in private subnets without public IP addresses, right? So which actions should be taken to allow the instances to download the needed patches? We're going to select two.

So first, configure a NAT gateway in a public subnet. Yes, that is exactly what we need to do. All right, first one, absolutely. Second, define a custom route table with a route to the NAT gateway for internet traffic and associate it with the private subnets for the application tier. Hmm, yes. Remember, whenever we have egress or ingress inside our VPC, we need to create a route table for that. So until we have those routes set up, there's no way that the machine can see out. So yes, that is one of our options. C, assign Elastic IP addresses to the application instances. No, because the minute that we assign an Elastic IP address, assuming there's a route to it already, traffic inbound from our internet gateway will be able to see those machines, which we don't want. D, define a custom route table with a route to the internet gateway for internet traffic and associate it with the private subnets for the application tier.

Now, that's slightly different from our option B, and we don't want a route to the internet gateway for internet traffic; we want outbound traffic to traverse the NAT gateway. So option D is not correct, because that would allow traffic inbound as well. E, configure a NAT instance in a private subnet. No, because we need our NAT instances to be able to see the internet gateway, so they have to be in a public subnet. Okay, so the NAT gateway forwards traffic from the instances in the private subnets to the internet or other AWS services, and sends the response back to the instances. After the NAT gateway is created, the route tables for the private subnets must be updated to point internet traffic to the NAT gateway.

Okay, next question. A customer relationship management (CRM) application runs, so that's all irrelevant, don't care about that. Runs on EC2 instances, care about that, in multiple Availability Zones, care about that, behind an Application Load Balancer, okay? An Application Load Balancer works at layer seven and is able to contextualize requests. So it's quite smart, and it knows what the request type is.

It's not like the Network Load Balancer, which is really just about terminating traffic at layer four. Right, layer seven, super smart, the Application Load Balancer. Good for microservices, good for containers, good for smart routing. Now, if one of these instances fails, what occurs? A, the load balancer will stop sending requests to the failed instance. Yes, because remember, this is actually not about the type of load balancer, this is about load balancers in general. So remember, the job of the load balancer, it's a first responder, right? It's not a surgeon or a doctor. It basically tests whether an instance is responding to its health checks, and if it is, it sends traffic to it. If it's not, then it just sends traffic to the next healthy instance. The load balancer does not act like a doctor, go up and check the health of the instance, take its temperature, work out what's wrong with it, and then try and fix it, all right? It doesn't do that; it just sends traffic to the next instance once that first instance stops responding. So the first option, that's true, right? How many do we have to choose here? Just one. Okay, so option B, the load balancer will terminate the failed instance. See, that's asking it to be a doctor. It is not a doctor, right? It will not terminate the failed instance; it does not have that power. Load balancers are quite simple; they just send traffic, all right? They're very good at it, very good at it, but they don't terminate instances. C, the load balancer will automatically replace the failed instance. No, no, no, no. That, again, is asking way too much of a simple service which is designed to absorb traffic and redirect it to a healthy instance, right? One that's passing its health checks. D, the load balancer will return a 504 Gateway Timeout error until the instance is replaced. No, it won't return a 504, and that's not its job either.
So basically, it'll send traffic to the next healthy instance if the failed instance isn't responding. It's certainly not going to sit there waiting until an instance is replaced. Replaced by whom? Maybe it's the Auto Scaling group configuration that manages the instances. Okay, but that's got nothing to do with the load balancer. All right, so option A is our correct one here.
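To make the first question's correct answers concrete, here is a minimal sketch of options A and B using boto3 (not part of the course). All resource IDs below are placeholders; running this requires configured AWS credentials and would create billable resources, so treat it as an illustration of the steps rather than a ready-to-run script:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs; replace with your own.
PUBLIC_SUBNET_ID = "subnet-aaaa1111"    # public subnet: has a route to the internet gateway
PRIVATE_SUBNET_IDS = ["subnet-bbbb2222", "subnet-cccc3333"]  # application tier
VPC_ID = "vpc-dddd4444"

# Answer A: create the NAT gateway in a PUBLIC subnet, backed by an Elastic IP.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId=PUBLIC_SUBNET_ID,
                             AllocationId=eip["AllocationId"])
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Answer B: a custom route table that points internet-bound traffic at the
# NAT gateway, associated with the private subnets of the application tier.
rt = ec2.create_route_table(VpcId=VPC_ID)
rt_id = rt["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id,
                 DestinationCidrBlock="0.0.0.0/0",
                 NatGatewayId=nat_id)
for subnet_id in PRIVATE_SUBNET_IDS:
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```

Note that the NAT gateway lives in the public subnet while the route table is associated with the private subnets; that is exactly the distinction options D and E get wrong.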

About the Author


Head of Content

Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Passions outside work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.
