
Design Components - EC2 and Elastic Load Balancers

Contents

  • Course Introduction
      • Introduction (2m 26s)
  • Utilizing Managed Services and Serverless Architectures to Minimize Cost
  • Decoupled Architecture
  • Amazon API Gateway
      • Advanced API Gateway (11m 29s)
  • Amazon Elastic Map Reduce
      • Introduction to EMR (1m 46s)
  • Amazon EventBridge
      • EventBridge (7m 58s)
  • Design considerations

Difficulty: Intermediate
Duration: 4h 43m
Description

This section of the AWS Certified Solutions Architect - Professional learning path introduces common AWS solution architectures relevant to the AWS Certified Solutions Architect - Professional exam and the services that support them. These services form a core component of running resilient and performant architectures. 


Learning Objectives

  • Learn how to utilize managed services and serverless architectures to minimize cost
  • Understand how to use AWS services to process streaming data
  • Discover AWS services that support mobile app development
  • Understand when to utilize serverless services within your AWS solutions
  • Learn which AWS services to use when building a decoupled architecture
Transcript

So another AWS service that we're going to use for our multi-tier design is the Elastic Compute Cloud, or EC2. These will be our instances that we'll use to run the applications for each tier. We will also be using an auto-scaling group, which holds the configuration and the rules for scaling these EC2 instances within our tiers. And we'll use Elastic Load Balancers, which provide more resilience and security in our design.

Elastic Load Balancers sit in front of an instance group. Their job is to detect whether an instance is healthy or not, and if it is healthy, to send traffic to it. Now, that's it, right? So Elastic Load Balancers just test if an instance is ready to receive requests. But they're not doctors. Elastic Load Balancers don't start up, stop, or quarantine instances, alright.

So they're not there to ascertain why an instance is unhealthy or what's wrong with it. They just want to know if it's receiving traffic, and if so, they send traffic to that instance. If an instance isn't, then they send it to the next instance. When you need to manage the volume of traffic between your tiers, think elastic load balancing. So Elastic Load Balancers are designed to scale, to handle burst requests, and they help spread requests over a fleet of instances. ELBs ascertain if an instance is healthy, and if it is, send traffic to it.
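To make that routing behaviour concrete, here's a minimal Python sketch of the idea (this is an illustration, not actual ELB internals; the instance names and the round-robin strategy are invented for the example):

```python
# Hypothetical sketch of ELB-style routing: skip instances that fail
# the health check and forward the request to the next healthy one.

class Instance:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy  # result of the last health check

class LoadBalancer:
    """Send each request to the next healthy instance in the group."""
    def __init__(self, instances):
        self.instances = instances
        self._next = 0

    def route(self):
        # Try each instance at most once per request.
        for _ in range(len(self.instances)):
            instance = self.instances[self._next]
            self._next = (self._next + 1) % len(self.instances)
            if instance.healthy:      # ready to receive requests
                return instance.name  # send the traffic here
        raise RuntimeError("no healthy instances in the target group")

fleet = [Instance("i-a"), Instance("i-b", healthy=False), Instance("i-c")]
lb = LoadBalancer(fleet)
print([lb.route() for _ in range(4)])  # unhealthy i-b is always skipped
```

Note the load balancer never tries to fix or restart `i-b`; it simply stops sending it requests, which is exactly the "not a doctor" behaviour described above.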

Now if the instance is not healthy, they send requests to the next instance; that's all they do. Okay, so there are three types of Elastic Load Balancer you need to be aware of: the Network Load Balancer, the Application Load Balancer, and the Classic Load Balancer. The Application Load Balancer is best suited for load balancing of HTTP and HTTPS traffic, and it provides advanced request routing. It's really good for microservices and containers.

So the Application Load Balancer operates at the individual request level, layer seven. So it's got quite a few smarts in there. It's very contextual; it can tell what the content of the request is. So the Application Load Balancer routes traffic to targets within your VPC based on the content of the request, and that makes it an ideal front end between application tiers, okay. Very smart routing of traffic. Now the Network Load Balancer is great for terminating inbound traffic. So ideally, it can sit between your front end, your presentation layer, and inbound internet traffic.

So it's built for super high performance. It acts very well as the gatekeeper to inbound traffic to the presentation layer of your business app. The Network Load Balancer is designed to load balance Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Transport Layer Security (TLS) traffic. The Network Load Balancer operates at the connection level, which is layer four. So it routes traffic to targets within your VPC, e.g. your front end application. It's capable of handling millions of requests per second while maintaining really low latency, and it's optimized to handle sudden and volatile traffic patterns.

We mentioned access control as a consideration between our layers, right. So a benefit of a Network Load Balancer is that it can be tightly integrated with security rules. So if you want to screen, manage, or reject inbound requests that resemble known DDoS attacks, then the Network Load Balancer blocks those by default. Now, that service is managed by AWS, so it's kept up-to-date with the latest threats and possible compromises. The other plus with a Network Load Balancer is that it maintains the latest SSL policies and versions, which means that you don't have to.

Now, the Network Load Balancer isn't free; it comes at a cost, and you have to factor that cost against the value it brings to your design. Including a Network Load Balancer in your design takes care of the SSL cipher maintenance, which is a big overhead and a big part of network security. So for any internet-facing service, as I said, the Network Load Balancer is usually a must. It directs the traffic to our presentation layer. It turns away the gatecrashers and the bad guys without ever getting overwhelmed. No matter how many people try and storm through your internet gateway, that Network Load Balancer is just sifting them out and getting rid of the ones that we don't want.

Now, the Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and the connection level, so both layer seven and layer four. Now, Classic Load Balancers are intended for applications that were built using the EC2-Classic network, which is kind of the old version of AWS. So it's kind of an exception when you might use this. You'd only really look to use the Classic Load Balancer if you're perhaps migrating an application to AWS, where you've built your front end and database tiers in AWS, maybe even using Lambda or API Gateway, but you have an application or component that's been on AWS for a long time, so it's running on EC2-Classic instances.
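As a rough summary of the three types, here's a small decision helper in Python. The rules are a simplification of the guidance above, not an official AWS API; the function name and signature are invented for illustration:

```python
# Illustrative chooser for the three Elastic Load Balancer types.
# Simplified rule of thumb, not an exhaustive AWS decision tree.

def choose_load_balancer(protocol, ec2_classic=False):
    if ec2_classic:
        # Legacy EC2-Classic network applications only.
        return "Classic Load Balancer"
    if protocol in ("HTTP", "HTTPS"):
        # Layer 7: content-based routing, microservices, containers.
        return "Application Load Balancer"
    if protocol in ("TCP", "UDP", "TLS"):
        # Layer 4: very high throughput, low latency, SSL termination.
        return "Network Load Balancer"
    raise ValueError(f"unsupported protocol: {protocol}")

print(choose_load_balancer("HTTPS"))  # Application Load Balancer
print(choose_load_balancer("UDP"))    # Network Load Balancer
```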

For a truly highly available service, we'd consider putting an Application Load Balancer between our front end tier and our logic tier. This can create an additional layer of resilience if the logic layer needs to scale up and down to meet a sudden rush of requests. So if you have an Application Load Balancer, then the presentation tier will send the inbound requests to the Application Load Balancer, and the Application Load Balancer will decide which instance of the logic tier will handle that request.

So remember, the load balancers are literally just going to test whether an instance is receiving traffic. A load balancer doesn't know exactly what's wrong with an instance; it just knows whether it's receiving traffic or not, right? If it's receiving traffic, then it sends the request to that instance. If it's not receiving requests, then it sends it to the next one in the group, okay. Now, that design gives us way more resilience and security. So the benefit of using a load balancer is that it's a managed service, so most of these things are taken care of for you. Whether you use a load balancer or not, you still need to create security groups to control access to the services that you are running, and network access control lists to control access to the environment that you're running in. So here's a common three-tier architecture for hosting a web application.

Now here we have three tiers: our web tier, our application tier, and our database tier. Now, you can have as many tiers as you want. The common pattern is public, private, and restricted, and the tiers are optional; for example, you may need only public and private tiers in a VPC. Each subnet is in a different availability zone to increase resilience. So of our three tiers, our Web tier is essentially our presentation layer, which is made up of EC2 instances located within two subnets.

Now the instances are managed in an auto-scaling group, so they can scale depending on demand or inbound traffic. Our application tier hosts our application layer, which again is spread across our two subnets in an auto-scaling group, managed by an auto-scaling configuration policy. Then we have our Data tier, and in this design, we are leveraging the Amazon Relational Database Service (RDS) for our database services. We're using Amazon Aurora as our permanent data store, and we're also using Amazon ElastiCache to cache server sessions.

Now, as a managed service, Amazon Aurora is replicated across three availability zones by default, so it is a highly resilient, persistent data store. Amazon ElastiCache is a very fast temporary data store. It's perfect for housing temporary or non-persistent data such as server sessions. Again, it's a managed service, so the provisioning is basically taken care of for us by AWS. Now, you'll notice that I've got Elastic Load Balancers between each layer of our cake. The first ELB sits in front of our Web tier, and that's our front-door security. This keeps out the riffraff and directs guests to our web tier.

Now, we could use a Network Load Balancer or an Application Load Balancer here. Both the Network Load Balancer and the Application Load Balancer are able to handle a high number of requests with minimal latency. Remember, the Network Load Balancer works at layer four, so it's great for offloading TCP and UDP traffic. Whichever load balancer you use depends on how much functionality you want from it. In this example, we've also integrated a few other AWS services.

So we're using Amazon S3 to store static images. And we're using the Amazon CloudFront content delivery network to cache content at points of presence around the internet. We're using Amazon Route 53 to manage our domain names and to direct traffic to the appropriate service in our stack. Now, I would go for the Application Load Balancer in this design, 'cause the Application Load Balancer receives requests from Route 53 and CloudFront and directs requests to hosts in the group of instances. So it's able to do more contextual routing.
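That "contextual routing" can be sketched as simple path-based rules in Python. The paths and target group names below are hypothetical, and real ALB listener rules support more conditions (host headers, query strings, etc.); this only shows the shape of the idea:

```python
# Sketch of content-based (layer 7) routing, as an ALB listener applies it.
# Rule prefixes and target group names are invented for illustration.

RULES = [
    ("/api/", "api-target-group"),       # dynamic requests -> logic tier
    ("/images/", "static-target-group"), # static assets -> static fleet
]
DEFAULT_TARGET = "web-target-group"      # everything else -> web tier

def route_request(path):
    """Return the target group for a request path, first matching rule wins."""
    for prefix, target in RULES:
        if path.startswith(prefix):
            return target
    return DEFAULT_TARGET

print(route_request("/api/orders"))  # api-target-group
print(route_request("/index.html"))  # web-target-group
```

A layer 4 Network Load Balancer couldn't do this, because at the connection level it never inspects the request path; that's the trade-off being described here.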

Now, our second Elastic Load Balancer sits between our Web tier and our Application tier, and this can be tightly integrated with security group rules so that requests are only received from the IP address range in our public subnet. So ideally, we want to divide our infrastructure into separate layers: one public and one or two private layers, using subnets. The design we want to achieve is that the public layer acts as a shield to the private layers. Anything in our public subnet is publicly accessible, i.e. it is attached to an internet gateway.
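That "only receive requests from the public subnet's range" rule can be illustrated with Python's standard ipaddress module. The CIDR blocks here are made-up example values, not ranges from the course:

```python
from ipaddress import ip_address, ip_network

# Hypothetical CIDR range for the public (web tier) subnet.
PUBLIC_SUBNET = ip_network("10.0.1.0/24")

def app_tier_allows(source_ip):
    """Mimics a security group rule that only admits the web tier's range."""
    return ip_address(source_ip) in PUBLIC_SUBNET

print(app_tier_allows("10.0.1.25"))    # True  -- comes from the web tier
print(app_tier_allows("203.0.113.9"))  # False -- comes from the internet
```

In a real VPC you'd usually reference the web tier's security group ID rather than a CIDR range, which is tighter still, but the shielding effect is the same.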

Private subnets are only accessible from inside the network. As a web application, we are looking for the most network resilience possible, so we have a few additional options included. We've integrated Amazon CloudFront with AWS WAF, which acts as a web application firewall to filter inbound traffic for denial of service attacks, known attacks, etcetera. We can also leverage AWS Shield to reduce the risk of DDoS attacks.

Now, both of these are managed services. They run on a pay-per-use basis, so if there are no budget constraints, they should be part of your design for a web application. Now, when both public and private subnets are specified, you can also include a NAT gateway. Remember, that's the Network Address Translation service, which allows instances within private subnets without a public IP address to egress out through the internet gateway by using Network Address Translation.

So when a NAT gateway is specified, you can either specify a single NAT gateway in the first public subnet, or you can specify that a NAT gateway be provisioned in each public subnet, and therefore in each availability zone. Now, I've also included VPC Flow Logs, because that gives us a lot of visibility into traffic, which is a very good way of improving security. Now, you can create your VPC manually, from the console or command line, or you can do it using CloudFormation. And CloudFormation is a very easy tool to use and by far the quickest and most efficient way to provision infrastructure at scale.
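The NAT gateway behaviour falls out of the private subnet's route table. Here's a simplified Python model of the longest-prefix-match lookup a VPC route table performs; the CIDR ranges and target names are illustrative, not real resource IDs:

```python
from ipaddress import ip_address, ip_network

# Simplified route table for a private subnet (illustrative entries only).
PRIVATE_ROUTES = [
    (ip_network("10.0.0.0/16"), "local"),        # traffic within the VPC
    (ip_network("0.0.0.0/0"), "nat-gateway"),    # everything else egresses via NAT
]

def next_hop(destination):
    """Pick the most specific (longest prefix) matching route, like a VPC does."""
    matches = [(net, target) for net, target in PRIVATE_ROUTES
               if ip_address(destination) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.2.15"))      # local       -- another subnet in the VPC
print(next_hop("93.184.216.34"))  # nat-gateway -- internet-bound traffic
```

The public subnet's route table would instead point 0.0.0.0/0 at the internet gateway, which is what makes it "public".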

About the Author

Danny has over 20 years of IT experience as a software developer, cloud engineer, and technical trainer. After attending a conference on cloud computing in 2009, he knew he wanted to build his career around what was still a very new, emerging technology at the time — and share this transformational knowledge with others. He has spoken to IT professional audiences at local, regional, and national user groups and conferences. He has delivered in-person classroom and virtual training, interactive webinars, and authored video training courses covering many different technologies, including Amazon Web Services. He currently has six active AWS certifications, including certifications at the Professional and Specialty level.