The AWS exam guide states that 60% of the Solutions Architect–Associate exam questions may cover designing highly available, fault-tolerant, cost-efficient, scalable systems. This course teaches you to recognize and explain the core architecture principles of high availability, fault tolerance, and cost optimization. We then step through the core AWS components that, used together, enable highly available solutions, so you can recognize and explain how to design and monitor highly available, cost-efficient, fault-tolerant, scalable systems.
- Identify and recognize cloud architecture considerations such as functional components and effective designs
- Define best practices for planning, designing, and monitoring in the cloud
- Develop to client specifications, including pricing and cost
- Evaluate architectural trade-off decisions when building for the cloud
- Apply best practices for elasticity and scalability concepts to your builds
- Integrate with existing development environments
This course is for anyone preparing for the Solutions Architect–Associate for AWS certification exam. We assume you have some existing knowledge and familiarity with AWS, and are specifically looking to get ready to take the certification exam.
You should have basic knowledge of core AWS functionality. If you haven't already completed it, we recommend our Fundamentals of AWS Learning Path. We also recommend completing the other courses, quizzes, and labs in the Solutions Architect–Associate for AWS certification learning path.
This Course Includes:
- 11 video lectures
- Detailed overview of the AWS services that enable high availability, cost efficiency, fault tolerance, and scalability
- A focus on designing systems in preparation for the certification exam
What You'll Learn
| Lecture Group | What you'll learn |
| --- | --- |
| Designing for High Availability, Fault Tolerance and Cost Efficiency | How to combine AWS services together to create highly available, cost-efficient, fault-tolerant systems. |
| Designing for Business Continuity | How to recognize and explain Recovery Time Objectives and Recovery Point Objectives, and how to recognize and implement AWS solution designs to meet common RTO/RPO objectives. |
| Ten AWS Services That Enable High Availability | Regions and Availability Zones, VPCs, ELB, SQS, EC2, Route53, EIP, CloudWatch, and Auto Scaling |
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Okay, it's time to review number seven on our high availability top 10: the Amazon Elastic Load Balancer. There are currently three types of load balancer you can deploy, the Classic, the Application, and the Network Load Balancers, but more on these later. Let's first cover the basics of what the ELB does for us in the context of the Solutions Architect–Associate certification. The Amazon Elastic Load Balancer is an effective way to increase the availability of a system.
You get improved fault tolerance by placing your compute instances behind an Elastic Load Balancer, as it can automatically balance traffic across multiple instances and multiple Availability Zones to ensure that only healthy EC2 instances receive traffic. Most importantly for the EC2 instances, Elastic Load Balancers can balance load across multiple Availability Zones by using cross-zone load balancing.

Now, the Elastic Load Balancer is a managed service, so availability and scalability are managed for us, just as with Amazon S3 and the Amazon Simple Queue Service. An Elastic Load Balancer can be internal or external facing, and load balancers ensure that requests are distributed equally to your backend instances regardless of the Availability Zone in which they are located. So, when combined with the Elastic Load Balancer's built-in fault tolerance, the Elastic Load Balancing service can keep your application running in spite of any issues within a single Availability Zone. Applications can take advantage of this to become fault tolerant and self-healing. Okay, this is really important: the Elastic Load Balancer does not terminate or start instances.
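The value of cross-zone load balancing can be illustrated with a toy round-robin; this is a sketch only, with made-up instance IDs and zone names, not an AWS API call:

```python
from itertools import cycle

# Toy illustration of cross-zone load balancing. The instance IDs and
# Availability Zone names below are made up for the example.
instances = {
    "us-east-1a": ["i-a1"],                  # lightly populated zone
    "us-east-1b": ["i-b1", "i-b2", "i-b3"],  # heavily populated zone
}

# Without cross-zone load balancing, each AZ receives an equal share of
# traffic, so the lone instance in us-east-1a handles as much load as the
# other three combined. With cross-zone enabled, the ELB spreads requests
# across the whole fleet regardless of zone:
fleet = [i for az_instances in instances.values() for i in az_instances]
rr = cycle(fleet)
first_requests = [next(rr) for _ in range(8)]
print(first_requests)
# ['i-a1', 'i-b1', 'i-b2', 'i-b3', 'i-a1', 'i-b1', 'i-b2', 'i-b3']
```

With cross-zone enabled, each instance sees the same share of traffic (two of the eight requests each) even though the zones are unevenly populated.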
The Elastic Load Balancer does not manage any actual scaling; that is done by Auto Scaling. An Elastic Load Balancer detects the health of an instance by listening on a specified port, and if the load balancer does not receive confirmation through that port that an instance is running, then the load balancer will direct traffic to another instance.

Cross-zone load balancing can also reduce the likelihood of client caching of DNS information resulting in requests being distributed unevenly. However, a sticky session may be something that you do want to support. The Elastic Load Balancer supports the ability to stick user sessions to specific EC2 instances using cookies. Traffic will be routed to the same instances as the user continues to access your application. Sticky sessions are one possible way to maintain client state across a fleet of servers where session data is not being managed by something like ElastiCache or, say, DynamoDB.

When the Elastic Load Balancer detects unhealthy EC2 instances, it no longer routes traffic to those unhealthy instances. If all of your EC2 instances in a particular Availability Zone are unhealthy, but you have set up EC2 instances in multiple Availability Zones, Elastic Load Balancing will route traffic to your healthy EC2 instances in those other zones.

Another benefit of load balancers is that they can manage your Secure Sockets Layer connections. The Classic and Application Load Balancers improve availability and durability by allowing you to offload SSL. So, the Elastic Load Balancer supports SSL termination, including offloading SSL decryption, management of the SSL certificate (which you can do from inside your load balancer using AWS Certificate Manager), and encryption to backend instances using optional public key authentication should you require it.
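The health-check behaviour described above is configurable. As a hedged sketch, here is the shape of the parameters a Classic Load Balancer health check takes, matching what boto3's `elb` client `configure_health_check` call expects; the target path, name, and threshold values are illustrative assumptions, and no live AWS call is made:

```python
# Sketch of a Classic Load Balancer health-check configuration, in the
# shape boto3's elb.configure_health_check(LoadBalancerName=...,
# HealthCheck=...) expects. Values here are illustrative assumptions.
health_check = {
    "Target": "HTTP:80/healthz",  # protocol:port/path the ELB probes (hypothetical path)
    "Interval": 30,               # seconds between probes
    "Timeout": 5,                 # seconds to wait for a response
    "UnhealthyThreshold": 2,      # consecutive failures before marking unhealthy
    "HealthyThreshold": 3,        # consecutive successes before marking healthy again
}

def is_valid_health_check(hc: dict) -> bool:
    """Basic sanity check AWS also enforces: the response timeout must be
    shorter than the probe interval, and thresholds must be at least 2."""
    return (hc["Timeout"] < hc["Interval"]
            and hc["UnhealthyThreshold"] >= 2
            and hc["HealthyThreshold"] >= 2)

print(is_valid_health_check(health_check))  # True
```

Note that these parameters only control when the load balancer stops sending traffic to an instance; replacing the instance remains Auto Scaling's job.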
So, the ELBs as a family support HTTP, HTTPS, TCP, and SSL, and an HTTPS request uses the SSL protocol to establish a secure connection over the HTTP layer. You can also use the SSL protocol to establish secure connections over the TCP layer with the Classic and Network Load Balancers. If the front-end connection uses TCP or SSL, then your backend connections can use either TCP or SSL as well. If the front-end connection uses HTTP or HTTPS, then your backend connections can use either HTTP or HTTPS. There are currently three types of Elastic Load Balancer.
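The pairing rule above is simple to encode: layer-four front ends (TCP/SSL) pair with layer-four back ends, and layer-seven front ends (HTTP/HTTPS) pair with layer-seven back ends. A minimal sketch of that rule:

```python
# Front-end/back-end listener protocol pairing rule for the Classic Load
# Balancer, as described in the lecture: you can't mix layer 4 and layer 7.
LAYER_4 = {"TCP", "SSL"}
LAYER_7 = {"HTTP", "HTTPS"}

def valid_listener(front_end: str, back_end: str) -> bool:
    """Return True if the front-end/back-end protocol combination is allowed."""
    if front_end in LAYER_4:
        return back_end in LAYER_4
    if front_end in LAYER_7:
        return back_end in LAYER_7
    return False

print(valid_listener("HTTPS", "HTTP"))  # True: SSL terminated at the ELB
print(valid_listener("TCP", "HTTP"))    # False: mixing layer 4 and layer 7
```

The `HTTPS` front end with an `HTTP` back end is exactly the SSL-offloading pattern mentioned earlier: the load balancer decrypts, and the instances serve plain HTTP.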
The Classic Load Balancer, the Network Load Balancer, and the Application Load Balancer. So, what is the difference, I hear you ask? The Network Load Balancer is designed for connection-based load balancing, and it's quite new. Until now, if you anticipated extremely spiky workloads, or required near-instantaneous failover between regions, you'd ask AWS to provision a load balancer in preparation for the spike or surge in traffic. This meant the load balancer was pre-warmed for you by AWS, which required a few steps, like logging a support ticket. The Network Load Balancer removes some of these dependencies.

The Network Load Balancer has been designed to handle sudden and volatile traffic patterns, so it's ideal for load balancing TCP traffic. It is capable of handling millions of requests per second while maintaining low latencies, and without the need to be pre-warmed before traffic arrives. With the Network Load Balancer we have a simple load balancing service specifically designed to handle unpredictable, bursty TCP traffic. The Network Load Balancer makes available a single, static IP address per Availability Zone and operates at the connection level, which is layer four, routing inbound connections to AWS targets. Those targets can be EC2 instances, containers, or an IP address, and the Network Load Balancer is tightly integrated with other AWS managed services such as Auto Scaling, ECS (the Amazon Elastic Container Service), and CloudFormation. It also supports static and Elastic IP addresses, and load balancing to multiple ports on the same instance. Big tick.

So, the best use cases for the Network Load Balancer are when you need to seamlessly support spiky or high-volume inbound TCP requests, when you need to support a static or Elastic IP address, and if you are using container services and/or want to support more than one port on an EC2 instance.
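To make the static-IP-per-AZ point concrete, here is a sketch of the request shape boto3's `elbv2` client `create_load_balancer` call takes for a Network Load Balancer. All names and IDs are placeholders, a real call needs subnet and Elastic IP allocation IDs from your own VPC, and no AWS call is made here:

```python
# Sketch of a Network Load Balancer creation request in the shape boto3's
# elbv2.create_load_balancer(**nlb_request) expects. IDs are placeholders.
nlb_request = {
    "Name": "example-nlb",        # hypothetical name
    "Type": "network",            # "network" vs "application"
    "Scheme": "internet-facing",  # or "internal"
    # One static (Elastic) IP per Availability Zone, as the lecture notes;
    # each subnet lives in a different AZ:
    "SubnetMappings": [
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-aaaa1111"},
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-bbbb2222"},
    ],
}

# Each AZ (subnet) gets exactly one address mapping:
for mapping in nlb_request["SubnetMappings"]:
    print(mapping["SubnetId"], "->", mapping["AllocationId"])
```

The `SubnetMappings` field is where the NLB's headline feature shows up: by supplying an `AllocationId` per subnet, you pin a known Elastic IP to each zone, something the Application Load Balancer does not offer.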
The Application Load Balancer is arguably the most protocol-oriented load balancing service, because it enforces the latest SSL/TLS ciphers and protocols. It is ideal for negotiating HTTP and HTTPS requests.
The Application Load Balancer also operates at the request level, which is layer seven, and provides more advanced routing capabilities than the Classic and Network Load Balancers. Additionally, its support for host-based and path-based routing, X-Forwarded-For headers, Server Name Indication (SNI), and sticky sessions makes the Application Load Balancer ideal for balancing loads to microservices and container-based applications.

Another good reason why it's a great choice for containers: the Application Load Balancer enables load balancing across multiple ports on a single Amazon EC2 instance. This is really powerful when you are using ECS, the Elastic Container Service, as you can specify a dynamic port in the ECS task definition. This assigns an unused port on the container instance when a task is scheduled, and the ECS scheduler automatically adds the task to the load balancer using this port, which is one less thing for you to worry about. The best use cases for the Application Load Balancer are containerized applications and microservices when you don't need to support a static or Elastic IP address; if you do, then you would want to use the Network Load Balancer.

Now, that brings us to our third choice, the Classic Load Balancer, our old friend. It is still a great solution, and if you just need a simple load balancer with multiple protocol support, the Classic Load Balancer is perfect. It supports many of the same layer-four and layer-seven features as the Application Load Balancer: sticky sessions, IPv6 support, monitoring, logging, and SSL termination. Both the Classic and Application Load Balancers support offloading SSL decryption from application instances, the management of SSL certificates, and encryption to backend instances with the option of public key authentication.
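The path-based routing mentioned above is configured as listener rules. Here is a hedged sketch of one rule in the shape boto3's `elbv2` client `create_rule` call expects, plus a toy matcher showing how the path pattern behaves; the target group ARN and paths are illustrative assumptions, not real resources:

```python
import fnmatch

# Sketch of an ALB path-based routing rule in the shape boto3's
# elbv2.create_rule expects. The ARN and paths are placeholders.
rule = {
    "Conditions": [
        {"Field": "path-pattern", "Values": ["/api/*"]},  # layer-7 match
    ],
    "Priority": 10,  # lower numbers are evaluated first
    "Actions": [
        {"Type": "forward",
         "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example-api/0123456789abcdef"},
    ],
}

def matches(rule: dict, path: str) -> bool:
    """Toy matcher approximating the '/api/*' style path pattern above."""
    return any(fnmatch.fnmatch(path, value)
               for condition in rule["Conditions"]
               if condition["Field"] == "path-pattern"
               for value in condition["Values"])

print(matches(rule, "/api/users"))  # True
print(matches(rule, "/home"))       # False
```

In a microservices setup you would attach one rule per path prefix, each forwarding to a different target group, which is how a single ALB can front several services.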
So, one plus with the Classic Load Balancer is that it permits flexible cipher support, which allows you to control the ciphers and protocols the load balancer presents to clients. This makes the Classic Load Balancer a great choice if you have to use, or are limited to, a specific cipher. Best use cases for the Classic Load Balancer? Simple load balancing or flexible cipher support.
So, do all these cost the same, I hear you mutter? Costs vary per region, so always check the AWS pricing page before using or changing a load balancer. Currently, all three load balancers attract a charge for each hour or partial hour the load balancer is running, but the Application and Network Load Balancers also incur an additional charge for the number of Load Balancer Capacity Units, or LCUs, used per hour. This cost is very well explained on the AWS load balancer pricing page, and not something you really need to be an expert in for the Solutions Architect–Associate exam. The Classic Load Balancer is, as it sounds, classic and simple: it has just a single charge for each gigabyte of traffic transferred through the load balancer.

So, each load balancing use case is going to be unique, but here are a few rules of thumb I like to use when considering which one to choose. If you need to support a static or Elastic IP address, use the Network Load Balancer. If you need control over the SSL cipher, use the Classic Load Balancer. If you're using container services, and specifically the Amazon Elastic Container Service, use the Application Load Balancer or the Network Load Balancer. And if you need to support SSL offloading, use the Application Load Balancer or the Classic Load Balancer.
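The rules of thumb above can be sketched as a small decision function; the flag names are my own, not AWS terminology, and the checks are evaluated in the order the lecture lists them:

```python
# Decision sketch of the lecturer's rules of thumb for choosing a load
# balancer. Flag names are invented for this example.
def choose_load_balancer(static_ip: bool = False,
                         cipher_control: bool = False,
                         containers: bool = False,
                         ssl_offload: bool = False) -> set:
    """Return the load balancer type(s) the rules of thumb suggest."""
    if static_ip:
        return {"network"}                  # NLB: static/Elastic IP per AZ
    if cipher_control:
        return {"classic"}                  # CLB: flexible cipher support
    if containers:
        return {"application", "network"}   # both integrate with ECS
    if ssl_offload:
        return {"application", "classic"}   # both terminate/offload SSL
    return {"classic"}                      # simple load balancing

print(sorted(choose_load_balancer(static_ip=True)))   # ['network']
print(sorted(choose_load_balancer(containers=True)))  # ['application', 'network']
```

The hard requirements (static IP, cipher control) come first because only one load balancer type can satisfy each of them; the remaining cases have two viable choices.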
Okay, let's take a look at this sample question, shall we? The question is: your web application front end consists of multiple EC2 instances behind an Elastic Load Balancer. You have configured your Elastic Load Balancer to perform health checks on these EC2 instances. If an instance fails to pass health checks, which statement will be true?

Lots of keywords in there already, aren't there? The keywords we've got in this question are application front end, multiple EC2 instances, and behind an Elastic Load Balancer. Better still, we're told that we've configured the Elastic Load Balancer to perform health checks on those EC2 instances. Now, if an instance fails to pass a health check, which statement will be true?

First option: the instance is replaced automatically by the ELB. If we think back to when we were talking about health checking, remember we talked about how Elastic Load Balancers detect the health of an instance, but it's the Auto Scaling group that will add or remove any instances. So, if an Elastic Load Balancer does a health check and determines that the instance isn't healthy, then it simply reroutes traffic to another healthy instance. It doesn't do any replacing; that's done by the Auto Scaling group. So, we discount option A.

Option B: the instance gets terminated automatically by the Elastic Load Balancer. Once again, that's outside the realm of what the Elastic Load Balancer does. It really does simply check for healthy instances and route traffic to those that are sending back healthy signals. It's not responsible for stopping, starting, or terminating instances; again, that's the role of the Auto Scaling group and the Auto Scaling launch configuration, which holds all of the parameters for what that machine will be if it does get started by the Auto Scaling group. So, we'll discount option B too.

Option C: the Elastic Load Balancer stops sending traffic to the instance that failed its health check.
Now, that so far sounds like the closest option we have to what we think the Elastic Load Balancer's role is. The instance has failed a health check, and therefore the load balancer is simply going to stop sending traffic to it. So, we'll earmark that one as a possible correct answer. Let's look at option D: the instance gets quarantined by the Elastic Load Balancer for root cause analysis. Well, this is quite an interesting option. It would be fantastic if we had a service that could do that. First of all, the idea that we could quarantine instances, that would be interesting. I'm not sure how we would determine whether an instance was quarantined or not, what the performance requirement would be, or how long it would stay under quarantine. And root cause analysis is something that doesn't generally get done by software, and I don't think it's something that could realistically be done by an Elastic Load Balancer. So, once again, it comes back to: what is the Elastic Load Balancer's job?
It is simply to detect which instances in your group are healthy and then route traffic to the healthy ones. So, while it's a nice, aspirational idea to have root cause analysis done by Elastic Load Balancers, I wouldn't want to choose that as an option. I think the correct option for this sample question is option C. Okay, that concludes our Elastic Load Balancing lecture.
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.