Designing for high availability, fault tolerance and cost efficiency
AWS Services That Enable High Availability
Knowledge Check Point
The AWS exam guide outlines that 60% of the Solutions Architect–Associate exam questions could be on the topic of designing highly available, fault-tolerant, cost-efficient, scalable systems. This course teaches you to recognize and explain the core architecture principles of high availability, fault tolerance, and cost optimization. We then step through the core AWS components that, used together, enable highly available solutions, so you can recognize and explain how to design and monitor highly available, cost-efficient, fault-tolerant, scalable systems.
- Identify and recognize cloud architecture considerations such as functional components and effective designs
- Define best practices for planning, designing, and monitoring in the cloud
- Develop to client specifications, including pricing and cost
- Evaluate architectural trade-off decisions when building for the cloud
- Apply best practices for elasticity and scalability concepts to your builds
- Integrate with existing development environments
This course is for anyone preparing for the Solutions Architect–Associate for AWS certification exam. We assume you have some existing knowledge and familiarity with AWS, and are specifically looking to get ready to take the certification exam.
You will need basic knowledge of core AWS functionality. If you haven't already, we recommend completing our Fundamentals of AWS Learning Path, as well as the other courses, quizzes, and labs in the Solutions Architect–Associate for AWS certification learning path.
This Course Includes:
- 11 video lectures
- Detailed overview of the AWS services that enable high availability, cost efficiency, fault tolerance, and scalability
- A focus on designing systems in preparation for the certification exam
What You'll Learn
|Lecture Group|What you'll learn|
|---|---|
|Designing for high availability, fault tolerance and cost efficiency|How to combine AWS services to create highly available, cost-efficient, fault-tolerant systems.|
|Designing for business continuity|How to recognize and explain Recovery Time Objectives and Recovery Point Objectives, and how to recognize and implement AWS solution designs to meet common RTO/RPO objectives.|
|Ten AWS Services That Enable High Availability|Regions and Availability Zones, VPCs, ELB, SQS, EC2, Route53, EIP, CloudWatch, and Auto Scaling|
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
- [Instructor] Okay, Cloud Academy Ninjas, let's review the elastic load balancer. Let's remind ourselves of what they do first. They take requests and distribute traffic across your AWS resources. Now that's usually EC2 instances. So a load balancer sends a request to each registered instance on our designated protocol, port, and path, every health check interval, which is generally measured in seconds. The elastic load balancer waits for the instance to respond within the response timeout period. If the health check exceeds the unhealthy threshold of consecutive failed responses, then the load balancer takes the instance out of service. It does not, however, terminate the instance itself; that is done by our Auto Scaling group. If the health check exceeds the healthy threshold of consecutive successful responses, then the load balancer puts the instance back in service. A load balancer only sends requests to healthy EC2 instances, and it stops routing requests to the unhealthy instances. There are currently three types of elastic load balancer: the Classic Load Balancer, the Network Load Balancer, and the Application Load Balancer. So what's the difference? I hear you ask. Well, the AWS Application Load Balancer operates at Layer 7 of the OSI model, and the Classic ELB operates at Layer 4 of the OSI model. So what does that mean? I hear you ask. Well, at Layer 7, the application ELB inspects application-level content, not just the IP and port, okay? So the application ELB lets you set more advanced rules and checks. Now, the application ELB is still quite new. In my view, it's unlikely to pop up as a direct question in your cert exams. However, let's just learn a bit more about it, so that you know everything there is to know about this important topic. So here are a few important features that are supported by both the application and classic ELBs.
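The in-service/out-of-service behavior described above can be sketched in plain Python. This is purely an illustration of the threshold logic, not AWS's implementation; the class name and threshold values are made up for the example:

```python
class HealthTracker:
    """Toy model of ELB health-check thresholds: consecutive failures take a
    target out of service, consecutive successes bring it back in service.
    The default threshold values here are illustrative, not AWS defaults."""

    def __init__(self, healthy_threshold=3, unhealthy_threshold=2):
        self.healthy_threshold = healthy_threshold      # consecutive successes needed
        self.unhealthy_threshold = unhealthy_threshold  # consecutive failures allowed
        self.in_service = True
        self._successes = 0
        self._failures = 0

    def record(self, responded_in_time: bool) -> bool:
        """Record one health-check result and return the in-service state."""
        if responded_in_time:
            self._successes += 1
            self._failures = 0
            if self._successes >= self.healthy_threshold:
                self.in_service = True
        else:
            self._failures += 1
            self._successes = 0
            if self._failures >= self.unhealthy_threshold:
                # Taken out of service -- NOT terminated; termination is the
                # Auto Scaling group's job, as noted in the lecture.
                self.in_service = False
        return self.in_service


tracker = HealthTracker()
tracker.record(False)
tracker.record(False)       # two consecutive failures -> out of service
print(tracker.in_service)   # False
for _ in range(3):
    tracker.record(True)    # three consecutive successes -> back in service
print(tracker.in_service)   # True
```

Note that a single success resets the failure count (and vice versa), which is why the lecture stresses *consecutive* responses.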
At present, you get IPv6 and IPv4 support on both the classic and the application ELBs, but only the application ELB supports a native IPv6 address in the VPC. One FYI to keep in mind here is that an IPv6 address needs an AAAA name record, whereas an IPv4 address just needs an A name record. Both load balancers support sticky sessions, which are a fantastic way of making sure that requests from a client are routed to the same target. It does this using load-balancer-generated cookies, and if you enable sticky sessions, the same target can use that cookie to recover some session context. So you do need to enable cookies for sticky sessions to work. Health checks are always important, and this is where the application load balancer provides just a little bit more detail over our classic version. Both load balancers route traffic to healthy targets, but with the application load balancer, you get improved insight into the health of your application in two ways. First, the health check with the application load balancer allows you to configure detailed success codes from 200 to 399, so your health checks allow you to monitor the health of each of the services behind your load balancer. And secondly, with the application load balancer, we get insight into traffic for each of the services running on an EC2 instance. So we do get more granularity, right? That's one of the benefits of the application load balancer. Another is connection draining. So connection draining ensures graceful handling of connections during scale-in or scale-out activities. And remember that the default connection draining timeout is 300 seconds, which is five minutes.
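As a sketch, the health-check and draining settings just described map onto target-group parameters along these lines. This is shown as a plain dict modeled on the ELBv2 `CreateTargetGroup` parameter names; the specific values chosen (interval, path, thresholds) are illustrative, not defaults you must use:

```python
# Illustrative ALB target-group health-check settings, modeled on the
# ELBv2 CreateTargetGroup API parameters. Values are example choices.
health_check = {
    "HealthCheckProtocol": "HTTP",
    "HealthCheckPort": "traffic-port",      # check on the port receiving traffic
    "HealthCheckPath": "/health",           # hypothetical health endpoint
    "HealthCheckIntervalSeconds": 30,       # how often the check runs
    "HealthCheckTimeoutSeconds": 5,         # response timeout period
    "HealthyThresholdCount": 3,             # consecutive successes -> in service
    "UnhealthyThresholdCount": 2,           # consecutive failures -> out of service
    "Matcher": {"HttpCode": "200-399"},     # success codes the check accepts
}

# Connection draining (called "deregistration delay" on ALB target groups)
# defaults to 300 seconds -- the five minutes mentioned in the lecture.
deregistration_delay_seconds = 300
```

The `Matcher` range is what lets an ALB health check accept any code from 200 to 399, the extra granularity the lecture calls out over the classic ELB.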
So when connection draining is enabled, the ELB will stop sending requests to a deregistered or unhealthy instance, and it will attempt to complete any in-flight requests until the connection draining timeout period is reached, which stays at the default of 300 seconds, five minutes, unless you change it. Okay, the application load balancer also provides support for containers. So you can configure your ALB to balance requests across multiple ports on a single EC2 instance, which is a big difference over the classic load balancer. ECS, the Elastic Container Service, allows you to specify a dynamic port in the container service task definition. So ECS can pick an unused port on the EC2 instance for the task, and the container service will register the task with the ELB using this port. You can also do content-based routing, so if your application is made up of individual services, the application load balancer can route a request to a service based on the content of that request. And it's got HTTP/2 support. HTTP/2 is the latest version of the Hypertext Transfer Protocol, and it uses a single multiplexed connection, which allows multiple requests to be sent over the same connection. So that speeds up your connections and your page download times. Another benefit of HTTP/2 is that it compresses the header data before sending it out in a binary format, which speeds up the display times of complex style sheets and pages. It's very, very beneficial. You also get WebSocket support, which allows a server to exchange real-time messages with end users without the end user having to poll the server for an update. There are a few security features which come with the application load balancer. So, if you're creating instances within the VPC, you can manage the security groups associated with your elastic load balancer, which provides additional network and security options.
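Content-based routing as described can be sketched as a simple rule match over the request path. The rules, paths, and target-group names below are made up for illustration; a real ALB evaluates listener rules in priority order and forwards to a target group, which this toy router mimics:

```python
import fnmatch

# Toy path-based router in the spirit of ALB listener rules.
# Patterns and target-group names are hypothetical examples.
rules = [
    ("/api/*",    "api-target-group"),
    ("/images/*", "images-target-group"),
]
default_target = "default-target-group"   # the listener's default action

def route(path: str) -> str:
    """Return the target group for a request path, first matching rule wins."""
    for pattern, target in rules:
        if fnmatch.fnmatch(path, pattern):
            return target
    return default_target

print(route("/api/users"))    # api-target-group
print(route("/index.html"))   # default-target-group
```

This is the Layer 7 capability in action: the router decides based on request content (the path), something a Layer 4 classic ELB cannot see.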
So with both classic and application load balancers you can configure an elastic load balancer to be internet facing, or create a load balancer without public IP addresses to serve as an internal, non-internet-facing load balancer. With the application load balancer you get Layer 7 load balancing, which is a benefit. And you can load balance HTTP and HTTPS applications using Layer 7-specific features such as the X-Forwarded-For header on requests, which can give you a bit more granularity in how your HTTP sessions are handled. Both load balancers provide HTTPS support, and of course you get the benefit of being able to terminate SSL connections on the load balancer, which adds another layer of security for you. Or you can pass those HTTPS requests through to your back-end instances. You need an X.509 certificate for HTTPS or SSL connections, and the HTTPS listener does not support client-side SSL certificates. If you have configured an HTTPS listener on an ELB without a security policy defined for negotiating the SSL connection between the ELB and the client, then the ELB will select the latest version of a security policy for you. So, if we create an ELB with three instances, the ELB will create two security groups by default: one to allow inbound and outbound requests to the ELB listener and the health check port, and one for the instances, to allow inbound requests from the ELB. An ELB SSL security policy definition requires the SSL protocols, the SSL ciphers, and the server order preference; it does not require a client order preference. Remember, ELB does not support TLS 1.3. If you do, by accident or on purpose, delete an elastic load balancer, any of the EC2 instances registered to it will keep running. Listeners are very important for load balancers. A listener is a process that checks for connection requests, and each load balancer must have one or more listeners configured.
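The three required pieces of an SSL security policy mentioned above can be sketched as data. The protocol and cipher names below are examples, not a recommended policy; the point is which attributes the policy definition needs and which it doesn't:

```python
# Illustrative custom SSL negotiation policy for a classic ELB.
# Protocol and cipher choices here are examples only.
ssl_policy = {
    "protocols": ["Protocol-TLSv1.1", "Protocol-TLSv1.2"],   # SSL/TLS protocols
    "ciphers": ["ECDHE-RSA-AES128-GCM-SHA256"],              # permitted ciphers
    "server_order_preference": True,   # server's cipher order wins the negotiation
}

# There is no client order preference attribute -- the policy only needs
# the three items above, as the lecture notes.
assert "client_order_preference" not in ssl_policy
```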
So every listener is configured with a protocol and a port for the front-end connection, and a protocol and a port for the back-end connection, which could be your EC2 instance or your Lambda service or whatever. Now remember, ELB supports HTTP, HTTPS, TCP, and SSL. The listener settings are important to remember, and they do pop up as questions from time to time. So the options we have are the idle connection timeout, whether we want cross-zone load balancing support, whether we wish to use connection draining, what our proxy protocol is if we wish to use one, and whether we want to support sticky sessions. Now health checks are important, right? That's not something we can do without. So the elastic load balancer supports health checks to test the status of your back-end application or service. So imagine we had created an Elastic Load Balancing load balancer listening on port 80, and registered it with a single EC2 instance, also listening on port 80. When a client makes a request to that load balancer, the load balancer maintains two connections, not one, two connections: one to the client and one to the EC2 instance. Just a quick note on ELB prewarming. ELBs are a managed service, so the provisioning and sizing of the ELB is done for you by AWS. Now ELBs are very, very well run, and for 99% of use cases they'll scale to meet any spike in demand. But if you are anticipating a large spike in traffic or heavy burst activity, one of the benefits of the network load balancer is that it has been designed to deal with spiky and unpredictable TCP traffic. So if you have a site that requires that type of scalability, then look at using the network load balancer. If you do still use a classic or an application load balancer, then it's best practice to contact AWS Support in advance and request that they prewarm the load balancer for you.
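The front-end/back-end pairing a listener defines can be sketched as data. The ports and protocols chosen here are examples of a common setup (HTTPS terminated at the load balancer, plain HTTP to the instance), not the only valid combination:

```python
# Illustrative classic ELB listener: one front-end and one back-end pairing.
listener = {
    "front_end": {"protocol": "HTTPS", "port": 443},  # client-facing side
    "back_end":  {"protocol": "HTTP",  "port": 80},   # instance-facing side
}

# Listener protocols supported, per the lecture.
supported_protocols = {"HTTP", "HTTPS", "TCP", "SSL"}

# Both sides of a listener must use a supported protocol.
assert listener["front_end"]["protocol"] in supported_protocols
assert listener["back_end"]["protocol"] in supported_protocols
```

This pairing is also why the load balancer maintains two connections per request: one on the front-end side to the client, and one on the back-end side to the instance.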
The best approach is to provide your expected start and end dates for the traffic spike, and ideally the expected request rate per second and the total size of a typical request and response. Okay, so that brings us to a close on elastic load balancing. Let's get into the next section.
About the Author
Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. His passions outside work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.