These study aids will help refresh your knowledge of the core concepts covered in the Solutions Architect Associate learning path.
Watch the 30-minute primer video before you go in to sit your exam.
The revision cards are included in the learning path items.
Updates
09/01/2020 - Updated Exam Primer lecture
- [Instructor] Okay, Cloud Academy Ninjas, let's review the elastic load balancer. Let's remind ourselves of what they do first. They take requests and distribute traffic across your AWS resources, usually EC2 instances. A load balancer sends a request to each registered instance on the designated protocol, port, and path at every health check interval, which is set in seconds. The elastic load balancer waits for the instance to respond within the response timeout period. If the health check exceeds the unhealthy threshold of consecutive failed responses, the load balancer takes the instance out of service. It does not, however, terminate the instance; that is done by the Auto Scaling group. If the health check meets the healthy threshold, the specified number of consecutive successful responses, the load balancer puts the instance back in service. A load balancer only sends requests to healthy EC2 instances, and it stops routing requests to unhealthy instances.

There are currently three types of elastic load balancer: the classic load balancer, the network load balancer, and the application load balancer. So what's the difference? I hear you ask. Well, the AWS application load balancer operates at Layer 7 of the OSI model, and the classic ELB operates at Layer 4 of the OSI model. So what does that mean? I hear you ask. Well, at Layer 7, the application ELB inspects application-level content, not just the IP and port, okay? So the application ELB lets you set more advanced rules and checks. Now, the application ELB is still quite new. In my view, it's unlikely to pop up as a direct question in your cert exams. However, let's just learn a bit more about it, so that you know everything there is to know about this important topic.

So, a few important features that are supported by both the application and classic ELBs. At present, you get IPv6 and IPv4 support on both the classic and the application ELBs, but only the application ELB will support a native IPv6 address in the VPC. One FYI to keep in mind here is that with an IPv6 address you need an AAAA name record, whereas with an IPv4 address you just need an A name record.

Both load balancers support sticky sessions, which are a fantastic way of making sure that requests from a client are routed to the same target. They do this using load-balancer-generated cookies, and if you enable sticky sessions, the same target can use that cookie to recover some session context. So you do need to enable cookies for sticky sessions to work.

Health checks are always important, and this is where the application load balancer provides just a little bit more detail over our classic version. Both load balancers route traffic to healthy targets, but with the application load balancer you get improved insight into the health of your application in two ways. First, the health check with the application load balancer allows you to configure the detailed HTTP response codes, from 200 to 399, that count as healthy, so your health checks allow you to monitor the health of each of the services behind your load balancer. And secondly, with the application load balancer, we get insight into traffic for each of the services running on an EC2 instance. So we do get more granularity, right? That's one of the benefits of the application load balancer. Another is connection draining. Connection draining ensures graceful handling of connections during scale-in or scale-out activities.
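To make those health check thresholds and sticky sessions a bit more concrete, here is a minimal boto3 sketch (not part of the lecture) that creates an application load balancer target group with an interval, response timeout, healthy/unhealthy thresholds, and the 200-399 matcher mentioned above, then enables load-balancer-generated cookie stickiness. The region, VPC ID, and names are placeholder assumptions.

```python
# Minimal sketch: ALB target group health check + lb_cookie stickiness.
# VPC ID, names, and region below are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Health check settings mirror the thresholds discussed above.
tg = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",      # placeholder
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=30,      # health check interval (seconds)
    HealthCheckTimeoutSeconds=5,        # response timeout
    HealthyThresholdCount=3,            # consecutive successes to go back in service
    UnhealthyThresholdCount=2,          # consecutive failures to go out of service
    Matcher={"HttpCode": "200-399"},    # the detailed response-code range mentioned above
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Sticky sessions via load-balancer-generated cookies.
elbv2.modify_target_group_attributes(
    TargetGroupArn=tg_arn,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)
```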
And remember that the default connection draining timeout is 300 seconds, which is five minutes. So when connection draining is enabled, the ELB will stop sending requests to a deregistered or unhealthy instance, and it will attempt to complete any in-flight requests until the connection draining timeout period is reached, which, unless you set it to something else, is the default of 300 seconds, five minutes.

Okay, the application load balancer also provides support for containers. You can configure your ALB to balance requests across multiple ports on a single EC2 instance, which is a big difference over the classic load balancer. ECS, the Elastic Container Service, allows you to specify a dynamic port in the container service task definition, so you can specify a unique port for use with your EC2 instance, and ECS will automatically register the task with the ELB using this port.

You can also do content-based routing: if your application is made up of individual services, the application load balancer can route a request to a service based on the content of that request. And it's got HTTP/2 support. That's the latest version of the Hypertext Transfer Protocol, and it uses a single multiplexed connection, which allows multiple requests to be sent over the same connection. So that speeds up your connections and your page download times. Another benefit of HTTP/2 is that it compresses the header data before sending it out in a binary format, which speeds up the display times of complex style sheets and pages. It's very, very beneficial. You also get WebSocket support, which allows a server to exchange real-time messages with end users without the end user having to poll the server for an update.

There are a few security features which come with the application load balancer. If you're creating instances within the VPC, you can manage the security groups associated with your elastic load balancer, which provides additional network and security options. With both classic and application load balancers you can configure an elastic load balancer to be internet facing, or create a load balancer without public IP addresses to serve as an internal, non-internet-facing load balancer. With the application load balancer you get Layer 7 load balancing, which is a benefit, and you can load balance HTTP and HTTPS applications using Layer 7-specific features such as the X-Forwarded-For header on requests, which can give you a bit more granularity on how your HTTP and HTTPS sessions are handled.

Both load balancers provide HTTPS support, and of course you get the benefit of being able to terminate SSL connections on the load balancer, which adds another layer of security for you, or you can pass those HTTPS requests through to your back-end instances. You need an X.509 certificate for HTTPS or SSL connections, and the HTTPS listener does not support client-side SSL certificates. If you have configured an HTTPS listener on an ELB without a security policy defined for negotiating the SSL connection between the ELB and the client, then the ELB will select the latest version of a security policy for you. So, if we create an ELB with three instances, the ELB will create two security groups by default: one to allow inbound and outbound requests to the ELB listener and the health check port, and one for the instances to allow inbound requests from the ELB.
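The content-based routing and HTTPS/SSL-termination points above map onto a few API calls. The following is a hedged boto3 sketch, assuming placeholder subnet IDs, security group ID, certificate ARN, and target group ARNs: it creates an internet-facing ALB, adds an HTTPS listener that terminates SSL with an explicit security policy, and adds a path-based rule that sends one service's traffic to its own target group.

```python
# Sketch: internet-facing ALB, HTTPS (SSL-terminating) listener, path-based rule.
# All IDs and ARNs are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

web_tg_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-targets/0123456789abcdef"  # placeholder
api_tg_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/api-targets/abcdef0123456789"  # placeholder

lb = elbv2.create_load_balancer(
    Name="demo-alb",
    Subnets=["subnet-aaa111", "subnet-bbb222"],   # placeholders
    SecurityGroups=["sg-0123456789abcdef0"],      # placeholder
    Scheme="internet-facing",                     # or "internal" for a non-internet-facing LB
    Type="application",
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# HTTPS listener: the X.509 certificate and the negotiation policy live here.
# If SslPolicy is omitted, a default security policy is selected for you.
listener = elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example"}],  # placeholder
    SslPolicy="ELBSecurityPolicy-2016-08",
    DefaultActions=[{"Type": "forward", "TargetGroupArn": web_tg_arn}],
)

# Content-based routing: send /api/* requests to a separate service's target group.
elbv2.create_rule(
    ListenerArn=listener["Listeners"][0]["ListenerArn"],
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": api_tg_arn}],
)
```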
An ELB SSL security policy definition requires the SSL protocols, the SSL ciphers, and the server order preference; it does not require a client order preference. Remember, ELB does not support TLS 1.3. If you do, by accident or on purpose, delete an elastic load balancer, any EC2 instances registered to it will keep running.

Listeners are very important for load balancers. A listener is a process that checks for connection requests, and each load balancer must have one or more listeners configured. Every listener is configured with a protocol and port for the front-end connection, and a protocol and port for the back-end connection, which could be your EC2 instance or your Lambda service or whatever. Now, remember that ELBs support HTTP, HTTPS, TCP, and SSL.

The settings options we have are important to remember, and they do pop up as questions from time to time. The options are the idle connection timeout, whether we want cross-zone load balancing support, whether we wish to use connection draining, what our proxy protocol is if we wish to use one, and whether we want to support sticky sessions.

Now, health checks are important, right? That's not something we can do without. The elastic load balancer supports health checks to test the status of your back-end application or service. Imagine we had created an Elastic Load Balancing load balancer listening on port 80 and registered it with a single EC2 instance, also listening on port 80. When a client makes a request to that load balancer, the load balancer will maintain two connections, not one: one to the client and one to the EC2 instance.

Just a quick note on ELB prewarming. ELBs are a managed service, so the provisioning and sizing of the ELB is done for you by AWS. Now, ELBs are very, very well run, and for 99% of use cases they'll scale to meet any spike in demand. But if you are anticipating a large spike in traffic or heavy burst activity, one of the benefits of the network load balancer is that it has been designed to deal with spiky and unpredictable TCP traffic. So if you have a site that requires that type of scalability, then look at using the network load balancer. If you do still use a classic or an application load balancer, then it's best practice to notify AWS Support in advance and request that they prewarm the load balancer for you. The best case is if you can provide your expected start and end dates for the traffic spike, ideally the expected requests per second, and ideally the total size of the requests and responses.

Okay, so that brings us to a close on elastic load balancing. Let's get into the next section.
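Before moving on, here is one more hedged boto3 sketch pulling together the classic load balancer settings called out above: a listener with front-end and back-end protocol/port, the idle connection timeout, cross-zone load balancing, connection draining with its 300-second default, and a health check. The load balancer name, subnet, and health check path are placeholder assumptions.

```python
# Sketch: classic load balancer listener, attributes, and health check.
# The name, subnet, and health check target are placeholders.
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Listener: front-end protocol/port and back-end (instance) protocol/port.
elb.create_load_balancer(
    LoadBalancerName="demo-clb",                         # placeholder
    Listeners=[{
        "Protocol": "HTTP", "LoadBalancerPort": 80,      # front-end connection
        "InstanceProtocol": "HTTP", "InstancePort": 80,  # back-end connection
    }],
    Subnets=["subnet-aaa111"],                           # placeholder
)

# The settings options mentioned above: idle connection timeout,
# cross-zone load balancing, and connection draining (300-second default).
elb.modify_load_balancer_attributes(
    LoadBalancerName="demo-clb",
    LoadBalancerAttributes={
        "ConnectionSettings": {"IdleTimeout": 60},
        "CrossZoneLoadBalancing": {"Enabled": True},
        "ConnectionDraining": {"Enabled": True, "Timeout": 300},
    },
)

# Health check against the back-end instances on port 80.
elb.configure_health_check(
    LoadBalancerName="demo-clb",
    HealthCheck={
        "Target": "HTTP:80/health",   # placeholder path
        "Interval": 30,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 3,
    },
)
```

For the application load balancer, the equivalent of classic connection draining is the target group's deregistration delay attribute (`deregistration_delay.timeout_seconds`), which also defaults to 300 seconds.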
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.