Elastic Load Balancing
EC2 Auto Scaling
Elastic Load Balancing and EC2 Auto Scaling are widely used features within AWS that help you maintain reliability and availability and reduce costs within your environment. If you are designing, operating, or managing services within AWS, it is essential that you are familiar with ELB and Auto Scaling concepts and configuration. This course will explain and show you how to implement both and how they can work together.
By the end of this course you will:
- Understand what an elastic load balancer is and what it is used for
- Be aware of the different load balancers available to you in AWS
- Understand how ELBs handle different types of requests, including those that are encrypted
- Be able to identify the different components of ELBs
- Know how to configure ELBs
- Know when and why you might need to configure an SSL/TLS certificate
- Understand what EC2 auto scaling is
- Be able to configure auto scaling launch configurations, launch templates and auto scaling groups
- Explain why you should use ELBs and auto scaling together
This course has been created for:
- Engineers who are responsible for the day-to-day operations of maintaining and managing workloads across AWS
- Solution Architects who are designing solutions across AWS infrastructure
- Those who are looking to begin their certification journey with either the AWS Cloud Practitioner or one of the three Associate-level certifications
To get the most from this course, you should be familiar with the basic concepts of AWS and some of its core components, such as VPC and EC2.
You should also have an understanding of the AWS global infrastructure and the different components used to define it. For more information on this topic, please see our existing blog post here: https://cloudacademy.com/blog/aws-global-infrastructure/.
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Hello and welcome to this lecture focusing on the network load balancer and its configuration.
Between the ALB and the NLB, the principles of the overall process are the same: to load balance incoming traffic from a source across its configured target groups. However, whereas the ALB works at the application layer, analyzing the HTTP headers to direct traffic, the network load balancer operates at Layer 4 of the OSI model, enabling you to balance requests purely on the TCP and UDP protocols. As such, a request to open a TCP or UDP connection is load balanced across the hosts in the target group. The listener protocols supported by the NLB are TCP, TLS, and UDP.

The NLB is able to process millions of requests per second, making it a great choice if you need ultra-high performance for your application. Also, if your application logic requires a static IP address, then the NLB will need to be your choice of elastic load balancer. Unlike the application load balancer, which has cross-zone load balancing always enabled, on the NLB this can be either enabled or disabled.

When your NLB is deployed and associated with different availability zones, an NLB node is provisioned in each of those availability zones. The node then uses a flow hash algorithm, based on the protocol, source IP address, source port, destination IP address, destination port, and TCP sequence number, to select the target in that zone to process the request. Once a connection is established with a target host, that connection remains open with that target for the duration of the request. Let me now provide a demonstration on how to configure and set up a network load balancer.
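To make the flow hash idea concrete, here is a minimal Python sketch of how a deterministic hash over a connection's details can map each new flow to one target. This is illustrative only: AWS does not publish the exact hash implementation, and the function name and target addresses below are made up for the example.

```python
import hashlib

def select_target(targets, protocol, src_ip, src_port, dst_ip, dst_port, tcp_seq=0):
    """Pick a target by hashing the connection's 5-tuple plus TCP sequence number.

    Illustrative stand-in for the NLB flow hash: the same flow always
    hashes to the same target, so the connection stays pinned to one
    host for its duration.
    """
    flow = f"{protocol}|{src_ip}|{src_port}|{dst_ip}|{dst_port}|{tcp_seq}"
    digest = hashlib.sha256(flow.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(targets)
    return targets[index]

# Hypothetical targets registered in one availability zone
targets = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]

# Repeating the same flow details always selects the same target
a = select_target(targets, "TCP", "203.0.113.5", 44321, "10.0.0.1", 443)
b = select_target(targets, "TCP", "203.0.113.5", 44321, "10.0.0.1", 443)
assert a == b
```

The key property shown is determinism: because the hash input is the flow itself, no shared state is needed between load balancer nodes to keep a connection on one target.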
As you can see, I'm in the AWS Management Console. To create our network load balancer, let's go to EC2 under Compute. Then, down the left-hand side under Load Balancing, click Load Balancers, and we can see here the existing application load balancer we created before. So let's click on Create Load Balancer, and this time we're going to create a network load balancer, so click on Create.

Again, the configuration is very similar to the application load balancer. Let's firstly give it a name; let's call this DNS-NLB. This time we'll make it internal facing. For our listener, let's select the UDP protocol, and the load balancer port is port 53, which is DNS. Again, we can select the availability zones where we want our load balancer to reside. So under eu-west-1a, let me select that subnet, and under 1b, that one there.

Next: Configure Security Settings. Again, we receive this message because we're not using a secure listener, and for this demonstration that's okay. Next: Configure Routing, where we need to associate our target group. Let's create a new target group this time, and we'll call this DNS. For the target type, I shall leave it as instance. We have our port and protocol there, and health checks are set to TCP. If you wanted to, you could make changes to the advanced health check settings there.

Next, click on Register Targets. As we can see, we don't have any registered targets as yet. If I scroll down, I can see I have one instance here, so I'm going to add that to the registered list of targets. Once that's been added, click on Next: Review. Once you're happy with all your configuration settings, click on Create.

And there you have it: your network load balancer is now created. We can see here 'provisioning', which is our network load balancer, and this is the application load balancer that we created earlier. So it's a very similar process, with different ports and protocols available between the load balancers. And that's the end of this demonstration.
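The console steps above can also be scripted. Here is a hedged AWS CLI sketch of the same demonstration using `aws elbv2`; the subnet, VPC, instance IDs, and ARN placeholders are examples only and must be replaced with your own resources.

```shell
# 1. Create an internal network load balancer in two subnets
#    (subnet IDs are placeholders)
aws elbv2 create-load-balancer \
    --name DNS-NLB \
    --type network \
    --scheme internal \
    --subnets subnet-aaaa1111 subnet-bbbb2222

# 2. Create a target group for UDP port 53 with TCP health checks
#    (VPC ID is a placeholder)
aws elbv2 create-target-group \
    --name DNS \
    --protocol UDP \
    --port 53 \
    --vpc-id vpc-0123456789abcdef0 \
    --target-type instance \
    --health-check-protocol TCP

# 3. Register an instance with the target group
#    (substitute the ARN returned by create-target-group)
aws elbv2 register-targets \
    --target-group-arn <dns-target-group-arn> \
    --targets Id=i-0123456789abcdef0

# 4. Add a UDP:53 listener that forwards to the target group
#    (substitute the ARN returned by create-load-balancer)
aws elbv2 create-listener \
    --load-balancer-arn <dns-nlb-arn> \
    --protocol UDP \
    --port 53 \
    --default-actions Type=forward,TargetGroupArn=<dns-target-group-arn>
```

Note that UDP target groups require TCP (or HTTP/HTTPS) health checks, which matches the health check setting shown in the console demonstration.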
About the Author
Stuart has been working within the IT industry for two decades, covering a huge range of topic areas and technologies, from data centre and network infrastructure design to cloud architecture and implementation.
To date, Stuart has created 50+ courses relating to cloud, most within the AWS category with a heavy focus on security and compliance.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016, Stuart was awarded the ‘Expert of the Year Award 2015’ by Experts Exchange for his knowledge sharing on cloud services within the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.