Getting started with ELB
ELB: practical usage
Load Balancing refers to distributing workloads across multiple computing resources to avoid overloading some nodes while leaving others underused. When properly configured, load balancing can greatly increase an infrastructure's availability and performance, optimize throughput and response time, and generally improve overall system effectiveness.
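To make the idea concrete, here is a minimal sketch of the simplest balancing strategy, round-robin, which hands each incoming request to the next server in turn. The server names are purely illustrative, not anything AWS-specific:

```python
from itertools import cycle

# Hypothetical backend pool; the names are illustrative only.
servers = ["instance-2a", "instance-2b"]

def round_robin(pool):
    """Yield servers in turn, spreading requests evenly across the pool."""
    for server in cycle(pool):
        yield server

balancer = round_robin(servers)

# Simulate four incoming requests and see where each one lands.
assignments = [next(balancer) for _ in range(4)]
print(assignments)  # → ['instance-2a', 'instance-2b', 'instance-2a', 'instance-2b']
```

A real load balancer like ELB layers health checks and connection management on top of this basic distribution idea.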
AWS has a purpose-built load balancing service called Elastic Load Balancing (ELB). Since the effective use of load balancers is so important even to many smaller deployments, instructor David Clinton crafted this introductory course, covering the main concepts and practical applications of ELB.
Who should take this course
As this is a beginner-to-intermediate course, you should be able to grasp all the core concepts with just about any background. Nevertheless, you may want to take our introductory EC2 and VPC courses first. Our Introduction to AWS is another good, quick tutorial if you haven't yet seen it.
As a follow-up to this course, check out our ELB question set and our advanced course How to Architect with a Design for Failure Approach, where you'll get the chance to see ELB in action providing high availability and fault tolerance in a cloud architecture.
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
In this video, we're going to explore Load Balancing within a VPC, an AWS Virtual Private Cloud. There are cases where an organization might want to make a network service available only to its internal users. The kind of organization we're talking about might be quite large, with potentially thousands of users, and for greater reliability and performance, it may also need more than one server for the task.
An AWS VPC-facing Load Balancer can spread internal requests among multiple Instances just like its external, internet-facing cousin. To get started, we'll click on EC2, then on Load Balancers, and create a new Load Balancer. We'll give it a name, Internal-Balancer, and we will use the default VPC.
Again, it's very important to use the same VPC that's hosting your Instances. This time, we'll select Create an internal load balancer.
And we will listen for HTTP traffic coming on port 80 from the network, not from the internet but from our own network, from our VPC. And we will reroute that to HTTP port 80 of our Instances. Click continue.
We'll let the Health Check defaults stand as we did before, and click continue again. Again, we'll add subnets from all the Availability Zones within our region, so Instances in any of the Availability Zones in this region will be included in this Load Balancer. Continue.
Let's create a new Security Group. We will accept any HTTP traffic from anywhere and that's, again, enough for our purpose.
Click Continue. We will select both running Instances. You will remember these Instances already have the Apache web server installed, and each one has an index.html file that identifies it when you hit it with a web browser, either as Instance 2A or Instance 2B. We'll again enable Cross-Zone Load Balancing and Connection Draining, and click continue.
This time we will give it a name, and we'll call it VPC-Balancer, click continue. We'll review and click create, and close.
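For readers who prefer the command line, the console steps above can be sketched with the AWS CLI's classic ELB commands. This is a sketch under assumptions: the subnet, security-group, and instance IDs below are placeholders for your own, and the DRY_RUN guard keeps the script from calling AWS until you flip it to false:

```shell
# A sketch of the same setup using the AWS CLI's classic ELB commands.
# Set DRY_RUN=false to actually issue the calls; all resource IDs
# below are placeholders -- substitute your own.
DRY_RUN=true
LB_NAME="Internal-Balancer"
LISTENER="Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80"

if [ "$DRY_RUN" != "true" ]; then
  # --scheme internal is what makes this a VPC-facing balancer.
  aws elb create-load-balancer \
    --load-balancer-name "$LB_NAME" \
    --scheme internal \
    --listeners "$LISTENER" \
    --subnets subnet-00000001 subnet-00000002 \
    --security-groups sg-00000001

  # Register the two Apache web server Instances behind the balancer.
  aws elb register-instances-with-load-balancer \
    --load-balancer-name "$LB_NAME" \
    --instances i-00000002a i-00000002b

  # Enable Cross-Zone Load Balancing and Connection Draining,
  # matching the checkboxes selected in the wizard.
  aws elb modify-load-balancer-attributes \
    --load-balancer-name "$LB_NAME" \
    --load-balancer-attributes \
    '{"CrossZoneLoadBalancing":{"Enabled":true},"ConnectionDraining":{"Enabled":true,"Timeout":300}}'
fi
echo "$LB_NAME configured"
```

As in the console walkthrough, the one detail that distinguishes this from an ordinary balancer is the internal scheme, which gives the balancer a DNS name resolvable only from inside the VPC.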
While we're waiting for the Load Balancer to actually get up and running, let's go to EC2 Instances, and we're gonna create a third Instance. This Instance will be our Client Instance, we're going to try to access data on the two Instances that are load balanced from this Client Instance within the same network.
So, let's launch an Instance. We'll select Ubuntu, and we'll use the smallest configuration available. We will use the default VPC; again, this is very important, because the only access possible to the Load Balancer will be from within this default VPC. We'll use the same subnet.
The subnet being used by the other two Instances, in fact, is 172.30.x.x. We'll use that same subnet to make communication even easier.
We will enable the Auto Assignment of a public IP address, and we'll move on to Storage, taking the defaults there. We will call this Instance Client1 and configure the Security Group: it will accept SSH traffic from anywhere. We could actually change that to My IP, but since this Instance is going to be live for such a short time, I don't think that's a big deal. We'll review and then launch. We'll select an existing Key Pair and acknowledge that I have that Key Pair on my computer, which it turns out I do, and we'll wait for the Instance to be created.
In the meantime, let's go back to see how the Load Balancer is doing. Is it up and running yet? We can see here, zero of two Instances in service, so it's not yet ready. Let's go back to Instances and click on Client1, the new Instance we just created. It's up and running, it seems, so we should now log in there using this IP address and the Key Pair that I have on my computer.
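The client Instance launch above can likewise be sketched from the AWS CLI. Again a hedged sketch: the AMI ID, subnet ID, and security-group ID are placeholders, and the DRY_RUN guard keeps the calls from running until you substitute real values:

```shell
# Sketch of launching the client Instance with the AWS CLI.
# Set DRY_RUN=false to actually launch; the AMI, subnet, and
# security-group IDs are placeholders for your own.
DRY_RUN=true
KEY_NAME="MyKey"
INSTANCE_NAME="Client1"

if [ "$DRY_RUN" != "true" ]; then
  # t2.micro mirrors "the smallest configuration available";
  # the subnet should be the same one hosting the web servers.
  aws ec2 run-instances \
    --image-id ami-00000000 \
    --instance-type t2.micro \
    --key-name "$KEY_NAME" \
    --subnet-id subnet-00000001 \
    --security-group-ids sg-00000002 \
    --associate-public-ip-address \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=$INSTANCE_NAME}]"

  # Once it's running, log in as the default Ubuntu user:
  # ssh -i MyKey.pem ubuntu@<public-ip>
fi
echo "launch sketch for $INSTANCE_NAME ready"
```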
So, we're now about to log into my new Client Instance using SSH. We're identifying the Key Pair called MyKey, and we're logging in as ubuntu, which is always the default user for an Ubuntu EC2 Instance. So, ubuntu at that IP address, and press Enter. We'll accept the authenticity of the host, and we're in.
Let's just make sure, before we do anything with the Load Balancer, that we can actually access the two web server Instances themselves. Let's use curl. Actually, we probably can't; we probably have to install curl first.
So, sudo apt-get update, then sudo apt-get install curl; as it happens, it's already installed. Okay, so let's run curl against the IP address 172.30.1.65, which is the internal IP address of that server. You wouldn't be able to reach 172.30.1.65 from the internet, but because our Client Instance is on the same VPC as the web server Instance, we can use curl to access it through its internal address.
Let's try again, this time accessing the second web server, whose IP address is 172.30.1.168. You'll see that the message delivered by the first server we tried is "Welcome to Instance 2A," and the contents of the index.html file on the second server is "Welcome to Instance 2B." But that was by way of accessing the Instances directly. What happens now if we use the Load Balancer's URL? Previously, I copied the address of the Load Balancer from the EC2 dashboard. Now I'm going to type curl and paste that address, and we've accessed Instance 2A. Let's do it again. This time, we access Instance 2B with the very same address. So, the internal Load Balancer is obviously working exactly the way it should.
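The curl checks above can be wrapped in a short loop to watch the balancer alternate between the two Instances. The DNS name below is a made-up placeholder; paste your own balancer's address from the EC2 dashboard instead, and remember it only resolves from inside the VPC:

```shell
# Repeated requests to the internal balancer should alternate between
# "Welcome to Instance 2A" and "Welcome to Instance 2B".
# LB_DNS is a placeholder -- use your balancer's actual DNS name.
LB_DNS="internal-Internal-Balancer-0000000000.us-east-1.elb.amazonaws.com"

for i in 1 2 3 4; do
  # -s silences progress output; --max-time keeps an unresolvable
  # placeholder name from hanging; || true ignores the failure.
  curl -s --max-time 2 "http://$LB_DNS/" || true
done
```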
About the Author
David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.
Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.
Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.
His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.