DEMO: Deploying a Load Balancer
Difficulty: Intermediate
Duration: 1h 23m
Students: 5774
Ratings: 4.8/5
Description

This course will provide you with a foundational understanding of the different ways you can load balance traffic in Microsoft Azure. It includes guided walk-throughs from the Azure platform to give you a practical understanding of how to implement load balancing in your Azure environments.

We start by introducing the different types of load balancers, their components, and their use cases. You'll learn how to deploy a load balancer on Azure. Then we'll dive into Application Gateway and you'll learn about its features and components. You'll also learn about Azure Front Door and how to create a Front Door instance.

We'll then take a look at Web Application Firewall, when it's used, and how to use it in conjunction with Application Gateway, Azure Front Door, and Azure CDN. Finally, you'll learn about Traffic Manager, how it works, and when to use it, as well as how to create a Traffic Manager profile.

Learning Objectives

  • Get a solid understanding of load balancing on Azure
  • Deploy a load balancer
  • Understand the features and components of Application Gateway and how to deploy it
  • Learn about Azure Front Door and how to create a Front Door instance
  • Learn about Web Application Firewall and how to deploy it on Application Gateway
  • Learn how to use Traffic Manager and how to create a Traffic Manager profile

Intended Audience

This course is intended for those who wish to learn about the different ways of performing load balancing in Azure.

Prerequisites

To get the most out of this course, you should have a basic understanding of the Azure platform.

Transcript

Welcome back. In this demonstration, I want to show you how to deploy a basic load balancer that load balances traffic across two VMs I have already deployed. On the screen, you can see I'm logged into my Azure portal, and we have two VMs here, VM1 and VM2. Both VMs are deployed into an availability set called AvailSet.

Now, what I've done on each of these VMs is install IIS, since they're both Windows Server 2019 machines. So what we'll do is deploy a basic load balancer and load balance port 80, or HTTP, across both of these VMs. To get started, we'll create a new resource. To do that, we'll select the hamburger menu and click Create a resource. Then we'll search the marketplace for load balancer and select Load Balancer from the list. From here, we can read a little bit about the load balancer itself, how it works, what it offers, all that fun stuff. Then we can click Create.

Now, what we're going to do for this load balancer is deploy it into the Lab Subscription and then into the LBLabs resource group that I've set up. This LBLabs resource group contains the VMs that are part of this lab; I like to keep related resources together.

As you can see, there's quite a bit of information we need to provide. We'll call our load balancer Myloadbalancer, and we'll deploy it into East US, which is where my other resources are located. The Type option offers us either Internal or Public. If we select Internal, we'll be deploying an internal load balancer that isn't exposed to the Internet; Public exposes it to the Internet, so we'll leave this set to Public. And like I mentioned earlier, we're going to deploy a basic load balancer.

Since the load balancer is public, we need to assign a public IP address to it. I don't have an existing public IP that I want to use, so we'll select the option to create a new one, and I'll call the public IP address myloadbalancerIP.

The SKU of the public IP address has to match the SKU of the load balancer itself. The Assignment option determines whether we assign a static public IP to our load balancer or use a dynamic one. For this exercise, we'll use a dynamic IP address, which means the address can change over time. In a production environment you'd normally be accessing the load balancer through DNS anyway, so the IP address changing is typically not a big deal. And for this exercise, we don't need a public IPv6 address.

We're not going to do any tagging for this particular load balancer, so we can just click Review + create. Azure then validates our configuration and makes sure everything we've provided matches what's required for this load balancer. We get the green "Validation passed" check mark, so we can go ahead and create the load balancer.
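
As an optional aside, if you'd rather script this step than click through the portal, the sketch below is a rough equivalent using the azure-identity and azure-mgmt-network Python packages. It isn't part of the demo: the subscription ID placeholder and the frontend configuration name LoadBalancerFrontEnd are my own assumptions, while the resource group, region, SKU, and resource names mirror what we just entered.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Requires: pip install azure-identity azure-mgmt-network
# Placeholder -- substitute the ID of your own Lab Subscription.
subscription_id = "<your-subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

rg = "LBLabs"        # resource group used in the demo
location = "eastus"  # same region as the VMs

# Create the public IP for the load balancer frontend
# (Basic SKU, dynamic IPv4, matching the portal choices above).
public_ip = client.public_ip_addresses.begin_create_or_update(
    rg, "myloadbalancerIP",
    {
        "location": location,
        "sku": {"name": "Basic"},
        "public_ip_allocation_method": "Dynamic",
        "public_ip_address_version": "IPv4",
    },
).result()

# Create the Basic, public-facing load balancer with a frontend IP
# configuration bound to that public IP. The frontend name is assumed.
lb = client.load_balancers.begin_create_or_update(
    rg, "Myloadbalancer",
    {
        "location": location,
        "sku": {"name": "Basic"},
        "frontend_ip_configurations": [
            {"name": "LoadBalancerFrontEnd", "public_ip_address": {"id": public_ip.id}}
        ],
    },
).result()
print(lb.provisioning_state)  # "Succeeded" once deployment completes
```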

Now, the creation of the load balancer is just the first part of what we need to do here. We'll click Go to resource, and we can see that our load balancer has now been deployed. The load balancer itself is only the first piece: we still need a frontend IP configuration, backend pools, health probes, and load balancing rules, and we need to make sure we can load balance those two VMs properly.

Now since I've already gone ahead and deployed the VMs that I'm going to load balance, that's one task that we don't need to worry about. So the first task we're going to complete here for our load balancer is to create the backend pool that's going to host the two VMs that we're going to load balance.

To create that backend pool, we select Backend pools. We can see we have none defined, so we'll add one and name it BackendPool. Then we need to select the virtual network that this backend pool will run from. If we select the dropdown, we can see our LBLabs vnet, which is where our VMs are, so we'll select it. Again, we're using IPv4. And if we hover over "Associated to", we can see that we need to associate the backend pool with one or more VMs.

We could also associate it with instances in a virtual machine scale set, but for this demonstration we're just using two basic virtual machines. So we'll select the dropdown and tell it we're going to associate with virtual machines. When we do that, it asks us which virtual machines we want to associate with the pool.

You'll notice that we can only attach virtual machines in East US, and that's because that's where our load balancer is being deployed. It also tells us that the VMs need to have either a basic SKU public IP configuration or no public IP configuration at all. So we'll go ahead and add our virtual machines, VM1 and VM2.

Now that we have a name for our pool, an associated virtual network, and our backend VMs attached, we'll go ahead and click Add, which adds the backend pool to the load balancer. We can see our backend pool is now part of our load balancer.
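
Again as an optional aside, the backend pool step can be scripted with the same SDK. The sketch below adds a BackendPool to the load balancer and then associates each VM's NIC with it, which is roughly what the portal is doing under the covers; the NIC names and the subscription placeholder are assumptions, since the demo never shows them.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import BackendAddressPool

client = NetworkManagementClient(DefaultAzureCredential(), "<your-subscription-id>")
rg, lb_name = "LBLabs", "Myloadbalancer"

# Fetch the load balancer we just created and add a backend pool to it.
lb = client.load_balancers.get(rg, lb_name)
lb.backend_address_pools = [BackendAddressPool(name="BackendPool")]
lb = client.load_balancers.begin_create_or_update(rg, lb_name, lb).result()
pool = lb.backend_address_pools[0]

# Point each VM's NIC at the pool. The NIC names below are assumed --
# look up the real ones on each VM's Networking blade.
for nic_name in ["vm1-nic", "vm2-nic"]:
    nic = client.network_interfaces.get(rg, nic_name)
    nic.ip_configurations[0].load_balancer_backend_address_pools = [pool]
    client.network_interfaces.begin_create_or_update(rg, nic_name, nic).result()
```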

With our backend pool configured, we can create the health probe that I talked about in the previous lesson. This health probe allows the load balancer to monitor the status of the backend VMs. So let's add a health probe: we'll click Add and call it MyHealthProbe. Since we're load balancing HTTP traffic on port 80, we'll select HTTP from the Protocol dropdown. We can leave the Path at the default forward slash; that's the URI path the probe requests when checking the health of each backend endpoint, so it will just hit the root of the website on those VMs.

The Interval is the amount of time between probe attempts, so the probe checks the status of each backend endpoint every so many seconds; we'll change this to 15 seconds. The Unhealthy threshold is the number of consecutive probe failures before an endpoint is marked unhealthy. Leaving it at the default of two tells the load balancer it needs to detect two consecutive probe failures before it labels a VM as unhealthy. So we'll leave the default of two and click OK.

Now what this does is create the health probe. So now we have the backend pool that contains our two VMs which are running IIS. And then we have the health probe that's going to go back and check the root path for HTTP over port 80 on each of those VMs. If one of them fails twice in a row, the load balancer will mark that particular instance as failed and not send any traffic to it.
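
For reference, here's what that probe looks like when added programmatically with the same SDK: an HTTP probe against the root path on port 80, every 15 seconds, with an unhealthy threshold of two. This is just a sketch, not part of the demo; the subscription placeholder is, as before, something you'd supply yourself.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Probe

client = NetworkManagementClient(DefaultAzureCredential(), "<your-subscription-id>")
rg, lb_name = "LBLabs", "Myloadbalancer"

# Add an HTTP probe that requests "/" on port 80 every 15 seconds and
# marks a backend instance unhealthy after 2 consecutive failures.
lb = client.load_balancers.get(rg, lb_name)
lb.probes = [
    Probe(
        name="MyHealthProbe",
        protocol="Http",
        port=80,
        request_path="/",
        interval_in_seconds=15,
        number_of_probes=2,
    )
]
client.load_balancers.begin_create_or_update(rg, lb_name, lb).result()
```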

With the health probe in place, we now need to create the load balancing rule. This rule tells the load balancer where traffic goes, and why, and when. So we'll select Load balancing rules and click Add. We'll give our rule a name, and again we're working with IPv4. For the frontend IP address, we can only select a frontend IP configuration that has a public IP.

Now this frontend IP address is the frontend of the load balancer. If we select the dropdown here, we only have the load balancer frontend. Since HTTP comes over TCP port 80, we'll leave the protocol set to TCP and the port as 80. The backend port is also 80. So basically what we're doing here is we're routing traffic hitting the frontend on 80 and then sending it to the backend port on 80.

So we're not doing any kind of port translation. The backend pool is already filled in for us in this dropdown, and the health probe is already there by default. If we hover over Session persistence, the tooltip tells us we can configure session persistence, which means we can ensure that traffic from a specific client gets handled by the same virtual machine in the backend pool for the duration of that client's session.

If we select the dropdown, we can base persistence on Client IP, on Client IP and protocol, or have no session persistence at all. We don't need any session persistence for this demonstration, but if you have a situation where a client session must stay on a specific VM for its duration, this is where you'd configure it. So we'll leave this set to None. If we hover over Idle timeout, we can see that it controls how long to keep a connection open without relying on the client to send keep-alive messages. We'll leave the default of four minutes.

If we hover over Floating IP (direct server return), we can see that Microsoft recommends enabling this only when you're using a SQL AlwaysOn Availability Group listener or a SQL failover cluster instance IP address. We're not using SQL AlwaysOn or any kind of failover clustering here, so we'll leave this Disabled. And then from here, we'll click OK.
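
Here's the same rule expressed against the SDK, again as an optional sketch rather than part of the demo. The rule name MyHTTPRule is an assumption (the demo doesn't show the name typed in); the ports, idle timeout, distribution mode, and floating IP setting mirror the portal choices above.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import LoadBalancingRule, SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<your-subscription-id>")
rg, lb_name = "LBLabs", "Myloadbalancer"

lb = client.load_balancers.get(rg, lb_name)

# TCP 80 on the frontend to TCP 80 on the backend pool, no session
# persistence ("Default" is the hash-based distribution), 4-minute idle
# timeout, and floating IP (direct server return) left disabled.
lb.load_balancing_rules = [
    LoadBalancingRule(
        name="MyHTTPRule",  # assumed name
        protocol="Tcp",
        frontend_port=80,
        backend_port=80,
        load_distribution="Default",
        idle_timeout_in_minutes=4,
        enable_floating_ip=False,
        frontend_ip_configuration=SubResource(id=lb.frontend_ip_configurations[0].id),
        backend_address_pool=SubResource(id=lb.backend_address_pools[0].id),
        probe=SubResource(id=lb.probes[0].id),
    )
]
client.load_balancers.begin_create_or_update(rg, lb_name, lb).result()
```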

At this point, we have the rule saved. If we click Frontend IP configuration, we can see we have the load balancer frontend, and it's using myloadbalancerIP. Now what I'll do is go out to LBLabs and start up VM1 and VM2. We'll let these VMs spin up for a minute, and then we'll test our load balancer and see if we can hit IIS through it.

Now to get the IP address for our load balancer, we can go ahead and select myloadbalancerIP. This is the public IP address that we created earlier. And we can see we have an IP address here of 52.149.165.55. So we'll copy this. Let's open up an incognito window. And we'll paste and go. And we can see that VM1 has responded.

This VM1 page is showing up because I created a custom default HTML file on VM1's IIS deployment. Now, if we open up another tab, let's see if I can get it to hit the other VM. We can see VM1 shows up again, and if we refresh, it keeps landing on VM1. The reason is that the load balancer distributes connections using a hash of the source and destination details, so with a single client and essentially no load, requests tend to keep landing on the same VM.
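
If you'd rather poke the frontend from a script than a browser, a few lines of Python are enough. The IP address below is the one shown in this demo; substitute your own load balancer's public IP.

```python
import urllib.request

# Public IP from the demo -- replace with your load balancer's frontend IP.
url = "http://52.149.165.55/"

# Hit the frontend a handful of times and print which VM's custom
# default page comes back each time.
for i in range(5):
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    print(f"request {i + 1}: {body.strip()[:80]}")
```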

Now, if I go back out to LBLabs and stop VM1, we'll give it a moment to stop, and then we'll try to hit the load balancer again. You'll see that the load balancer recognizes that VM1 is no longer there and sends us over to VM2. Let's open an incognito window. And we can see the load balancer sent me over to VM2, and that's because the health probe noticed that the VM1 server has gone offline. So the load balancer is smart enough to look at that and say, okay, we know VM1 is down.

So let's go over to VM2. And that's pretty much it. Let's minimize this and go back out to my load balancer. We can see we have our frontend with a public IP address, the backend pool that points to our two VMs, and our health probe watching port 80. The probe goes out to each of those VMs every 15 seconds and checks whether port 80 is responding; if a VM fails to respond twice in a row, the probe marks that resource as down so the load balancer won't send any traffic to it. And then we have our load balancing rule.
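
As an optional wrap-up check, the same SDK can pull the finished load balancer back down and print the pieces we just walked through; the subscription placeholder is, as before, an assumption you'd fill in.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<your-subscription-id>")

# Pull the deployed load balancer and summarize its configuration.
lb = client.load_balancers.get("LBLabs", "Myloadbalancer")
print("Frontends:", [f.name for f in lb.frontend_ip_configurations])
print("Backend pools:", [p.name for p in lb.backend_address_pools])
print("Probes:", [(p.name, p.protocol, p.port) for p in lb.probes])
print("Rules:", [(r.name, r.frontend_port, r.backend_port) for r in lb.load_balancing_rules])
```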

So with that, you now know how to deploy a basic load balancer and how to load balance a basic IIS deployment across two virtual machines.

About the Author
Students: 82408
Courses: 86
Learning Paths: 63

Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skill set that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.

In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.

In his spare time, Tom enjoys camping, fishing, and playing poker.