Reviewing Azure Load Balancers
Azure Load Balancer allows you to direct incoming traffic to multiple resources, such as virtual machines. Load balancing is a strategy often used to direct incoming internet traffic to more than one virtual machine serving the same web application, among many other uses. This allows companies to deploy highly scalable, high-availability solutions, since they can scale the number of targets behind a load balancer up and down as needed while still directing internet traffic to a single load balancer. In this Lab Step, you will navigate to an existing Azure Load Balancer and learn more about its fundamental principles.
1. On the dashboard of the Azure Portal, click the portal menu > All resources:
2. On the All resources modal, click caLabsLB:
This will bring you to the Overview blade for the caLabsLB load balancer.
3. On the Overview blade, notice a couple of things:
- Similar to the VNet you reviewed, this load balancer is deployed under a Resource group and Subscription.
- The load balancer has a Public IP address. This is the IP address the load balancer accepts incoming traffic on before distributing it to a backend pool. Notice the Backend pool on the Overview blade; you will review this in more detail shortly.
4. In the menu to the left, click Load balancing rules.
5. On the Rules blade, click LBRule:
You will be directed to the information blade for the LBRule load balancing rule:
There are a few fundamental things to be aware of on this blade:
- The rule has a Port and a Backend port value. The Port value is the port the load balancer listens for traffic on. In this case, the port is 80, meaning that the load balancer will listen for HTTP traffic (traffic on port 80). The Backend port can optionally differ from the Port, in case you want to accept traffic on one port on the load balancer and forward it to a different port on your targets (such as virtual machines).
- The rule has a Health probe set to tcpProbe (TCP:80). This means that the load balancer will periodically probe the targets in its backend pool to verify that they are healthy. In this case, TCP:80 means that the probe attempts a TCP connection to each target on port 80.
- The rule has a Backend pool set to BackendPool1. This is the backend pool that will receive traffic from the load balancer. A backend pool is a group of targets that a load balancer directs incoming traffic to, and it contains at least one target, such as a web server on a virtual machine.
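The relationship between these three settings can be sketched in code. The following is a minimal, illustrative Python model (the class and field names are assumptions for this sketch, not an Azure API): a rule listens on a frontend port, probes its targets, and forwards traffic only to targets that passed the probe, on the backend port.

```python
from dataclasses import dataclass, field

@dataclass
class LoadBalancingRule:
    """Illustrative model of an Azure load balancing rule (hypothetical names)."""
    frontend_port: int                                # port the load balancer listens on
    backend_port: int                                 # port traffic is forwarded to on targets
    backend_pool: list = field(default_factory=list)  # target names/addresses
    healthy: set = field(default_factory=set)         # targets that passed the health probe

    def probe(self, target: str, is_up) -> None:
        # A TCP:80-style probe succeeds if a connection attempt succeeds;
        # here `is_up` stands in for that connection attempt.
        if is_up(target):
            self.healthy.add(target)
        else:
            self.healthy.discard(target)

    def forward(self, request_number: int = 0):
        # Only healthy targets receive traffic; returns (target, backend_port).
        eligible = [t for t in self.backend_pool if t in self.healthy]
        if not eligible:
            raise RuntimeError("no healthy targets in the backend pool")
        return eligible[request_number % len(eligible)], self.backend_port

rule = LoadBalancingRule(frontend_port=80, backend_port=80,
                         backend_pool=["vm-a", "vm-b"])
rule.probe("vm-a", lambda t: True)   # vm-a passes the probe
rule.probe("vm-b", lambda t: False)  # vm-b fails, so it receives no traffic
print(rule.forward())                # → ('vm-a', 80)
```

The key idea the sketch captures is that the health probe gates the backend pool: a target that fails its probe is simply skipped when traffic is distributed.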
6. At the top of the page, click caLabsLB - Load balancing rules to return to the menu:
7. In the menu, click Backend pools:
8. Click the arrow to the left of BackendPool1 to expand it:
Notice that there are two virtual machines in the backend pool, each attached to one of the two network interfaces you reviewed earlier. The network interfaces allow the VMs to send and receive traffic. Because there are multiple VMs in the backend pool, the load balancer will distribute incoming traffic across both of them (by default, Azure Load Balancer distributes traffic using a hash-based algorithm rather than strict alternation).
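To illustrate the effect of spreading requests across a pool, here is a simple round-robin sketch in Python. The VM names are hypothetical, and round-robin is a simplification: Azure Load Balancer actually uses a hash-based distribution, but the outcome (no single VM handles all the traffic) is the same.

```python
from itertools import cycle

# Hypothetical backend VM names; cycle() yields them in turn, forever.
backend_pool = cycle(["web-vm-1", "web-vm-2"])

# Four incoming requests are spread evenly across the two VMs.
handled = [next(backend_pool) for _ in range(4)]
print(handled)  # → ['web-vm-1', 'web-vm-2', 'web-vm-1', 'web-vm-2']
```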
In this Lab Step, you navigated to an existing Azure Load Balancer. You also learned some of the fundamentals of load balancing in Azure, including what load balancing rules and backend pools are, and how load balancers direct traffic to one or more targets.