Azure Front Door
Web Application Firewall
This course will provide you with a foundational understanding of the different ways you can load balance traffic in Microsoft Azure. It includes guided walk-throughs from the Azure platform to give you a practical understanding of how to implement load balancing in your Azure environments.
We start by introducing the different types of load balancers, their components, and their use cases. You'll learn how to deploy a load balancer on Azure. Then we'll dive into Application Gateway and you'll learn about its features and components. You'll also learn about Azure Front Door and how to create a Front Door instance.
We'll then take a look at Web Application Firewall, when it's used, and how to use it in conjunction with Application Gateway, Azure Front Door, and Azure CDN. Finally, you'll learn about Traffic Manager, how it works, and when to use it, as well as how to create a Traffic Manager profile.
- Get a solid understanding of load balancing on Azure
- Deploy a load balancer
- Understand the features and components of Application Gateway and how to deploy it
- Learn about Azure Front Door and how to create a Front Door instance
- Learn about Web Application Firewall and how to deploy it on Application Gateway
- Learn how to use Traffic Manager and how to create a Traffic Manager profile
This course is intended for those who wish to learn about the different ways of performing load balancing in Azure.
To get the most out of this course, you should have a basic understanding of the Azure platform.
Welcome to Load Balancer Components. Now that you know what a load balancer is and what it’s used for, let’s talk a little bit about the components that make up an Azure load balancer.
There are actually several pieces that make up a load balancer. You have the Frontend IP Configuration, the Backend Pool, Health Probes, and Load Balancing Rules. You also have High Availability Ports, Inbound NAT Rules, and Outbound Rules.
So, let's touch on each of these, starting with the Frontend IP Configuration.
The Frontend IP of a load balancer is the point of contact for clients. It can be a private IP address or a public IP address, depending on the type of load balancer. When someone needs to access an application that is load balanced, that person would access it through the Frontend IP. Load balancers can even have multiple Frontend IP addresses assigned to them.
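As a rough sketch, a load balancer with a public Frontend IP can be created with the Azure CLI. All resource names below (myRG, myPublicIP, myLB, and so on) are placeholders, and the commands assume an existing resource group and an authenticated Azure CLI session.

```shell
# Create a Standard-SKU public IP to serve as the frontend (point of contact for clients)
az network public-ip create \
  --resource-group myRG \
  --name myPublicIP \
  --sku Standard

# Create the load balancer with that public IP as its Frontend IP Configuration
az network lb create \
  --resource-group myRG \
  --name myLB \
  --sku Standard \
  --public-ip-address myPublicIP \
  --frontend-ip-name myFrontendIP \
  --backend-pool-name myBackendPool
```

For an internal load balancer, you would instead supply a virtual network, subnet, and private IP address rather than a public IP.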
The Backend Pool is really just a collection of VMs, or of VM instances within a scale set, that are configured to service the incoming requests to a load balancer. When a request for an application comes in on the Frontend IP, the load balancer sends the request to the backend pool. The load balancer will even automatically reconfigure itself whenever you add or remove instances from the backend pool. This ensures that the load balancer never sends traffic to an instance that has been removed.
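As an illustrative sketch, an existing VM's network interface can be added to a backend pool with the Azure CLI. The NIC and ipconfig names here are placeholders for whatever your VM actually uses.

```shell
# Add a VM's NIC IP configuration to the load balancer's backend pool
az network nic ip-config address-pool add \
  --resource-group myRG \
  --lb-name myLB \
  --address-pool myBackendPool \
  --nic-name myVM1Nic \
  --ip-config-name ipconfig1
```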
Health probes determine the status of the instances that are configured in the backend pool. They determine whether or not a specific instance is healthy and able to receive traffic. When a health probe that you configure during load balancer setup stops responding, the load balancer stops sending connections to the unhealthy instance.
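A minimal example of a health probe, again using placeholder names: an HTTP probe that checks the root path on port 80 of each backend instance.

```shell
# Create an HTTP health probe; instances that stop answering GET / on port 80
# are marked unhealthy and removed from rotation
az network lb probe create \
  --resource-group myRG \
  --lb-name myLB \
  --name myHealthProbe \
  --protocol Http \
  --port 80 \
  --path /
```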
Load balancing rules determine how inbound traffic gets distributed across the backend pool instances. A typical configuration for a load-balanced web server would include a load balancing rule for port 80 traffic, or HTTP, that routes traffic from the Frontend IP back to port 80 on the backend instances.
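The port 80 rule just described might look like this in the Azure CLI, assuming the placeholder frontend, backend pool, and probe names used above.

```shell
# Distribute inbound HTTP (port 80) traffic across the backend pool,
# using the health probe to skip unhealthy instances
az network lb rule create \
  --resource-group myRG \
  --lb-name myLB \
  --name myHTTPRule \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name myFrontendIP \
  --backend-pool-name myBackendPool \
  --probe-name myHealthProbe
```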
The picture on your screen depicts such a rule.
When you configure a load balancer rule with 'protocol - all and port - 0', what you are doing is configuring high-availability ports. What this rule does is allow you to use a single rule to load balance all TCP flows and UDP flows that hit all ports of an internal standard load balancer. You would typically leverage this feature if you need to load balance a large number of ports.
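A high-availability ports rule is just a load balancing rule with protocol All and port 0 on both ends. The sketch below assumes an internal Standard load balancer (placeholder names again), since HA ports are only supported there.

```shell
# HA ports rule: one rule load balances all TCP and UDP flows on all ports
# of an internal Standard load balancer
az network lb rule create \
  --resource-group myRG \
  --lb-name myInternalLB \
  --name myHAPortsRule \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name myInternalFrontend \
  --backend-pool-name myBackendPool
```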
Inbound NAT rules are used to forward inbound traffic with a specific Frontend IP address and port combination. Such traffic is sent to a specific VM or to a specific instance within the backend pool. A typical use case for inbound NAT rules would be one where you wish to allow RDP connections to multiple different VMs behind a load balancer. Configuring NAT rules for your virtual machines allows you to connect to your VMs over RDP without the need for a jump box or a public IP for each VM.
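For instance, a hedged sketch of the RDP scenario: map a distinct frontend port to port 3389 on each VM. The port number 50001 and the rule name are arbitrary placeholders; you would create one rule per VM with a different frontend port.

```shell
# Forward frontend port 50001 to RDP (3389) on one specific backend VM
az network lb inbound-nat-rule create \
  --resource-group myRG \
  --lb-name myLB \
  --name rdpVM1 \
  --protocol Tcp \
  --frontend-port 50001 \
  --backend-port 3389 \
  --frontend-ip-name myFrontendIP
```

The NAT rule is then bound to the target VM's NIC IP configuration, so connecting to the frontend IP on port 50001 reaches that VM over RDP.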
And lastly, outbound rules are used to configure outbound network address translation, or NAT, for all VMs within the backend pool. This kind of rule allows you to provide outbound communication to the Internet for your instances within the backend pool. I should mention, however, that outbound rules are only supported on the standard load balancer. They are not supported by the basic load balancer.
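As a final sketch, an outbound rule on a Standard load balancer ties a frontend IP and a backend pool together for outbound SNAT. The names and port allocation below are illustrative placeholders only.

```shell
# Give backend pool instances outbound Internet access via SNAT
# through the load balancer's public frontend IP (Standard SKU only)
az network lb outbound-rule create \
  --resource-group myRG \
  --lb-name myLB \
  --name myOutboundRule \
  --protocol All \
  --frontend-ip-configs myFrontendIP \
  --address-pool myBackendPool \
  --allocated-outbound-ports 10000
```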
Join me in the next lesson, where I will show you how to perform a basic load balancer deployment.
Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skill set that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.
In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.
In his spare time, Tom enjoys camping, fishing, and playing poker.