This course covers the Architect ARM Networks part of the 70-534 exam, which is worth 5 to 10% of the exam. The intent of the course is to help fill in any knowledge gaps that you might have, and to help prepare you for the exam.
Welcome back. In this lesson we'll be covering virtual networks. Cloud-based virtual networks are software based, and they provide a standard way to organize and isolate virtual machines running in the cloud.
The virtual network controls addressing, DNS settings, security policies, and routing tables. Virtual networks are commonly referred to as VNets, and they're completely isolated from one another. Due to this isolation, you can create networks for development, testing, and production that use the same IP address blocks. To allow for even further levels of isolation, VNets support subnets.
Subnets allow you to break out VMs by their purpose, and this is common with tiered architectures. As an example, if you have an application broken out into front end and back end tiers, then you might want to create two subnets, one for the front end VMs, and another for the back end VMs.
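As a rough sketch, here's how a two-subnet VNet like that could be created with the cross-platform Azure CLI. The names, resource group, and CIDR ranges here are made up for illustration:

```shell
# Create a VNet with a front end subnet (all names and address ranges are illustrative)
az network vnet create \
  --resource-group 534 \
  --name 534-VNET \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name front-end \
  --subnet-prefixes 10.0.1.0/24

# Add a second subnet for the back end tier
az network vnet subnet create \
  --resource-group 534 \
  --vnet-name 534-VNET \
  --name back-end \
  --address-prefixes 10.0.2.0/24
```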
Although VNets are isolated from each other, the virtual machines inside of VNets, as you'd expect, are not. Virtual machines inside of a VNet can communicate with each other via their private IP address. The private IP address even allows VMs in different subnets to directly communicate.
Let's check out a VNet diagram to better understand how these components interact at a high level before we drill down into them further.
So, the VNet is a software based isolated network in Azure. Inside the VNet you can create subnets that will allow you to break the network up for better organization, or to better represent your application's tiers.
In this example there are two subnets, and each of them has their own network security group. A network security group, commonly called an NSG, serves as an access control list for incoming and outgoing traffic.
In this diagram the network security groups are being applied to the subnet, however they can also be applied directly to a virtual machine instance.
The difference is that, if applied to the subnet, the rules will apply to all of the instances in the subnet, where, if applied to an instance, the rules only apply to that specific instance.
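The two association options can be sketched with the Azure CLI as follows. This assumes a VNet, subnet, and NIC with the illustrative names shown:

```shell
# Create a network security group (names are illustrative)
az network nsg create --resource-group 534 --name 534-NSG

# Option 1: associate the NSG with a subnet -- its rules apply to every instance in the subnet
az network vnet subnet update \
  --resource-group 534 --vnet-name 534-VNET --name front-end \
  --network-security-group 534-NSG

# Option 2: associate the NSG with a single network interface -- its rules apply only to that instance
az network nic update \
  --resource-group 534 --name 534-VM-NIC \
  --network-security-group 534-NSG
```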
To allow communication between the subnets and from the public internet, a load balancer is used. The Azure load balancer supports internal and external load balancing. The external load balancing will allow you to have a highly available app, because it's going to direct traffic from the public internet to a healthy VM.
The internal load balancer also directs requests to healthy VMs, however, it has the added value of only allowing communication from another cloud resource or from a VPN connected to your on-premises network.
Virtual machines connect to the VNet with software based network interfaces, and just like their hardware based counterparts, they're also commonly referred to as NICs.
Network interfaces in Azure are resources that can be created independently and attached to virtual machines. They allow for static or dynamic private IP addresses. Static will ensure that the IP address remains the same, even if you stop or restart the VM; dynamic addresses are reassigned at startup via DHCP.
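As a sketch with the Azure CLI, a NIC with a static private IP could be created like this (the names and the address are illustrative, and the address must fall inside the subnet's range):

```shell
# Create a NIC with a static private IP; omit --private-ip-address to get a dynamic one via DHCP
az network nic create \
  --resource-group 534 \
  --name 534-VM-NIC \
  --vnet-name 534-VNET \
  --subnet front-end \
  --private-ip-address 10.0.1.10
```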
It's worth noting that you can have multiple network interfaces for the same VM, however you can't currently do that with the portal, so you're going to have to use PowerShell, the command line, or the REST API.
You might be wondering how the virtual machines inside of the VNet deal with DNS, and the answer is that by default the VMs are configured to use the Azure managed DNS servers, although you can use your own DNS servers. If a VM uses the Azure managed DNS, then it will be able to resolve the hostnames for the virtual machines in the same VNet to the private IP address of the primary network interface.
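If you do want to swap out the Azure-managed DNS for your own servers, one way is to set them at the VNet level with the CLI. The server IPs here are placeholders:

```shell
# Point the VNet at custom DNS servers instead of the Azure-managed default (IPs illustrative)
az network vnet update \
  --resource-group 534 \
  --name 534-VNET \
  --dns-servers 10.0.0.4 10.0.0.5
```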
So, having a network interface allows for communication between the virtual machines inside of the VNet, however it doesn't help if you want to communicate with the virtual machine directly. For that, you need to assign a public IP address. A public IP address is an independent resource, and it allows you to create dynamic or static IP addresses.
Having a public IP address will allow you to access the virtual machine from the public internet. Actually, since public IP addresses are independent resources, you can also use them to connect to internet facing load balancers, VPN gateways, and application gateways.
So, that's a high level overview of some of the Azure networking components. Let's go through them all a bit more in depth. Let's start by creating a load balancer, which you saw provides internal and external load balancing.
The load balancer is a layer four load balancer, which, if you're not familiar with that term, refers to the OSI networking model, where layer four is the transport layer. In this case, that means that you can use it for TCP and UDP traffic. All right, I'm in the Azure portal, and I'll start by searching for the load balancer resource.
Now you can also find this under the networking section if you wanna take that direct route. Then I'll select the load balancer from the list. And it's going to require a name. So I'll name this 534-PLB for public load balancer. It's set to public by default, though you can see that if I change it to internal, it requires a VNet.
Okay, I'll switch it back to public. The public option needs a public IP address so that it's accessible to the outside world. So I'll create one, and I'll name it 534-PLB-PIP and I'll make the IP address static. Having a static IP address is going to allow me to use this in a DNS record, should I want to.
All right, I don't have any resource groups, so I'm going to create one, and I'll name it 534. And I'm going to leave this set to the location of East US. All right, once this is created I'll have a load balancer, however, it's not going to be doing anything. Now if I drill into this we can look at some of the settings.
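The same public IP and load balancer could also be provisioned from the command line. This is a sketch with the Azure CLI, assuming the resource group already exists; the front end name is made up for illustration:

```shell
# Create the static public IP as an independent resource
az network public-ip create \
  --resource-group 534 \
  --name 534-PLB-PIP \
  --allocation-method Static

# Create a public load balancer that uses that IP for its front end
az network lb create \
  --resource-group 534 \
  --name 534-PLB \
  --public-ip-address 534-PLB-PIP \
  --frontend-ip-name 534-PLB-FRONTEND
```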
So the first setting here is the front end IP pool. These are the public facing IP addresses that map to this load balancer. Then there's the back end pools, and this is where you can add pools of virtual machines to distribute requests to.
Without at least one pool, the load balancer won't have anything to do, so let's create a pool. I'll give it a name of 534-VM-POOL. And I'll save that. Now I have a pool, however there's nothing in it. If I try and add something, Azure tells me that there are no availability sets to add. Any VMs that you add to a pool have to be in an availability set.
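Creating the pool from the CLI is a one-liner against the existing load balancer. A sketch, with illustrative names:

```shell
# Add a back end address pool to the load balancer
az network lb address-pool create \
  --resource-group 534 \
  --lb-name 534-PLB \
  --name 534-VM-POOL
```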
Since I don't have any VMs or availability sets, let's go create them. I'll go to Compute, and then I'll add an Ubuntu 16.04 LTS instance. And it requires a name, so I'll use 534-DEMO-VM-1. And then I'll provide a user name. And I'm just going to use my name. And it needs a password, so I'll add a password here that meets the minimum requirements. And then I just need to retype it in the confirm text box.
All right, now I'll add to the existing 534 resource group and click OK. I'm going to use a standard, so I'll select that. And click Next. On this optional features blade, everything is fine the way it is except the availability set. Since I don't have an availability set already, I need to create one and I can do that here in line. I'm going to name it 534-VM-POOL-SET.
And notice I can change the fault and update domains. You can have up to 20 update domains and three fault domains. I'll just set these back to the defaults. And now that I'm done, I can click OK and move onto the verification blade and then finally create my VM.
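The availability set and VM could be sketched with the Azure CLI like this. The image alias, username, and password placeholder are assumptions for illustration:

```shell
# Create an availability set with the default 3 fault domains and 5 update domains
az vm availability-set create \
  --resource-group 534 \
  --name 534-VM-POOL-SET \
  --platform-fault-domain-count 3 \
  --platform-update-domain-count 5

# Create an Ubuntu VM inside that availability set (replace the password placeholder)
az vm create \
  --resource-group 534 \
  --name 534-DEMO-VM-1 \
  --image UbuntuLTS \
  --availability-set 534-VM-POOL-SET \
  --admin-username ben \
  --admin-password '<a-password-meeting-the-requirements>'
```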
Okay, with this done I'll have an availability set and a VM to add to the load balancer. So I'll go back to the resources, and click on the public load balancer. I'll click the back end pools, and start adding the availability set. And then I'll add the VM, and I'm going to save this.
Okay, this isn't the only thing required to get the load balancer to work. The load balancer only wants to send requests to healthy instances, and it determines health with health probes. So that's what I need next. Okay, things are still updating, so I'm gonna have to wait just a moment. Okay, there it is. So I'll click on the Add button, and then I'll create a health probe with a name of 534-WEB-SERVER-HEALTH-PROBE.
Okay, I'll use the HTTP option for the probe, however you could also use the TCP, should you need that. And I'll leave the defaults as they are. The port is 80 and the check interval is every five seconds, and the instance is considered unhealthy if it fails that check twice in a row. Alright, clicking OK here, and this is going to be created.
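The same probe can be sketched from the CLI, matching the settings above: HTTP on port 80, a five second interval, and two consecutive failures before the instance is marked unhealthy. The path is an assumption:

```shell
# HTTP health probe: check / on port 80 every 5 seconds; 2 failed checks in a row means unhealthy
az network lb probe create \
  --resource-group 534 \
  --lb-name 534-PLB \
  --name 534-WEB-SERVER-HEALTH-PROBE \
  --protocol Http \
  --port 80 \
  --path / \
  --interval 5 \
  --threshold 2
```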
With this done I also need to add at least one load balancing rule to tell the load balancer which front end IP address should be directed to which back end pool, and which health probe to use. This allows you to use a single load balancer to direct traffic to different pools, making this rather versatile.
These defaults are already what I need, so I'm going to leave them. However, it's worth noting that, should you need sticky sessions, you can change this session persistence setting. I'll click OK and this is going to get created. Now, once this is done the load balancer will be able to direct traffic from the internet to the pool of virtual machines, which currently only consists of the one machine.
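A load balancing rule tying those pieces together might look like this from the CLI. The rule and front end names are illustrative, and `--load-distribution Default` is where you'd switch to `SourceIP` if you needed sticky sessions:

```shell
# Direct TCP port 80 from the front end IP to the back end pool, gated by the health probe
az network lb rule create \
  --resource-group 534 \
  --lb-name 534-PLB \
  --name 534-WEB-RULE \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name 534-PLB-FRONTEND \
  --backend-pool-name 534-VM-POOL \
  --probe-name 534-WEB-SERVER-HEALTH-PROBE \
  --load-distribution Default
```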
So, if I grab the IP address from the overview blade, I can browse to that address and see what happens. Now as I'd expect, it tries to connect and it can't establish a connection. Now there are two reasons why. The first is that the network security group that was auto-created doesn't have port 80 open.
So I'm going to drill into the security group, and I'll add an incoming rule for port 80. I'm going to set the service to HTTP and now the rest of these settings should be okay.
Here's a rundown of the different settings. The name is a unique identifier for the rule. The priority is a number that needs to be unique and determines which rules might override other rules. The way it works is that the larger the number, the lower the priority. Azure will pick the rule with the highest priority, remember, that means the lowest number, and it's going to apply that rule to the given traffic.
Next is the source which determines the traffic source. The options here are Any, which makes it a general rule, CIDR block, which is an IP address range or pattern, or Tag, which lets you choose one of the default tags. When you're creating rules, it helps to be able to reference common IP addresses with a shortcut, so Azure provides three default tags.
Default tags are labels used to identify categories of IP addresses. For instance, the default tag that would apply a rule to the entire address space of your network is the virtual_network tag. Or the azure_loadbalancer tag references Azure's infrastructure load balancer.
And then there's the internet tag, which is the tag for the IP address space outside of the virtual network, which is reachable by the internet. The service defines what type of traffic rule to apply to.
There are two ways to do this. Either select the service from the drop-down, such as FTP or HTTPS, et cetera, or tell the rule specifically which protocol and port range you want to use for this rule.
Okay, and then there's the action here which determines if the rules should allow or deny traffic. So that's a basic rundown of the different settings.
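Those same settings map onto the CLI flags if you'd rather script the rule. This is a sketch; the NSG name here stands in for whatever name Azure auto-generated for the VM's security group:

```shell
# Allow inbound HTTP on port 80 from any source; priority 100 beats any rule with a larger number
az network nsg rule create \
  --resource-group 534 \
  --nsg-name 534-DEMO-VM-1-NSG \
  --name ALLOW-HTTP \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80
```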
Alright, I'm going to click OK to create this. And if I was to reload the browser now, what you'd see is it's still going to timeout, and that's because there isn't a web server listening on that virtual machine. So I'm going to SSH in, and I'll install the Apache web server.
Now this is a fairly trivial thing to do if you're not going to configure it for production-ready environments. So I'm going to connect and I'll run sudo apt-get install apache2. Okay, this is going to take a moment. And there it is. Now, if I reload this browser, you'll see that the Apache landing page is being displayed. Perfect, and there it is.
Now if I was to add another virtual machine that didn't have a web server on it, the health probe for the virtual machine would fail, and that load balancer wouldn't send any traffic to it. And if I was to add one that was healthy, then the traffic would be distributed across the two.
Alright, let's wrap up this lesson here. Now, this demo was from the perspective of the load balancer, and that's because using a load balancer requires many of the components that were previously covered.
There are a lot of components and settings to consider when it comes to virtual networks in Azure, and one of the components that wasn't covered in this lesson is the user defined routes.
So that's what we'll cover in the next lesson. And if you're ready to keep going, then let's get started in the next lesson.
About the Author
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.