
Creating a v-net Part 2

This course is part of these learning paths:

AZ-103 Exam Preparation: Microsoft Azure Administrator
Developing, Implementing and Managing Azure Infrastructure
3 Pillars of the Azure Cloud

Contents

Intro
1. Course Intro (Preview, 1m 29s)
2. Overview (Preview, 4m 36s)
Summary

Overview
Difficulty: Intermediate
Duration: 54m
Students: 2,970
Rating: 4.8/5

Description

Introduction to Azure Virtual Networking (ARM)

Cloud-based virtual networks are software-based, and they provide a standard way to organize and isolate virtual machines running in the cloud. A virtual network controls addressing, DNS settings, security policies, and routing tables.

Virtual networks, which are commonly referred to as “v-nets”, are isolated from one another. Thanks to this isolation, you can create networks for development, testing, and production that use the same address blocks.

To allow even further isolation, v-nets support subnets, which let you segment the network. Subnets allow you to break out VMs by their purpose, which is common with tiered architectures. For example, if you have an application broken out into front-end and back-end tiers, then you might want to create two subnets: one for the front-end VMs, and another for the back-end tier.
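
As a rough sketch, here's how a v-net with front-end and back-end subnets might be created using the Azure CLI. The resource names match the demo later in the course, but the address ranges are illustrative assumptions, and exact flag names can vary slightly between CLI versions:

    # Create a virtual network with a front-end subnet
    az network vnet create \
      --resource-group ca \
      --name ca \
      --address-prefix 10.0.0.0/16 \
      --subnet-name front-end \
      --subnet-prefix 10.0.1.0/24

    # Add a back-end subnet to the same virtual network
    az network vnet subnet create \
      --resource-group ca \
      --vnet-name ca \
      --name back-end \
      --address-prefix 10.0.2.0/24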

If you're familiar with traditional networking components, then you're going to feel right at home working with v-nets. So, if you're looking to learn more, start in on the first lesson!

Azure Virtual Networking (ARM)

Lecture - What you'll learn
Intro - What will be covered in this course
Overview - The components of virtual networks
Creating a v-net - Creating a virtual network, part 1
Completing the v-net - Creating a virtual network, part 2
Application Gateway - The application load balancer
User defined routes - Using route tables
Traffic Manager - DNS-based load balancing
Hybrid networking - VPNs and ExpressRoute
Final thoughts - Wrapping up the course

 

Transcript

Welcome back! In the previous lesson we started building out a virtual network, and we’re going to finish that up in this lesson.

We left off needing to set up the reverse proxy on the front-end VM.
So let’s start setting that up now. I’ll go to all resources, and find the front-end VM, and select it.

On the overview blade, I’ll click connect, and then I’ll copy this SSH command.
Just like before, I’ll say yes...then I’ll enter my password.

Okay, now that we’re logged in, I’m going to install nginx.

For that we can run “sudo apt-get install nginx”
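
On the Ubuntu VM from the demo, the full install looks something like this; updating the package index first is a common extra step that isn't shown in the video:

    # Refresh the package index, then install nginx
    sudo apt-get update
    sudo apt-get install nginx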

Since this is a demo, and we’re not worried about locking this down, this will be fairly easy to configure.

Okay, with nginx installed, we need to create a config that will set the rules for the reverse proxy. To do that I’ll use nano to create and open a file that resides under /etc/nginx/sites-available...and it would help if I actually type the word “nano”... and I’ll name the file “webapp.conf”
Like before I have the contents already copied to my clipboard. So I’ll paste that in here.
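
To give you an idea of what gets pasted in, here's a minimal sketch of a reverse-proxy config along these lines. The “LOADBALANCER” token is the placeholder mentioned later in this lesson; the rest is an assumption based on the demo app, which listens on port 5000:

    # /etc/nginx/sites-available/webapp.conf (illustrative sketch)
    server {
        listen 80;

        location / {
            # LOADBALANCER is a placeholder; it gets replaced with the
            # internal load balancer's IP address once that exists
            proxy_pass http://LOADBALANCER:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }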

The only thing required is that I set the IP address for the application server. Now, we could do that by using the IP of the app server itself; however, the better option is to use an internal load balancer.

So, we’re now blocked on finishing up the configuration of the web server until we create an internal load balancer that will route traffic coming into the back-end subnet.

Let’s switch gears and put the web server configuration on hold, so we can create the load balancer. To do that you can click the load balancer option in the side navigation.

If you recall from the first lesson, I mentioned that there are public and internal load balancers. Since we want this to distribute traffic for the back-end subnet, this needs to be an internal load balancer.

Once you select internal, it requires a vnet to apply to, so I’ll select our “ca” vnet.

And with that set, I can select which subnet this applies to, which is the back-end subnet.

For the IP address, you could select static or dynamic. We’ll leave it as dynamic for this demo.

I need to select a resource group, so I’ll set it to the “ca” group...and I’ll create this.

Or...maybe not. It looks like I forgot something...ahh, I forgot the name. Let’s give it a name of “back-end-lp” and see if it’s happy now...and it is.
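
For reference, creating the same internal load balancer from the Azure CLI might look roughly like this (exact flags can vary between CLI versions):

    # Create an internal load balancer on the back-end subnet.
    # Omitting a private IP address leaves the allocation dynamic,
    # matching the choice made in the portal.
    az network lb create \
      --resource-group ca \
      --name back-end-lp \
      --vnet-name ca \
      --subnet back-end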

If I refresh the page, there’s our load balancer. I’m going to copy the IP address of the load balancer, because I’ll need to add that to the configuration file for nginx.
However, we still need to configure this load balancer to send traffic to the back-end VM.

So, we need to click on the back-end pools...and then we can click add.

I’ll give it a name of back-end-vm-pool, and I’ll click Okay...actually I didn’t mean to click Okay, because I need to add the VM, but we can do that once this pool is created.

There it is, so let’s click on the back-end pool, and add a VM. You can do that by clicking the Add a VM link, and it’ll ask for an availability set. Remember, way back I said that you need to have your VMs in an availability set in order to add them to a load balancer; well, that’s what you’re seeing here.

Let’s select the back-end availability set, and then we can select the back-end VM.

Notice it disables the selection of the front-end VM, because it’s in its own availability set.

Okay, with those set, we can click save. This can take a while, so I’m going to jump ahead.
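
If you'd rather script this step, the pool and the VM association might look something like the following. The NIC and ipconfig names are assumptions, since yours depend on how the VM was created:

    # Create the address pool on the internal load balancer
    az network lb address-pool create \
      --resource-group ca \
      --lb-name back-end-lp \
      --name back-end-vm-pool

    # Associate the back-end VM's NIC with the pool
    # ("back-end-vm-nic" and "ipconfig1" are assumed names)
    az network nic ip-config address-pool add \
      --resource-group ca \
      --nic-name back-end-vm-nic \
      --ip-config-name ipconfig1 \
      --lb-name back-end-lp \
      --address-pool back-end-vm-pool
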
Welcome back. Changing load balancer settings can take a while, so while that was happening, I got some coffee. If you want some coffee too, feel free to pause and go make some...go ahead, I’ll wait.

Alright, with the back-end pool created we need to create a health check probe. The load balancer doesn’t want to send traffic to a machine that isn’t able to handle the request, so health checks allow it to determine which machines in the pool are available, and which aren’t.

Let’s give this a name of “back-end-health-check”

And then we’ll change the protocol to HTTP, and we need this to check port 5000 instead of 80.

The default route is just what we need here. So we’ll leave that.

This check needs to know how often to check, and how many failed attempts constitute unhealthy. The default is to check every 5 seconds, with two failed attempts meaning the instance is unhealthy. In production, you’ll want to adjust these to meet your needs.
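
Sketched with the CLI, a probe with those settings might be created like this (flag names can differ slightly by CLI version):

    # HTTP health probe: check / on port 5000 every 5 seconds;
    # 2 consecutive failures mark an instance unhealthy
    az network lb probe create \
      --resource-group ca \
      --lb-name back-end-lp \
      --name back-end-health-check \
      --protocol Http \
      --port 5000 \
      --path / \
      --interval 5 \
      --threshold 2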

Okay, let’s create this, and then I’ll jump ahead to once it’s complete.

Welcome back. With that complete, we’re ready to add a load balancing rule.

Here’s why load balancing rules are important. Azure load balancers allow us to have multiple IP addresses assigned to a load balancer, as well as multiple pools of virtual machines. The rule determines which IP address maps to which virtual machine pool.

So, this rule will say: whenever traffic comes into the load balancer on this IP address, on port...5000, direct it to port 5000 on a healthy VM in the back-end pool.

I want to clarify something: I’m using the terms front-end and back-end for our subnets; however, when the load balancer references front and back ends, it’s talking about something different.
To the load balancer, the front end is the IP address that is used to send traffic to the load balancer. The back end is the VM pool to forward the traffic to.

Okay, all the rest of the settings are good by default. However, take note of the “Session persistence” option, which allows you to use sticky sessions. So if you need to support stateful apps, you can use session persistence.
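
As a CLI sketch, the rule might look like the following; the rule name here is an assumption. Note that session persistence corresponds to the load distribution setting, where Default means no stickiness and SourceIP gives you sticky sessions:

    # Rule: traffic hitting the LB's IP on port 5000 goes to
    # port 5000 on a healthy VM in the back-end pool
    az network lb rule create \
      --resource-group ca \
      --lb-name back-end-lp \
      --name back-end-rule \
      --protocol Tcp \
      --frontend-port 5000 \
      --backend-port 5000 \
      --backend-pool-name back-end-vm-pool \
      --probe-name back-end-health-check \
      --load-distribution Default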

Let’s create this, and jump forward to once it’s complete. Welcome back!

With the internal load balancer in place we now have a single IP address that can distribute traffic to as many application servers as we need. So, let’s go back to the web server and edit the config file.

The terminal is still open, and so is nano, so I’ll replace this “LOADBALANCER” token here with the IP address of the internal load balancer. Remember, I copied that just after it was created.

This config will tell nginx to route traffic that comes in on port 80 to the web app running behind the load balancer.

So, let’s save this, and get nginx using the new config.

First, I’ll paste in this command to link the new config to the “sites-enabled” folder.

Next I’ll test the config to make sure that there aren’t any syntax issues. And it looks like we’re all set in that respect.

And now we’ll restart the service.
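
Put together, those three steps amount to something like this on the web server:

    # Enable the site by linking it into sites-enabled
    sudo ln -s /etc/nginx/sites-available/webapp.conf /etc/nginx/sites-enabled/

    # Test the config for syntax errors
    sudo nginx -t

    # Restart nginx so it picks up the new config
    sudo systemctl restart nginx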

Okay, if we were to browse to the public IP address of this web server, what do you think will happen?

Maybe you already know the answer, however if not, take a moment to think about what we’ve done so far.

Let’s look at the diagram. We have our vnet, our two subnets, and a VM inside each, as well as an internal load balancer distributing back-end traffic. We set up the web app running on port 5000 and opened up port 5000 in the security group.

Then we configured the web server as a reverse proxy, running on port 80.

Do you know the answer yet? Let’s try and run this in the browser and see what happens.

It’s trying to connect...and it’s just not connecting. The reason is that we need to open up port 80 for the front-end web server. So, let’s edit the security group for the front-end VM’s network interface.

Let’s drill down until we get to the network security group, and we’ll add an inbound rule. This will be an allow rule for HTTP traffic...and I’ll save this by clicking OK.
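
From the CLI, an equivalent inbound rule might look like this; the security group name is an assumption, since it depends on how the VM was created:

    # Allow inbound HTTP on the front-end VM's network security group
    # ("front-end-nsg" is an assumed name)
    az network nsg rule create \
      --resource-group ca \
      --nsg-name front-end-nsg \
      --name allow-http \
      --priority 100 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --destination-port-ranges 80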

Now, if we reload the browser, we’re greeted with the default nginx page. What this means is that we can now connect to the VM on port 80. It also means that I forgot to disable the default site in nginx, so let’s do that now.

If you look in the sites-enabled folder, you’ll notice there’s a default file there. That’s why we’re seeing the default landing page and not the web app. So I’ll remove that with the rm command…

And now I’ll restart the service to make sure it picks up on the changes.
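
Those two cleanup commands look like this:

    # Disable nginx's default site, then restart to apply
    sudo rm /etc/nginx/sites-enabled/default
    sudo systemctl restart nginx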

Now, if I reload the browser, instead of the default page, you can see the Cloud Academy Demo text.

Let’s look back at the diagram. We’re so close to having this thing completed. Right now, we have everything except the public load balancer. However, since our front-end instance has a public IP, we’re able to test everything else. So, the only thing left to do is to create the public load balancer and add the front end VM to it.

So, let’s click on load balancers and we’ll click add. Let’s name it “public-web-app-lb”...

I’ll leave it set to the default type of public, and I’ll give it a public IP address. This will allow it to be accessed from the internet. I’ll give it the most original name you’ve ever seen...and I’ll select static for the IP type.

The only thing left is to set the resource group, so let’s set it to “ca” and create this.
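
For comparison with the internal version, the public load balancer and its static IP might be created from the CLI along these lines; “public-web-app-ip” is an assumed name for the public IP resource:

    # Create a static public IP for the load balancer
    az network public-ip create \
      --resource-group ca \
      --name public-web-app-ip \
      --allocation-method Static

    # Create the public load balancer, fronted by that IP
    az network lb create \
      --resource-group ca \
      --name public-web-app-lb \
      --public-ip-address public-web-app-ip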

Let’s jump forward a moment to when this has been deployed. There we go.

Now, we need to configure this just like we did with the internal load balancer. The only real difference is that this load balancer will be accessible to the outside world.

So, we’ll start by adding the VM that has nginx running into a new pool.

We need to select an availability set, so I’ll pick the front-end availability set, and now I need to select the front-end VM.

This will take a while, so again, I’ll jump forward.

Okay, welcome back. Now that the VM pool exists, we need a health probe. So let’s add a probe that will check to make sure that the web server is running on port 80. And let’s jump forward to when this has been created.

Great, now it’s time to add the load balancing rule. This will map the public IP address to the VM pool, and set the health check to use. The defaults for this are spot on, so all I need to do is add a name...perfect! Let’s create this.
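
Scripted, the whole public-side setup mirrors the internal one, just on port 80; the pool, probe, and rule names here are assumptions:

    # Same pattern as the internal LB, but for HTTP on port 80
    az network lb address-pool create \
      --resource-group ca \
      --lb-name public-web-app-lb \
      --name front-end-vm-pool

    az network lb probe create \
      --resource-group ca \
      --lb-name public-web-app-lb \
      --name front-end-health-check \
      --protocol Http \
      --port 80 \
      --path /

    az network lb rule create \
      --resource-group ca \
      --lb-name public-web-app-lb \
      --name front-end-rule \
      --protocol Tcp \
      --frontend-port 80 \
      --backend-port 80 \
      --backend-pool-name front-end-vm-pool \
      --probe-name front-end-health-check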

Once again, you’re going to time travel while I have to take the slower path.

Alright, the final component to this entire solution is in place. Now it’s time to see if it all works.

If everything is working, then we should be able to hit the IP address of this public load balancer and see the text “Cloud Academy Demo”.

So let’s try it out. I’ll click the copy button, and paste it into a new tab, and...voila!

Let’s look at the diagram one more time to see exactly what we’ve built. We have a public load balancer that accepts web requests and forwards them on to a healthy instance. In our case, there’s only the one instance, and it’s running inside the front-end subnet.

When that front-end VM gets traffic on port 80, it directs it to the back-end load balancer, which directs it to the application server running in the back-end subnet.

And once the app processes the request, the traffic flows back through the network.

If you followed along, then congrats on creating all of this for yourself!

Now, in this demo, we have a web server tier and it’s just a reverse proxy. If you’re like me, then you’re continually looking for ways to cut out the management of servers and software. So how can we cut out the front-end tier?

Well, we’re in luck, because Azure offers a service called Application Gateway, and it’s actually going to be the subject of our next lesson. So, if you’re ready to continue tinkering with our network, then let’s get started in the next lesson!

About the Author

Students: 36,959
Courses: 29
Learning paths: 15

Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.

When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.