
Creating a v-net Part 1



Description

Introduction to Azure Virtual Networking (ARM)

Cloud-based virtual networks are software defined, and they provide a standard way to organize and isolate virtual machines running in the cloud. A virtual network controls addressing, DNS settings, security policies, and routing tables.

Virtual networks, commonly referred to as “v-nets”, are isolated from one another. Because of this isolation, you can create networks for development, testing, and production that use the same address blocks.

To allow even further isolation, v-nets support subnets, which let you segment the network. Subnets allow you to break out VMs by their purpose, which is common with tiered architectures. For example, if you have an application broken out into front-end and back-end tiers, then you might create two subnets: one for the front-end VMs, and another for the back-end tier.

If you're familiar with traditional networking components then you're going to feel right at home working with v-nets. So, if you're looking to learn more, then start in on the first lesson!

Azure Virtual Networking (ARM)

Lecture What you'll learn
Intro What will be covered in this course
Overview The components of virtual networks
Creating a v-net Creating a virtual network part 1
Completing the v-net Creating a virtual network part 2
Application Gateway The application load balancer
User defined routes Using route tables
Traffic Manager DNS based load balancing
Hybrid networking VPNs and express route
Final thoughts Wrapping up the course

 

Transcript

Welcome back! In this lesson we’re going to head into the portal, and create a virtual network.

Let’s take a look at what we’re actually going to create. Here’s a diagram showing how we’re going to set things up.

Overall, this is going to be a basic web application with two tiers: an app tier and a web server tier. The app tier will be responsible for handling business logic, and the web tier is just a web server that will be a reverse proxy for the app.

To make this a simple demo to follow along with, we’re going to use a basic Python based web app. Consider it the hello world of web apps.

We’re going to split the VMs out into their own subnets, one is for the front-end, which is where the web server will be, and the other for the back-end where the app will be.

Since this is a web application, and we want users to be able to use it, we need a public endpoint; for that we’ll use a public load balancer.

And then to distribute traffic between the two subnets, we’ll use an internal load balancer.

So, let’s run through what it would look like to use this. A user would make a request, and that would be directed to the public load balancer. Then the load balancer will forward the request on to a healthy web server instance. We’ll only have the one, though in a production app you could have an entire pool of servers.

The web server, which is serving as a reverse proxy, is going to send the request on to the internal load balancer, and just like the public load balancer, it's going to send the request to a healthy back-end instance.

The application server is going to respond to the request, and the response will go back through all of this until it gets to the end user.


So, knowing what we’re trying to build let’s start in with the virtual network. Here we are, in the portal, so let’s start by clicking the virtual network link.

Then click the Add button.

I’ll name mine CA for cloud academy. I’ve been told I’m really good at naming things, and think that shows through here. (haha)

Okay, the address space here is in CIDR notation, and even if you don’t understand what that means exactly, you can see that Microsoft has given a bit of a hint, by adding this bit of info here that shows how many addresses will be available on this vnet.

Now, we’re not going to cover CIDR notation in this course, however on the screen is a link to an article that should help to explain it should you be interested.
https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing

For now, the things to know are these: the goal of using CIDR notation is to define the address space that the vnet will use; the number after the slash needs to be between 0 and 32; and the higher that number, the fewer the addresses.

With that in mind, I’m going to use /22, which will allocate 1,024 addresses.
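As a quick sanity check, you can verify that address math with Python's standard ipaddress module. The 10.0.0.0/22 range here is just an assumed example, not necessarily the range used in the demo:

```python
import ipaddress

# A /n block contains 2 ** (32 - n) addresses.
# (Azure also reserves 5 addresses per subnet, which isn't reflected here.)
block = ipaddress.ip_network("10.0.0.0/22")  # assumed example address space
print(block.num_addresses)  # 1024
print(2 ** (32 - 22))       # 1024
```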

By default when you create a vnet, it automatically adds a default subnet. I’m going to change the name to front-end.

This next field is the subnet range, which determines how many addresses, out of the total available in the vnet, will go to this subnet. Right now, it’s set to use 256 of the 1,024 allocated to the vnet.
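To see how a /22 vnet carves into /24 subnets of 256 addresses each, here's a short sketch using the standard ipaddress module, again with an assumed 10.0.0.0/22 address space:

```python
import ipaddress

vnet = ipaddress.ip_network("10.0.0.0/22")   # assumed vnet address space
subnets = list(vnet.subnets(new_prefix=24))  # carve the vnet into /24 ranges

print(len(subnets))              # 4 subnets fit in the /22
print(subnets[0])                # 10.0.0.0/24
print(subnets[0].num_addresses)  # 256 addresses, e.g. for the front-end subnet
```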

Since that’s more than enough for this demo, I’ll leave that.

Next, I need a resource group, so I’ll add one, and then I’ll create this by clicking “Create.”

I’ll give this a moment to create, and once it’s done we can add another subnet for the back-end servers.

This page doesn’t auto refresh, so I’ll click refresh, and there it is, so let’s open up the vnet blade.

Clicking into the subnets, you can see we have the front-end subnet. So, if you recall from the diagram, we need to create a vnet, and two subnets, one for the front-end and one for the back-end servers.

So let’s create another subnet by clicking the Add button.
The defaults for this are fine, though we need a name, so I’ll call it back-end and click OK.

Now, once this is done we’ll have our vnet with the two subnets. Let’s add the VMs next.

This is going to use an Ubuntu 16.04 LTS instance, so I’ll fill out this form; there’s nothing really interesting here.

I’ll create this in the same resource group we used before, which is the “ca” group.

I’m selecting a Standard DS1_v2, so that we have load balancing support, and I’ll click “Select.”

Pay attention to the virtual network section on the next blade, as it loads.

Notice that when the blade fully loaded, it auto-selected the network. I’m not exactly sure whether it was selected because it’s the only vnet in the region, or if the logic is more complicated than that. Though, in this case, it’s just what we need.

Since this is the back-end server, we do need to change the subnet to the back-end subnet.

By default, the instance is assigned a public IP address. Now, in a production environment, this might not be what you want. However, I want to be able to connect to the instances via SSH so that I can configure the web server and app server. So, I’ll keep the public IP address.

I’ll leave the network security group as it is, and in this demo, that means that the security group will be applied to the network interface for this instance. Later on we’ll circle back to the security group.

Next, we need to add an availability set, and that’s because the load balancer requires the VMs to be in an availability set.

So, I’ll name this “back-end-as” and I’ll leave the default fault and update domains. I’m not going to cover those here, since the focus for this course is on the networking side of things.

Okay, with that done, let’s take a look at the monitoring options, and they seem fine for this demo. So, I’ll click OK, and it’ll load the final blade. This will take just a second to validate, and there it is, so, I can click OK.


Okay, while this is being deployed, let’s create the front-end VM. Since this is going to be almost identical, I’m going to use the magic of video editing to speed this up a bit.

The only thing different on the first blade is the name… on the second, the VM size is the same…
The vnet and subnet are correct by default.
We need an availability set...which is going to use all of the defaults.

And...that’s it!


I’m not going to make you wait for this VM to be deployed, so what I’ll do is jump forward to once this is complete.

We’re back, and the VM is complete, so now I’m going to connect into it and add a very simple web application.

So, I’ll click on the connect button at the top of the overview blade and copy the ssh command, because I’m lazy.

Then I’ll open up a terminal window and paste it in. Since this is the first time connecting, I need to approve the connection by typing “yes.”

And then I need to enter my password. This is the password that I added when I created the instance.

I mentioned previously that we’ll be using a simple Python app, and what I didn’t mention is that the app will use the Flask Python library. So, in order to install Flask, I’ll first install the Python package manager called pip.

So I’ll run “sudo apt-get install python-pip”

And then I’ll approve the installation. This doesn’t take long, however I’ll fast forward this just a bit.

Great, with this done, I’ll install Flask by running “sudo pip install Flask”
Whenever I do something that isn’t a best practice, I try to call attention to it, and this is one of those times. Doing this is fine for a demo; however, if you’re interested in Python development, you’ll want to use something like virtualenv, which allows you to create isolated Python environments.

Okay, with this done, I’ll change directories into the /var directory.
And I’ll make a new directory for the application named “app.”

And, it looks like I forgot to sudo that, so I’ll rerun the command with sudo, and then I’ll cd into the app directory.

Now we need to create a file for the application code, so I’ll use nano, which is a crude command line text editor. I’ll issue the nano command and pass in the name of the file, which is app.py.

I already have some code copied to my clipboard, so I’ll paste that in, and then save this file.

Here’s a rundown of the important parts of the code. This main method will return the string “Cloud Academy Demo” when anyone browses to the default route.

At the bottom, this call to the run method runs this on port 5000.
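The pasted code isn't shown in full, but based on that description, app.py presumably looks something like this minimal Flask sketch; the exact code in the video may differ:

```python
from flask import Flask

app = Flask(__name__)


@app.route("/")
def main():
    # Returned for requests to the default route
    return "Cloud Academy Demo"


if __name__ == "__main__":
    # Listen on all interfaces so other subnets and the
    # load balancer can reach the app on port 5000
    app.run(host="0.0.0.0", port=5000)
```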

Alright, now I can close out and save.

To make things easier for the demo, I’m going to run this code from the command line. What that means is that if the app stops for any reason, then it needs to be manually restarted.

So, I’ll run it by typing python app.py.

And you can tell by the output, it’s now listening on port 5000.

Now, in theory we could browse to this in the web browser, however port 5000 is not yet open in the network security group, so let’s open that up.

I’ll do that by drilling into the network interface, and opening the network security group blade.
Now, once this loads I need to drill into the inbound security rules blade.
...
Okay, great, now I’ll click add, and I can create a new rule that allows traffic to flow through port 5000.

Now that we’re here, let’s cover network security groups a bit more. Network security groups serve as a basic firewall. For ARM deployments, you can apply security groups to either a subnet or to a network interface.

Security groups allow you to create rules for incoming and outgoing traffic. Let’s go through the properties of a rule.

The first here is the name, which is pretty standard to all resources.

The priority is a number that needs to be unique, and it determines which rules might override other rules. The larger the number, the lower the priority. Azure will pick the rule with the highest priority (remember, that means the lowest number) that applies to the given traffic.
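That selection logic can be sketched in a few lines: of the rules that apply to the traffic, the one with the lowest priority number wins. This is illustrative pseudologic with made-up rule names, not Azure's actual implementation:

```python
# Hypothetical rule set; names and numbers are made up for illustration
rules = [
    {"name": "allow-port-5000", "priority": 100, "action": "allow"},
    {"name": "deny-all-inbound", "priority": 4096, "action": "deny"},
]

# Lower priority number means higher priority, so the
# allow rule overrides the catch-all deny rule
winner = min(rules, key=lambda r: r["priority"])
print(winner["name"], winner["action"])  # allow-port-5000 allow
```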

Next is the source, which determines the traffic source. The options here are:

“Any,” which makes it a general rule, meaning it applies to all traffic;

“CIDR block,” which is an IP address range or pattern; this allows us to filter on ranges of traffic. For example, we could create a rule that only opens up a port if the request comes from one of our subnets.

The third source option is “Tag,” which lets you choose one of the default tags.

When you’re creating rules it helps to be able to reference common IP addresses with a shortcut, so Azure provides three default tags.

“Default tags” are labels used to identify a category of IP addresses. For instance, the default tag that would apply a rule to the entire address space of your network is the VIRTUAL_NETWORK tag.

Or the AZURE_LOADBALANCER tag references Azure’s Infrastructure load balancer. And the INTERNET tag is for the IP address space outside the virtual network and reachable by public Internet.

Once you have your source, you need to tell the rule which ports it applies to. For that, you use the service.
There are two ways to do this: you can either select the service from a drop-down, such as “FTP” or “HTTPS,” or specify the protocol type and port range that the rule applies to.

And finally there’s the action, which determines if the rule should allow or deny traffic.
If set to allow, then any traffic from the source IP address will be allowed to flow through any defined ports. If set to deny, then it’s the opposite.

What we need for our solution is to allow traffic to port 5000 from, at a minimum, the front-end subnet. However, let’s leave this open to any traffic to make testing easier.

Okay, this will take a moment to create, so I’ll fast forward a bit to once it’s done.

Welcome back, the security rule was created, and now I’m here on the overview blade for the back-end VM that we just set up. And I want to make sure everything is working, so I’ll try browsing to the app.

So, if I copy this, and select go to… and I’ll make a quick edit, to add port 5000… perfect. So there’s our app in all its glory. If I look back at the terminal, you can see the request that we just made.
Alright, this is going to be our back-end application, and the front-end will be a web server that will serve as a reverse proxy.

Let’s take a moment to look back at the original diagram, and see how we’re doing.
Okay, so far we’ve created the vnet, a front-end and back-end subnet, VMs for the front-end and back-end, and we configured the back-end VM to run a simple web app.

However, we still need to configure the VM for the front-end and create the two load balancers, and that’s what we’ll do in the next lesson.

So, if you’re ready to wrap up this demo, then I’ll see you in the next lesson!

 

About the Author


Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.

When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.