
Demo - Implement a VM Scale Set with Autoscaling

The course is part of these learning paths

AZ-103 Exam Preparation: Microsoft Azure Administrator
Developing, Implementing and Managing Azure Infrastructure
3 Pillars of the Azure Cloud

Contents

Overview of the course
Summary

Overview

Difficulty: Beginner
Duration: 2h 17m
Students: 3351
Rating: 4.9/5

Description

Azure Resource Manager Virtual Machines

Virtual machines are a foundational resource in cloud computing. Deploying virtual machines gives you more flexibility and control over your cloud infrastructure and services; however, it also means you have more responsibility for maintaining and configuring those resources. This course gives you an overview of why to use virtual machines, as well as how to create, configure, and monitor VMs in Azure Resource Manager.

Azure Resource Manager Virtual Machines: What You'll Learn

Lesson | What you'll learn
Overview | Overview of the course and the learning objectives
What is a Virtual Machine? | Understand what Azure Virtual Machines are and which workloads are ideal for VMs
Creating and Connecting to Azure VMs | Learn to deploy Windows and Linux VMs, as well as how to connect to these VMs
Scaling Azure Virtual Machines | Understand VM scaling, load balancing, and Availability Sets in Azure Resource Manager
Configuration Management | Understand the basic concepts of Desired State Configuration and the options available to Azure VMs
Design and Implement VM Storage | Gain an understanding of the underlying storage options available to VMs, as well as encryption
Configure Monitoring & Alerts for Azure VMs | Learn to monitor VMs in Azure Resource Manager, as well as configure alerts
Summary | Course summary and conclusion

 

Transcript

In this demo, we will create a VM Scale set with 5 VM instances and configure Autoscaling. So here we are once again in the Azure Portal. In the search box at the top we’ll type “scale sets” and select the Virtual machine scale sets option. Let’s click Add to create a scale set. We’re presented with very basic options. Let’s call our scale set “vmss.” You may choose Windows or Linux. We’ll leave it as the default option of Windows. Let’s specify a username and password.

The next option says “Limit to a single placement group,” True or False. If you recall, in the Key Features lesson we mentioned that a scale set can act as a single unit, which lets you effectively scale the entire scale set at once. This works because scale sets live in a bigger container called a placement group, and a scale set can span multiple placement groups, similar to fault domains. We’ll leave this option as True, which essentially says that we’d like our scale set to stay within a single placement group.

Next let’s create a new Resource Group. If you choose to use an existing Resource Group, just know that it has to be empty and cannot contain any other resources. Let’s call it vmssRG and leave the default location as “East US.” Click OK.

The next step is to configure the definition properties of our VM instances. Recall that all VMs in a scale set are identical: they share the same OS and VM size. At the top we’re creating a new Public IP address, which will be the frontend connection endpoint for our Azure Load Balancer, which will automatically be provisioned during this process. You see, we will have several VM instances as part of our scale set, but instead of putting a Public IP on each VM as we’ve done previously, we’ll use an Azure Load Balancer with a single Public IP and perform Network Address Translation in order to reach the backend VM instances in our scale set, which we’ll see later.

We need to provide a unique DNS domain label. Let’s just call it “vmss123.” Let’s set the OS to 2012-R2-Datacenter. We want to pre-provision 5 VM instances. Let’s set our VM instance size to the Standard D1_V2 VM size. Again we are not using Managed Disks, so we’ll leave the default of Unmanaged.
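
As a reference, roughly the same deployment can be scripted with the Azure CLI from a Bash or Cloud Shell session. This is only a sketch of the Portal steps above, not the exact template the Portal generates: the admin credentials are placeholders, and the image alias and defaults may differ slightly from what the Portal uses.

    # Resource group for the scale set (the Portal created vmssRG for us)
    az group create --name vmssRG --location eastus

    # Scale set with 5 pre-provisioned Windows Server 2012 R2 instances,
    # unmanaged disks, a single placement group, and a load balancer
    # fronted by a public IP with the DNS label vmss123
    az vmss create \
      --resource-group vmssRG \
      --name vmss \
      --image Win2012R2Datacenter \
      --admin-username <username> \
      --admin-password <password> \
      --instance-count 5 \
      --vm-sku Standard_D1_v2 \
      --use-unmanaged-disk \
      --single-placement-group true \
      --public-ip-address-dns-name vmss123 \
      --upgrade-policy-mode Manual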

Now we get to the section on Autoscale, which is ‘Disabled’ by default. This is where it gets interesting. We’ll select “Enabled” and we’re now presented with 6 options: the first two configure the Autoscaling limits, the next two configure “Scale Out,” and the last two configure “Scale In.” Even though we’re pre-provisioning 5 VMs, let’s specify that we want to start off with only 2 running VMs by setting the Autoscale minimum number of VMs to 2. We’ll change the maximum number of VMs to be Autoscaled to 6.

For Scale Out, let’s set the CPU percentage threshold to 80%, which says that when 80% CPU utilization is hit across our scale set, we scale out. The next option lets us specify by how many machines to scale out at a time; let’s say 2.

Finally, for Scale In, let’s set the CPU percentage threshold to 30% for when demand decreases, and since we don’t want to scale back too fast, we’ll set the number of VMs to remove during Scale In to 1.
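
The autoscale profile we just configured in the Portal can also be sketched with the Azure CLI. The setting name cpuautoscale matches what the Portal creates later in this demo; the 5-minute averaging window is an assumption, since the Portal doesn’t expose it on this blade.

    # Autoscale setting: start at 2 instances, allow between 2 and 6
    az monitor autoscale create \
      --resource-group vmssRG \
      --resource vmss \
      --resource-type Microsoft.Compute/virtualMachineScaleSets \
      --name cpuautoscale \
      --min-count 2 --max-count 6 --count 2

    # Scale out by 2 instances when average CPU exceeds 80%
    az monitor autoscale rule create \
      --resource-group vmssRG \
      --autoscale-name cpuautoscale \
      --condition "Percentage CPU > 80 avg 5m" \
      --scale out 2

    # Scale in by 1 instance when average CPU drops below 30%
    az monitor autoscale rule create \
      --resource-group vmssRG \
      --autoscale-name cpuautoscale \
      --condition "Percentage CPU < 30 avg 5m" \
      --scale in 1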

Click OK. A validation takes place on our configuration and we are presented with a Summary screen. All looks good so we’ll click “Create” and the deployment begins. This can take quite a while as it’s building several resources including storage, networking and VMs, so I’ll see you in a bit.

We’re done! We’re currently looking at the Resource Group we’ve just created for our VM Scale Set resources. Let’s talk about what we’ve got. The first thing you’ll notice is that we have five storage accounts. This is because we specified that we want to pre-provision five VM instances. And although we said we only wanted to use two VM instances initially, you can see that the resources for all five are provisioned, just not yet in use. Second, you can see our virtual network. We also have a Public IP resource; this is the public IP address attached to the Azure Load Balancer, which we’ll come back to in a second. Finally, we have our actual VM Scale Set, so let’s take a look.

We’re presented with a nice CPU percentage graph across our VM Scale set. We can see the VNet our VM instances are associated with, as well as the VM instance size. We can also see our Public IP address, which is used by the Load Balancer, but it’s nice that they include it here as well. Let’s click on Instances, and here we see we only have two running instances, just as we planned. As we scale out, we will see more instances here.
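
If you prefer the command line, the same instance list can be pulled with the Azure CLI; this is just a sketch, and the output columns may vary by CLI version.

    # List the VM instances currently in the scale set
    az vmss list-instances --resource-group vmssRG --name vmss --output table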

Let’s take a look at our Scaling options. You can see that our VM instances all share the same VM size of Standard D1_V2. Remember that scaling up means increasing the VM size, so by simply increasing the VM size for all VM instances, we’ve effectively scaled up. Next we have the current number of instances, which is 2. We can easily change this number manually and save the changes. Then we have our Autoscaling options. We can see that Autoscaling is enabled, has a name of “cpuautoscale,” and in the Profiles section we can see the different rules that specify when to scale out and when to scale in.
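
The autoscale profile and its rules can also be inspected from the Azure CLI, assuming the setting name cpuautoscale shown in the Portal.

    # Show the autoscale setting, including its min/max counts and rules
    az monitor autoscale show --resource-group vmssRG --name cpuautoscale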

We’ll come back to Scaling, but for now let’s click Operating system. Here we can see our OS and image information. You’ll also notice that all our VM instances have a configurable computer name prefix set and have the Windows Azure VM Agent deployed, along with automatic updates enabled for the VMs.

Before we play with Scaling, let’s first discuss how we connect to our currently running instances. Let’s go back to our Load Balancer. From the Overview screen you can see that we have the Public IP address, two NAT rules, and a backend pool that consists of 2 virtual machines at the moment, because we have two running instances in our scale set. Let’s click Frontend IP pool. Here we can see the Public IP associated with the frontend pool, and we get the option of adding additional Public IPs. Selecting Backend pools, we can see our two backend instances along with the private IP addresses associated with each instance.

There are many other things we can look at with respect to Load Balancers, but for now let’s focus our attention on the Inbound NAT rules. These rules allow us to connect over the internet to our Load Balancer and, with a little Network Address Translation, reach our running VM scale set instances. Take a look at the Destination column. You’ll notice that every instance shares the same destination IP, which is the Load Balancer frontend IP. But if we use the same IP, how do we get to each individual instance? In the Service column you’ll see that our first instance connects via a custom port and our second instance connects via a different custom port. If you know the Remote Desktop Protocol (RDP), then you know that RDP connects over port 3389. Let’s click on the first rule. At the very bottom we can see the Target port is in fact port 3389. This address translation is what allows us to connect.
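
A quick way to see the same NAT mappings without clicking through each rule is the Azure CLI; this is a sketch, and the exact ports will depend on what the Load Balancer assigned.

    # Map each scale set instance to its public IP and NAT port
    az vmss list-instance-connection-info --resource-group vmssRG --name vmss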

So let’s connect to our first VM instance. Let’s launch mstsc.exe for our RDP connection. We’ll use our Azure Load Balancer Public IP, but we’ll append the specific port of one of our instances, and click Connect. We enter our configured login information, accept the certificate, and now we’re connected. If you look in Server Manager, our computer name correctly shows the computer name prefix followed by a number that corresponds to the instance ID.
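
For reference, the same connection can be started from a command prompt. The port here is a placeholder; use whichever frontend port the Inbound NAT rule lists for the instance you want to reach.

    # RDP to an instance through the load balancer's public IP and NAT port
    mstsc /v:<load-balancer-public-ip>:<nat-port>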

Excellent, now let’s see if we can scale up our VMs. Let’s go to the Scaling option and select one size higher than our current size. But after selecting Standard D2 v2, we get a warning message that says “The upgrade policy is set to Manual for this scale set. After applying this change, you will need to manually upgrade instances to the latest model to start using the new size.” This means that new VMs will get the new size, but we must manually upgrade our currently running instances. Let’s hit “Save” to update our VM scale set. When we look back at our Instances, you’ll notice that the “Latest model” column shows “No” because we have not yet upgraded the running instances to the latest model definition. Let’s do that now. Select both instances and click “Upgrade.” We typically wouldn’t want to upgrade all running instances at once, but for demo purposes it’s okay. This upgrades our two running instances to the new VM size that we’ve configured. You’ll also notice the Status changes to “Updating (Starting).” After a few minutes, we’re done, and we’ve successfully scaled up our VM Scale set.
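
The same scale-up can be done with the Azure CLI. The sku.name property path is an assumption based on the scale set resource model, so treat this as a sketch rather than the exact operations the Portal runs.

    # Change the scale set model to the larger VM size
    az vmss update --resource-group vmssRG --name vmss --set sku.name=Standard_D2_v2

    # Because the upgrade policy is Manual, roll the existing instances
    # to the latest model ("*" targets all instance IDs)
    az vmss update-instances --resource-group vmssRG --name vmss --instance-ids "*"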

Now let’s Scale Out. We have Autoscaling, which will automatically scale out our VMs based on a preset schedule or the Autoscale rules we’ve configured. Many of these rules are configurable through PowerShell or the Azure CLI for more flexibility than what we have in the Portal. But for now, let’s manually scale out to see how this works. Let’s increase our number of instances to 4 and click “Save.” Our VM scale set is spinning up new VMs equivalent to our initial VMs on the fly, as well as automatically configuring the NAT rules on the Load Balancer. It looks like we are going to have 5 VMs spin up, but again, Azure knows how to scale out gracefully as it’s working across different fault domains and update domains underneath. As you can see, now that we’ve completed our scale out, we have 4 running instances. Similarly, we can scale back in.
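
The same manual scale out is a one-liner in the Azure CLI, shown here as a rough equivalent of the Portal change.

    # Manually set the scale set capacity to 4 instances
    az vmss scale --resource-group vmssRG --name vmss --new-capacity 4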

Let’s take a quick look at the Load Balancer Inbound NAT rules. Here we can now see additional rules with corresponding unique ports for connecting to each individual instance. Remember also that if our VM scale set has an average CPU of less than 30% for 5 minutes, our VM Scale set will decrease the number of running instances by 1. After 5 minutes of waiting, we now automatically have 3 running instances rather than 4. When we view the “Activity Log” of our VM Scale set, we can see that a “Scaledown” action automatically took place.
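
If you’d rather check this from the command line, the activity log can be queried with the Azure CLI; the --offset window here is just an illustrative value.

    # Recent activity for the resource group, including autoscale actions
    az monitor activity-log list --resource-group vmssRG --offset 1h --output table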

At this point you’re probably wondering about the difference between the Availability set we created earlier and the VM Scale Set we just created. Well, first, a VM Scale set is an implicit Availability set with 5 fault domains and 5 update domains. In fact, Availability sets and VM Scale sets can exist on the same virtual network. However, unlike Availability sets, where you can have all different types of VMs and VM sizes within a set, VMs in a Scale set are all identical and have identical sizes. This is why you have to pre-provision VMs in an Availability set, since each resource is unique, but no pre-provisioning of VMs is necessary in a VM Scale set.

This brings up a great discussion about how to use these technologies together. It’s a best practice to put your main server VMs, which have a unique configuration, in an Availability set, and then put all your workhorse VM instances in a Scale set. In other words, when thinking about the architecture of your application or service together with the infrastructure, all the unique parts of your application infrastructure that need specific resources can be deployed within an Availability set to gain high availability, and the non-unique parts of your application infrastructure can be deployed within a VM Scale Set, since it’s only a matter of increasing or decreasing capacity using the same building blocks.

About the Author

Students: 5073
Labs: 1
Courses: 3
Learning paths: 1

Chris has over 15 years of experience working with top IT enterprise businesses. He worked at Google, helping to launch Gmail, YouTube, Maps, and more, and most recently at Microsoft, working directly with Microsoft Azure for both commercial and public sectors. Chris brings a wealth of knowledge and experience to the team in architecting complex solutions and advanced troubleshooting techniques. He holds several Microsoft certifications, including Azure certifications.

In his spare time, Chris enjoys movies, gaming, outdoor activities, and Brazilian Jiu-Jitsu.