In this lesson, we will discuss automatically scaling compute resources, with a focus on horizontal scaling.
We will use Azure Monitor’s built-in autoscale feature to scale compute resources up and down based on metrics.
We will explain what Scale Sets are and how to use them to organize your VMs as large groups instead of individual machines. We will also look at how Scale Sets can be configured through Azure Service Fabric.
We will cover what autoscaling is and where it is available in the Azure platform. You will learn how to set the minimum and maximum number of instances or cores used for autoscaling.
Finally, we will discuss how Azure Functions can be used to replace VMs and containers entirely.
Azure has multiple systems for automatically scaling compute resources. Our focus will be on horizontal scaling - namely, adding additional compute resources instead of trying to switch to a larger instance.
The simplest approach with Azure is to use Azure Monitor’s built-in autoscale feature. This lets you automatically scale compute resources up and down based on metrics.
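As a rough illustration, here is a minimal sketch of defining such an autoscale setting programmatically, assuming the azure-identity and azure-mgmt-monitor Python packages. The resource group, setting name, and target resource URI are placeholders, and exact model and method names can vary between SDK versions.

```python
# Sketch: create a metric-based autoscale setting with the Azure Monitor SDK.
# Assumes `pip install azure-identity azure-mgmt-monitor`; the subscription ID,
# resource names, and target resource URI below are placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    AutoscaleSettingResource, AutoscaleProfile, ScaleCapacity,
    ScaleRule, MetricTrigger, ScaleAction,
)

subscription_id = "<subscription-id>"
target_resource_uri = (
    f"/subscriptions/{subscription_id}/resourceGroups/demo-rg/providers/"
    "Microsoft.Compute/virtualMachineScaleSets/demo-vmss"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Scale out by one instance when average CPU exceeds 70% over five minutes.
scale_out_rule = ScaleRule(
    metric_trigger=MetricTrigger(
        metric_name="Percentage CPU",
        metric_resource_uri=target_resource_uri,
        time_grain=timedelta(minutes=1),
        statistic="Average",
        time_window=timedelta(minutes=5),
        time_aggregation="Average",
        operator="GreaterThan",
        threshold=70,
    ),
    scale_action=ScaleAction(
        direction="Increase", type="ChangeCount",
        value="1", cooldown=timedelta(minutes=5),
    ),
)

client.autoscale_settings.create_or_update(
    resource_group_name="demo-rg",
    autoscale_setting_name="demo-autoscale",
    parameters=AutoscaleSettingResource(
        location="eastus",
        target_resource_uri=target_resource_uri,
        enabled=True,
        profiles=[AutoscaleProfile(
            name="default",
            capacity=ScaleCapacity(minimum="2", maximum="10", default="2"),
            rules=[scale_out_rule],
        )],
    ),
)
```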
Azure VMs also have a concept known as ‘Scale Sets.’ A Scale Set is simply a group of VMs that can be given autoscaling rules. They are similar to AWS Auto Scaling groups and make for a handy way to organize and think about your VMs in terms of large groups instead of individual machines. Scale Sets can also be configured through Azure Service Fabric. Each node type in a Service Fabric cluster can be a separate VM scale set, thus allowing each node type to scale up or down independently.
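To make the “group of VMs with a single capacity knob” idea concrete, here is a small sketch assuming the azure-mgmt-compute Python package; it reads a Scale Set’s current instance count and bumps it by one. The resource group and scale set names are placeholders, and long-running method names (such as begin_update) differ between SDK versions.

```python
# Sketch: treat a Scale Set as one unit and adjust its instance count.
# Assumes `pip install azure-identity azure-mgmt-compute`; names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import VirtualMachineScaleSetUpdate, Sku

subscription_id = "<subscription-id>"
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

vmss = compute.virtual_machine_scale_sets.get("demo-rg", "demo-vmss")
print(f"{vmss.name} currently runs {vmss.sku.capacity} instances")

# Scale out by one instance; an autoscale setting would do this for you
# automatically based on its rules.
poller = compute.virtual_machine_scale_sets.begin_update(
    "demo-rg",
    "demo-vmss",
    VirtualMachineScaleSetUpdate(
        sku=Sku(name=vmss.sku.name, tier=vmss.sku.tier,
                capacity=vmss.sku.capacity + 1)
    ),
)
poller.result()  # wait for the scale operation to finish
```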
Azure autoscaling is also available for the Azure App Service. Autoscaling is actually built right in as an app-level setting. It allows you to scale based on a designated metric: you pick the metric and a target value, and use a slider to set your minimum and maximum number of instances. Similarly, Azure Cloud Services have a setting for autoscaling. The difference here is that scaling is based on the number of cores being used. Depending on your subscription, you may have a limit on the maximum number of cores available for autoscaling. You can set this up by clicking on the scale tab in the portal and setting it to ‘automatic.’
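As a purely conceptual sketch (not an Azure API call), the following shows how a per-subscription core quota and the min/max bounds you would otherwise set with the portal slider constrain the instance counts that core-based autoscaling can reach. The function names and numbers here are made up for illustration.

```python
# Conceptual sketch: how a core quota bounds core-based autoscaling.
def max_autoscale_instances(subscription_core_quota: int,
                            cores_per_instance: int,
                            cores_in_use_elsewhere: int = 0) -> int:
    """Largest instance count the remaining core quota allows for this size."""
    available_cores = subscription_core_quota - cores_in_use_elsewhere
    return max(available_cores // cores_per_instance, 0)

def clamp_instance_count(desired: int, minimum: int, maximum: int) -> int:
    """Apply the min/max bounds of the autoscale setting."""
    return max(minimum, min(desired, maximum))

# Example: a 20-core quota with 4-core instances caps autoscale at 5 instances,
# so a desired count of 8 is clamped down to 5.
cap = max_autoscale_instances(subscription_core_quota=20, cores_per_instance=4)
print(clamp_instance_count(desired=8, minimum=2, maximum=cap))  # -> 5
```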
Finally, probably the most bleeding-edge approach to autoscaling would be to cut out VMs and containers entirely and just use Azure Functions. If you can adopt a serverless paradigm wherein your app logic is defined using Azure Functions, then you won’t need to think about autoscaling at all. The Azure platform automatically allocates compute resources as necessary to any code running as an Azure Function in your account.
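For a sense of how little scaling configuration is involved, here is a minimal HTTP-triggered function using the Azure Functions Python programming model (the v2 decorator style, assuming the azure-functions package); the route name is arbitrary.

```python
# function_app.py - a minimal HTTP-triggered Azure Function (Python v2 model).
import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.route(route="hello")
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # The platform decides how many instances run this code; there is no
    # instance count, scale rule, or VM size to manage here.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")
```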
We won’t focus much on non-Azure autoscaling systems. I’ll instead just take a quick second to remind you that such systems exist. Kubernetes, for example, has support for horizontal scaling of ‘pods’ based on metrics. You can also integrate configuration management tools like Chef with your monitoring system to create your own autoscaling logic.
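As one example outside Azure, here is a minimal sketch of creating a Kubernetes Horizontal Pod Autoscaler with the official kubernetes Python client; the deployment name, namespace, and thresholds are placeholders.

```python
# Sketch: create an autoscaling/v1 Horizontal Pod Autoscaler for a deployment.
# Assumes `pip install kubernetes`; "web" and "default" are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out when avg CPU > 70%
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```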
So now that we have a good understanding of what autoscaling looks like in Azure, it is time to broaden our automation scope. In the next lesson we will discuss how to automate arbitrary tasks in Azure using a variety of different approaches. See you there.
Jonathan Bethune is a senior technical consultant working with several companies including TopTal, BCG, and Instaclustr. He is an experienced devops specialist, data engineer, and software developer. Jonathan has spent years mastering the art of system automation with a variety of different cloud providers and tools. Before he became an engineer, Jonathan was a musician and teacher in New York City. Jonathan is based in Tokyo where he continues to work in technology and write for various publications in his free time.