
Compute Part 2

Overview
Difficulty: Intermediate
Duration: 57m
Students: 550

Description

This is the first of six preparation courses for the Architecting Microsoft Azure Solutions 70-534 certification exam. By the end of this course, you will have gained a solid understanding of Azure data center and VPN architecture. We will cover Azure’s use of Global Foundation Services for its data centers, virtual networks, Azure Compute (IaaS, virtual machines, fault domains), VPNs, and ExpressRoute. This session will also feature a high-level discussion of Azure services (load balancing options, Traffic Manager, and more).

Transcript

Welcome back. In this lesson we'll pick up on our discussion of compute resources in Azure. We're going to talk about configuration management and auto-scaling.

Let's start with configuration management. To facilitate using configuration management tools on Azure VMs, we can use the VM agent, which is installed by default on both Windows and Linux VMs in Azure and allows OS-specific extensions to be added to the VM.
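
As a quick illustration of how the VM agent exposes extensions, here's a minimal sketch that browses the extension catalog using the AzureRM PowerShell module of that era; the module, the "West US" location, and the publisher filter are assumptions for illustration rather than something shown in the course.

```powershell
# A minimal sketch, assuming the AzureRM PowerShell module from the 70-534 era.
# The "West US" location and the publisher name are placeholders.

# List a few of the publishers that offer VM extension images.
Get-AzureRmVMImagePublisher -Location "West US" |
    Where-Object { $_.PublisherName -like "Microsoft.*" } |
    Select-Object -First 5 PublisherName

# List the extension types published by Microsoft.Powershell (this includes DSC).
Get-AzureRmVMExtensionImageType -Location "West US" -PublisherName "Microsoft.Powershell" |
    Select-Object Type
```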

In the configuration management world there are a few standout tools. Chef and Puppet are two of them; they've existed for quite some time and are both widely used in the Linux world.

However, Microsoft has come out with an extension to PowerShell called DSC, which stands for Desired State Configuration. PowerShell DSC is a way to declaratively specify the state of a set of virtual machines, which we generally call nodes, with a configuration file that contains a number of resources, and then we have that state applied to a given machine. It supports many different types of resources, some of which are shown here. For example, we could use DSC to ensure MongoDB is installed and configured whenever the script is run, and to create a user account with permissions to access the database. You can also create custom resources yourself through a PowerShell module containing a script.

DSC allows us to repeatedly run the same script, and it won't reinstall the same component if it's already been applied. DSC will ensure that all of the configuration steps are applied correctly, and it will even tell you if the configuration has drifted from what it should be since the last time it was applied. The Local Configuration Manager, abbreviated LCM, is the engine that DSC uses to facilitate interaction between resources and configurations. The LCM regularly polls the system, using the control flow implemented by each resource, to ensure that the state specified in a configuration is maintained.
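
To make the drift detection and the LCM a bit more concrete, here's a small sketch using the built-in DSC cmdlets; it assumes PowerShell 5 or later on a node that already has a configuration applied, which isn't something the course walks through explicitly.

```powershell
# A small sketch, assuming PowerShell 5+ on a node with a configuration already applied.

# Inspect the Local Configuration Manager, including its refresh and polling settings.
Get-DscLocalConfigurationManager

# Returns True if the node still matches its configuration, False if it has drifted.
Test-DscConfiguration -Verbose

# Show the outcome of the most recent configuration run on this node.
Get-DscConfigurationStatus
```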

DSC adds the Configuration keyword to PowerShell, and underneath that we can specify which nodes the configuration will apply to. Here we have a script that will ensure that FTP is enabled on the VM.
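
The slide itself isn't reproduced in the transcript, but a DSC configuration along these lines would do the same job; the configuration name, node name, and output path below are placeholders rather than the exact script from the course.

```powershell
# Illustrative reconstruction; names and paths are placeholders.
Configuration EnableFtp
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "WebServer01"
    {
        # Ensure the Windows FTP server feature is installed.
        WindowsFeature FtpServer
        {
            Name   = "Web-Ftp-Server"
            Ensure = "Present"
        }
    }
}

# Compile the configuration into a MOF file, then have the LCM apply it.
EnableFtp -OutputPath .\EnableFtp
Start-DscConfiguration -Path .\EnableFtp -Wait -Verbose
```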

Azure also has the custom script extension as a means to execute code on a VM to facilitate configuration. You can apply the extension when creating the VM or when the VM is running.
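
As a hedged sketch of that second case, applying the extension to a VM that's already running with the AzureRM module might look something like this; the resource group, VM name, storage account, and script URL are made-up placeholders.

```powershell
# Placeholders throughout: resource group, VM, storage account, and script name.
Set-AzureRmVMCustomScriptExtension `
    -ResourceGroupName "demo-rg" `
    -VMName "web-vm-01" `
    -Location "West US" `
    -Name "ConfigureSite" `
    -FileUri "https://demostorage.blob.core.windows.net/scripts/configure-site.ps1" `
    -Run "configure-site.ps1"
```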

With cloud computing, developers tend to need greater knowledge of the architecture of the virtual infrastructure, and this gives them a better understanding of the operational tasks required to deploy the applications they're building. The gap between development and operations is narrowing. Cloud computing requires developers and operations to work in an agile world where things change more frequently. That means developers and operations need to collaborate more, and that's essentially what DevOps is based on.

In order to help developers better convey infrastructure requirements to operations engineers, developers can use Azure Resource Manager templates, also referred to as ARM templates. ARM templates are a formal template language expressed in JSON that describes services in terms of properties and service dependencies, and even allows for the use of parameters to make our templates more reusable. ARM templates are a description language that doesn't require imperative code. The Resource Manager knows how to deploy any single service, and it knows which order dependencies should be deployed in. So, using ARM templates allows us to create our infrastructure, and with DSC we can even configure the VMs. So, we have options for infrastructure as code with Azure as well.
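
For a sense of the workflow, here's a minimal sketch of validating and deploying an ARM template with the AzureRM module; azuredeploy.json and its parameters file are hypothetical files used only to show the shape of the commands.

```powershell
# Hypothetical template and parameter files; the resource group name is a placeholder.
New-AzureRmResourceGroup -Name "demo-rg" -Location "West US"

# Validate the template and parameter values before deploying anything.
Test-AzureRmResourceGroupDeployment `
    -ResourceGroupName "demo-rg" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"

# Deploy; Resource Manager works out the order in which dependent resources are created.
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "demo-rg" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"
```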

We talked about how configuration management allows us to get the software we need installed and configured. However, in recent years, another option has emerged, and that's containerization. Containerization offers value to both developers and operations. It allows a developer to bundle the code with supporting libraries and any files that make up the desired version of Linux, while sharing the kernel of the OS running the container. Modern containers use layers, which are kind of like a set of file system diffs, allowing the reuse of common layers, and this makes containers a fairly lightweight way to package up our code and ensure that it runs the same way in every environment. For operations, deployment of any containerized app becomes standardized, because once a container orchestration method is in place, all the containers will be deployed using that same process.

The most popular container engine is currently Docker. Azure supports Docker in two ways. You can set up your own Linux VMs and install Docker on them yourself, or you can use Azure Container Service, which is a platform as a service, or PaaS, that hides the infrastructure requirements. With Docker you can run just about anything you'd run on Linux, and that includes .NET Core. Windows containers are in development too, so keep an eye out for those.
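
To make the developer-side workflow concrete, here's a simple sketch using the Docker CLI, which can be driven from PowerShell or any other shell; the image name and port mapping are placeholders.

```powershell
# Build an image from the Dockerfile in the current directory (placeholder tag).
docker build -t demo/orders-api:1.0 .

# Run it detached, mapping port 8080 on the host to port 80 in the container.
docker run -d -p 8080:80 --name orders-api demo/orders-api:1.0

# The same image runs unchanged on a laptop, a Docker-enabled Azure VM,
# or a cluster provisioned by Azure Container Service.
docker ps
```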

Let's move on to talk about auto-scaling, since it needs to be covered for the exam. In addition to manually scaling a VM up or down, you can use Azure's auto-scaler to scale out as needed. That means increasing or decreasing the number of active instances in a service, which is also referred to as horizontal scaling. One restriction is that the VMs in the auto-scale pool must all be the same size; as an example, they'll all have to be A1s or D2s. Whatever it is, they just have to be the same. And you can auto-scale in a number of ways. First, you can scale based on something like date and time. So, if you know that your application has heavy traffic on Mondays between 8:00 and 10:00 am, you may want to ensure that you have more capacity to deal with the demand. Another option is to scale dynamically based on CPU load, which allows us to set limits on CPU load and determine when to scale out and in. We can also scale based on the number of messages in an Azure storage queue.
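
As a hedged sketch of a CPU-based rule, here's roughly what the AzureRM.Insights cmdlets of the same era look like when targeting a VM scale set (the newer scale-set resource mentioned at the end of this lesson); the resource IDs, names, and thresholds are placeholders, and parameter names can differ slightly between module versions.

```powershell
# Placeholder scale set resource ID.
$targetId = "/subscriptions/<sub-id>/resourceGroups/demo-rg/providers/Microsoft.Compute/virtualMachineScaleSets/demo-pool"

# Add one instance when average CPU stays above 70% over a five-minute window.
$scaleOut = New-AzureRmAutoscaleRule `
    -MetricName "Percentage CPU" `
    -MetricResourceId $targetId `
    -Operator GreaterThan `
    -MetricStatistic Average `
    -Threshold 70 `
    -TimeGrain 00:01:00 `
    -TimeWindow 00:05:00 `
    -ScaleActionCooldown 00:05:00 `
    -ScaleActionDirection Increase `
    -ScaleActionValue 1

# Keep between 2 and 10 instances, defaulting to 2.
$autoscaleProfile = New-AzureRmAutoscaleProfile `
    -Name "cpu-profile" `
    -DefaultCapacity 2 -MinimumCapacity 2 -MaximumCapacity 10 `
    -Rule $scaleOut

# Attach the autoscale setting to the target resource.
Add-AzureRmAutoscaleSetting `
    -Name "demo-autoscale" `
    -ResourceGroup "demo-rg" `
    -Location "West US" `
    -TargetResourceId $targetId `
    -AutoscaleProfile $autoscaleProfile
```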

In order to use auto-scaling, you need to be using an availability set. Availability sets have two functions. First, they facilitate auto-scaling. Second, they give us a way to ensure maximum uptime for replicated virtual machines. Let's assume that you have seven instances of a single application running behind a load balancer. One thing you'll want to ensure is that if any updates happen to the VMs, operating system updates, et cetera, they don't happen simultaneously to all of the machines. We talked about update domains earlier, but since it's important, it was worth mentioning again here. It's worth noting that the exam currently covers the older method for things such as scaling, so that's what we're covering here. However, there are newer ways of scaling VMs with a VM scale set, so keep that in mind.
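
For completeness, here's a minimal sketch of creating an availability set with explicit fault and update domain counts using the AzureRM module; the names and counts are placeholders, not values from the course.

```powershell
# Placeholders for resource group, name, and location.
New-AzureRmAvailabilitySet `
    -ResourceGroupName "demo-rg" `
    -Name "web-avset" `
    -Location "West US" `
    -PlatformFaultDomainCount 2 `
    -PlatformUpdateDomainCount 5

# VMs created with their -AvailabilitySetId pointing at this set are spread across
# those fault and update domains, so planned maintenance never hits every instance at once.
```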

Okay, let's wrap up the lesson here. In our next lesson we'll be covering virtual private networks, so if you're ready to keep learning, then let's get started with the next lesson.

About the Author


Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.

When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.