
Configure Cloud Services

Overview
Difficulty: Intermediate
Duration: 1h 1m
Students: 173

Description

Course Description

This course shows you how to use Azure's Cloud Service platform service offering.

Course Objectives

By the end of this course, you'll have gained a firm understanding of the key components that comprise the Azure Cloud Service platform, and you will achieve the following learning objectives:

  • How to design and develop an Azure Cloud Service. 
  • How to configure and deploy a cloud service. 
  • How to monitor and debug a cloud service. 

Intended Audience

This course is intended for individuals who wish to pursue the Azure 70-532 certification.

Prerequisites

You should have work experience with Azure and general cloud computing knowledge.

This Course Includes

  • 1 hour and 2 minutes of high-definition video.
  • Expert-led instruction and exploration of important concepts surrounding Azure cloud services.

What You Will Learn

  • A general overview of Azure cloud services.
  • How to design, deploy, and manage cloud services.
  • How to monitor your cloud services and debug them. 

Transcript

Hello and welcome back. In this section, we'll see what options we have in order to configure a cloud service.

There are a number of elements involved in configuring a cloud service. Firstly, in this section, we'll see how we're able to change the virtual machine size of the machines in our cloud service, as well as how we're able to change the number of instances for a role, either manually or through the auto-scaling facilities provided by Azure. We'll look at how we're able to configure networking for a given cloud service, and how we can add local storage space to the cloud service instances. Next, we'll move on to using a single web role to host multiple web applications, as well as how to configure custom domains for any web application we host. And finally, we'll look at some of the caching options available to us within cloud services.

A cloud service can be scaled in one of two ways. Firstly, we can scale it vertically. In other words, get a bigger box: more CPU cores, more RAM, and so on. This is a simple way to increase throughput without having to worry about concurrency, but it might not always be feasible or cost effective. Secondly, we can scale horizontally. In other words, get more machines. Each node is identical to the next, and adding nodes allows us to increase the size of the cluster to get more work done. We also benefit from fault tolerance with this model: if one node goes down, the system can carry on. You might find that mixing and matching these approaches, rather than using one or the other, works best. For example, an application may need at least a minimum amount of memory to function effectively, but you still might want to scale out the service to allow multiple instances to run simultaneously.

Let's take a look in this demo, and we'll see how we can specify the size of a cloud service project using Visual Studio. Here we have the same cloud service solution we created earlier. We're now going to navigate to the properties of the role we want to size up. In this case, we'll size up the worker role. We can size the role in the cloud service in one of two ways. Firstly, we can change the size of the physical virtual machine that is used to house our service. This is known as the VM size. These map to the VM sizes you'll see in the Azure portal for standard VMs. From here, we're able to select the appropriate VM size. I'm going to select the extra large size. When we deploy this service in the future, Azure will now use the extra large VM size to host the cloud service. We can also set the number of instances. In this case, I'm going to change the number of instances to five. The next time we deploy into Azure, we'll end up with five instances of this role. If we now go to the cloud service definition file, we'll see that the VM size for the worker role is extra large. And if we navigate to the service configuration file, we'll see that we now have five instances. And this concludes the demo.
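The exact files aren't reproduced in the transcript, but the two settings described above end up in the service definition and service configuration files along these lines (a sketch; the role name WorkerRole1 is a placeholder):

```xml
<!-- ServiceDefinition.csdef (fragment): the VM size is the vmsize
     attribute on the role element. -->
<WorkerRole name="WorkerRole1" vmsize="ExtraLarge">
  <!-- role contents elided -->
</WorkerRole>

<!-- ServiceConfiguration.cscfg (fragment): the instance count for
     the same role. -->
<Role name="WorkerRole1">
  <Instances count="5" />
</Role>
```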

As an application comes under increasing load, we want to ensure that it remains responsive. In order to achieve this, we're able to use auto scaling to automatically add new instances as the application runs and is judged to need more of them. Auto scaling allows us to respond to user demand in a cost-effective manner. We may start out with a single VM instance, but as demand increases, we scale up to a larger set of workers. Then as demand tails off, we scale back down again.

There are three main metrics that we can use to trigger an auto scale event. Firstly, we can look at metrics on the instances themselves. For example, if the CPU load across all nodes gets too high or too low, we can instruct Azure to resize the cluster of nodes as required. Secondly, we can use the length of a queue. It's common to use queues as a means of sending work to a cluster, in what is known as the competing consumers pattern. We can therefore use the queue length as a rough indicator of whether the current service is able to process work quickly enough or not. If the queue length grows too large, we can scale up the cluster size. Another means of auto scaling is based on time. The applications we develop may not see usage 24 hours per day, and instead may only be used between 9 a.m. and 5 p.m. By auto scaling based on time, we're able to scale out the number of instances only during the times when the application is actually in use.

In this demo, we'll see how to scale a cloud service using the Azure Portal. Let's start with the cloud service we already deployed into Azure, taken from our earlier solution. We can see we have two roles, and one instance for each of the roles. Let's scale up the worker role to two nodes manually. We enter the roles blade and select scale. Note that it's not possible within the portal to change the VM size of a cloud service. That can only be done at deployment time. If we want to scale up the number of instances, however, that's easy. We simply drag the slider to choose how many instances we want, and we'll select two in this case, and then we'll hit save. And after a short delay, we'll see the new instance showing up in the portal. We can now see that the new instance of the worker role has been provisioned by Azure.

Let's now move on and start looking at the auto scaler in Azure. In the same scale settings blade, we'll choose to scale by schedule and performance. Previously, we picked an instance count and entered it manually. We first create a profile that specifies the times during which a set of rules will be active. The default profile shows as always on, but we can change this to a recurrence, for example, every day starting at 9 a.m. Alternatively, we can specify specific dates. We also set upper and lower bounds here for the profile. Once we have defined the upper and lower bounds, and the schedule of the profile, we'll now define some rules that fit the profile.

For example, we might choose to check if the CPU is over 70% on average over a period of 30 minutes. We then have to choose the action to take if the rule passes. For example, we may choose to increase the count by one. In other words, increase the number of nodes by one. We can choose from a number of metrics; for instance, we can choose a storage or service bus queue, and use the queue length as the metric from which to increase or decrease the number of nodes. And this concludes the demo.

Let's talk a little about networks and endpoints on a cloud service. Every cloud service has a number of instances, as we know. All of them reside behind the load balancer, which lets traffic through and shares the load of the inbound network traffic across all nodes. The first type of endpoint is the input endpoint, whereby the load balancer randomly decides which cloud service instance will receive the traffic. We also have the option of using an instance endpoint, which allows us to direct the traffic to a specified cloud service instance. Finally, we also have the internal endpoint, which allows us to communicate between cloud service instances without having to go through the public internet.

By using SSL, we're able to create a secure communication channel between the user's web browser and the web role, which encrypts all traffic between them. This ensures that no external third parties are able to interfere with the data flow between the two endpoints. The user is also able to verify that the web app they are connecting to is the correct web app, and that they're not being redirected to a malicious third party site. SSL is a requirement for many applications that deal with sensitive data such as passwords or credit card information. In this section, we'll see how we're able to configure SSL for an Azure web role.

Let's walk through this, step by step, and see how to use SSL with a cloud service. There are three steps. Firstly, in order to configure a role for SSL, we first need to acquire what is known as a certificate. Preferably, this should be done through one of the major certificate vendors. However, for the purposes of this walkthrough, we can use a self-signed certificate, which is fine for testing. Next, we upload the certificate to the cloud service. And then thirdly, we configure the cloud service solution locally with the certificate, and then deploy it.

We'll start with creating the certificate. To do this, open a developer command prompt for your version of Visual Studio as an administrator, and execute the MakeCert command, as we can see here, substituting the CN name and the name of the .cer file with values appropriate to your application. Note that on Windows 10, you'll need to install the Windows 10 SDK in order to get MakeCert on your machine. This generates the certificate in .cer format. Unfortunately, Azure expects certificates in the PFX format, so we need to convert it. To do that, we'll load up the certificate manager, and locate the generated certificate in the personal certificate store. We right click, select all tasks, and then choose export. We now need to proceed through the export wizard. The first step is to ensure we export the private key. Then ensure we're exporting a PFX file as required by Azure, and finally, supply a password and output location.
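The exact command isn't reproduced in the transcript; a self-signed test certificate along the lines Microsoft documented for cloud services would look roughly like this, with the CN and output file name as placeholders to substitute:

```shell
:: Run from an administrator Developer Command Prompt.
:: "yourapp.cloudapp.net" and "yourapp.cer" are placeholders.
makecert -sky exchange -r -n "CN=yourapp.cloudapp.net" ^
         -pe -a sha256 -len 2048 -ss My yourapp.cer
```

The -ss My switch also places the certificate in the personal certificate store, which is where the export-to-PFX step described above picks it up.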

Now that we've created the certificate, we need to upload it to the Azure cloud service. We navigate to the certificates tab on the portal, and click upload in the task bar. This will prompt us for the PFX file we previously exported and the password we added to the file. We now need to configure the web role to use this certificate. We navigate to the properties for the web role, and open the certificates tab. We click add certificate, which adds a new row to the grid, from which we can then name the certificate. By clicking on the ellipsis in the thumbprint field, we can select the certificate.

Next, we need to create a new SSL endpoint that uses the SSL certificate we have configured. Going to the endpoints tab in the web role properties, we click add endpoint, and supply a name for the endpoint. We change the protocol to HTTPS, and change the public port to 443. Then in the certificate drop-down, we choose the certificate we previously configured.
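Behind the Visual Studio dialogs, the certificate and HTTPS endpoint end up in the service definition roughly as follows (a sketch; the role, certificate, and endpoint names are placeholders):

```xml
<!-- ServiceDefinition.csdef (fragment): declare the certificate,
     then reference it from an HTTPS input endpoint. -->
<WebRole name="WebRole1">
  <Certificates>
    <Certificate name="SSLCertificate"
                 storeLocation="LocalMachine" storeName="My" />
  </Certificates>
  <Endpoints>
    <InputEndpoint name="HttpsIn" protocol="https" port="443"
                   certificate="SSLCertificate" />
  </Endpoints>
</WebRole>
```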

Let's now look at traffic filtering rules. There are two types: network traffic rules and access control lists. Network traffic rules allow us to restrict which endpoints within our roles can exchange traffic. We can see here how we have two internal endpoints defined for the worker roles. We're then able to use network traffic rules to specify that traffic should only flow between certain endpoints. In order to do this, we add a network traffic rules section to the service definition (.csdef) file, which is then built up of the traffic rules. In this case, we specify that traffic to worker role 2's endpoint is only allowed if the source matches one of the rules listed; here, the traffic is only allowed if it originates in worker role 1. We then have a second traffic rule, in this case determining what happens to the traffic destined for worker role 1, and the rule states that we allow all traffic coming from the worker roles.
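The first rule described above takes roughly this shape in the service definition (a sketch; role and endpoint names are placeholders):

```xml
<!-- ServiceDefinition.csdef (fragment): only allow traffic to
     WorkerRole2's internal endpoint when it comes from WorkerRole1. -->
<NetworkTrafficRules>
  <OnlyAllowTrafficTo>
    <Destinations>
      <RoleEndpoint endpointName="InternalEP" roleName="WorkerRole2" />
    </Destinations>
    <WhenSource matches="AnyRule">
      <FromRole roleName="WorkerRole1" />
    </WhenSource>
  </OnlyAllowTrafficTo>
</NetworkTrafficRules>
```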

If we want to limit traffic to a cloud service where it's coming from a different source, such as a virtual machine or an on-premises application, then we're unable to use network traffic rules. Instead, we're able to use access control lists, which allow us to create specific rules that limit traffic. In this example, we created a rule which only allows access to the cloud service if it originates from an IP in the specified 70.181.131.0/28 subnet, which is designed to act as the IP range for a specific office location. If the IP address is not within the subnet, then the traffic never reaches the cloud service.
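An access control list of this kind lives in the service configuration's network configuration section, roughly as below (a sketch; the ACL, role, and endpoint names are placeholders, while the subnet comes from the example above):

```xml
<!-- ServiceConfiguration.cscfg (fragment): permit only the office
     subnet, then bind the ACL to a role endpoint. -->
<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="OfficeOnly">
      <Rule action="permit" description="Office IP range"
            order="100" remoteSubnet="70.181.131.0/28" />
    </AccessControl>
  </AccessControls>
  <EndpointAcls>
    <EndpointAcl role="WebRole1" endPoint="HttpIn"
                 accessControl="OfficeOnly" />
  </EndpointAcls>
</NetworkConfiguration>
```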

We can easily add a cloud service to an existing virtual network by modifying the service configuration for the cloud deployment. Assuming we already have a virtual network created called compitinternalVNet, we can then assign the worker role to be deployed into a subnet within that virtual network, in this case, into a subnet called WorkerApplications.
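In the service configuration, that assignment looks roughly like this (a sketch; the role name WorkerRole1 is a placeholder, while the network and subnet names come from the example above):

```xml
<!-- ServiceConfiguration.cscfg (fragment): place the worker role's
     instances into a subnet of an existing virtual network. -->
<NetworkConfiguration>
  <VirtualNetworkSite name="compitinternalVNet" />
  <AddressAssignments>
    <InstanceAddress roleName="WorkerRole1">
      <Subnets>
        <Subnet name="WorkerApplications" />
      </Subnets>
    </InstanceAddress>
  </AddressAssignments>
</NetworkConfiguration>
```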

We might also want to reserve the IP of a cloud service, for example, to ensure that it's not changed when the service is deprovisioned or reprovisioned. Before we're able to assign a reserved IP to a cloud service, we first need to create one. This can be achieved by using the New-AzureReservedIP cmdlet, specifying a location and a name. We can then modify the service configuration file to add the reserved IP to the network configuration, supplying the name of the reserved IP address we previously created. We can also specify a public IP address for a cloud service, as opposed to an Azure-allocated one. In this example, we use a public IP address called pubIP, and assign it as the network address of our cloud service, replacing the reserved IP we saw previously.
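Putting the two steps together, a sketch might look like the following; the reserved IP name and location are placeholders (created first with PowerShell, e.g. New-AzureReservedIP -ReservedIPName "MyReservedIP" -Location "West US"):

```xml
<!-- ServiceConfiguration.cscfg (fragment): reference the reserved IP
     created earlier so the service keeps the same address across
     deprovisioning and reprovisioning. -->
<NetworkConfiguration>
  <AddressAssignments>
    <ReservedIPs>
      <ReservedIP name="MyReservedIP" />
    </ReservedIPs>
  </AddressAssignments>
</NetworkConfiguration>
```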

The cloud service model allows us to create a storage area on the host VM's disk, which allows us to persist data locally. This data is then accessible across restarts of the VM or during fresh deployments. If the cloud service is migrated to a new virtual machine, however, then the data is lost. This makes it useful for temporary data that can be rebuilt if needed, such as a local disk cache or similar. Let's see how to first create and then use local storage.

To create a local storage space in a role, we need to access the properties for that role, and go into the local storage section. From here, we're able to create a named container along with a maximum storage size. We can also choose to clear out the storage whenever the role restarts, by ticking the clean on role recycle option.

In this demo, we'll see how to create and access local storage in C#. Here, I have the same cloud service I created earlier. In order to create local storage, I simply navigate to the settings of the role, and add an entry for the local storage. Note that depending on the VM size, the amount of local storage you may use could be limited. Now in code, we'll access this local storage. First, we need to retrieve the path to the local storage from the RoleEnvironment class. We use the GetLocalResource method to access the named storage path. And from then on, we're able to access files in it as we would any other file in the local file system. Notice here, we're using the File class to write and read text, just as we would with any other file. And this concludes the demo.
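The code shown in the demo follows this general shape; a minimal sketch, assuming a local storage resource named "LocalCache" has been defined in the role's settings (the resource and file names are placeholders):

```csharp
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

// Resolve the named local storage resource to a path on the host VM.
LocalResource cache = RoleEnvironment.GetLocalResource("LocalCache");
string filePath = Path.Combine(cache.RootPath, "data.txt");

// From here on, it's ordinary local file system access.
File.WriteAllText(filePath, "Hello from local storage");
string contents = File.ReadAllText(filePath);
```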

We may sometimes want to host multiple web applications on a single web role in order to save costs. We can do this by adding a new web application to the web role. We first create a new web application in the solution without creating an associated web role for it. We then modify the service definition to create a new web application within our web role, specifying the physical directory of the web application. In this case, the new web application created was called AdminApplication. The two web applications use the same endpoint, but are differentiated based on the host header, which differs per subdomain. In this case, the public web application is available through the public.cloudapplication.com domain name, while the admin web application is available through the admin.cloudapplication.com domain name.
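In the service definition, the two sites share one endpoint and differ only in their host headers, roughly as below (a sketch; the role name, site names, and physical directories are placeholders, while AdminApplication and the domain names come from the example above):

```xml
<!-- ServiceDefinition.csdef (fragment): two sites in one web role,
     bound to the same endpoint, differentiated by host header. -->
<WebRole name="WebRole1">
  <Sites>
    <Site name="PublicApplication" physicalDirectory="..\..\PublicApplication">
      <Bindings>
        <Binding name="Http" endpointName="HttpIn"
                 hostHeader="public.cloudapplication.com" />
      </Bindings>
    </Site>
    <Site name="AdminApplication" physicalDirectory="..\..\AdminApplication">
      <Bindings>
        <Binding name="Http" endpointName="HttpIn"
                 hostHeader="admin.cloudapplication.com" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="HttpIn" protocol="http" port="80" />
  </Endpoints>
</WebRole>
```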

When hosting a public-facing web application, we'll almost always want to use a public-facing hostname, rather than just the Azure-allocated name, which ends in cloudapp.net. There are two forms of custom domain records you can register: A Records and CNAME Records. A CNAME Record maps a specific domain to another domain, for example, from www.webapp.com to webapp.cloudapp.net. Because CNAMEs redirect to another domain, you do not have to worry about IP address changes. A Records allow us to map an entire domain, including wildcards identified with an asterisk, to an IP address. With A Records, you can therefore redirect all traffic, including subdomains such as mail.webapp.com or login.webapp.com, to a cloud service with a single rule.
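As a hypothetical illustration, the two record types might appear in a DNS zone like this (203.0.113.10 is a reserved documentation example address, not a real cloud service IP):

```
; CNAME: follow the Azure-allocated name, immune to IP changes.
www.webapp.com.    IN  CNAME  webapp.cloudapp.net.

; Wildcard A record: send all subdomains to one IP with a single rule.
*.webapp.com.      IN  A      203.0.113.10
```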

By using the dashboard of the cloud service, we're able to retrieve the IP address, which we can then use to create an A Record with our DNS provider. When using an A Record, it's important that the destination IP address doesn't change, which means we need to ensure that we don't delete the deployment in the production slot, otherwise a new IP address will be issued. In order to prevent this from happening, best practice when using A Records is to either create a reserved IP address, which can then be used throughout the lifetime of the application, or simply prefer CNAME Records.

Let's round up this section by briefly looking at caching. Azure cloud services have a built-in cache service called Managed Cache. It allows us to store any serializable data in the cache, such as XML data, binary data, or any .NET CLR objects. The cache comes in two forms: co-located, which means the cache lives on the same machines as your existing cloud service roles, sharing resources with them; or dedicated, which means you create a number of role instances dedicated to the cache. It comes with three pricing tiers, for sizes from 120 megabytes up to 150 gigabytes. However, you should note that the Azure Managed Cache service is being decommissioned at the end of 2016. The recommended alternative is to use the managed Redis Cache service now offered in the Azure Portal.

Stay tuned for the next section, where we'll discuss how to deploy a cloud service.

About the Author

Isaac has been using Microsoft Azure for several years now, working across the various aspects of the service for a variety of customers and systems. He’s a Microsoft MVP and a Microsoft Azure Insider, as well as a proponent of functional programming, in particular F#. As a software developer by trade, he’s a big fan of platform services that allow developers to focus on delivering business value.