This course shows you how to use Azure's Cloud Services platform offering.
By the end of this course, you'll have gained a firm understanding of the key components that comprise the Azure Cloud Services platform. You will achieve the following learning objectives:
- How to design and develop an Azure Cloud Service.
- How to configure and deploy a cloud service.
- How to monitor and debug a cloud service.
This course is intended for individuals who wish to pursue the Azure 70-532 certification.
You should have work experience with Azure and general cloud computing knowledge.
This Course Includes
- 1 hour and 2 minutes of high-definition video.
- Expert-led instruction and exploration of important concepts surrounding Azure cloud services.
What You Will Learn
- A general overview of Azure cloud services.
- How to design, deploy, and manage cloud services.
- How to monitor your cloud services and debug them.
Hello and welcome back. Let's now take a look at how we're able to create cloud-based applications designed to run as part of an Azure cloud service.
In this section, we'll review all the areas needed to create our own cloud services. We'll look at how we're able to install and configure the SDK on our machine to enable us to develop a new web or worker role. We'll look at how we're able to design these cloud services to be resilient in the face of increased demand and failure. Finally, we'll also look at how we can perform certain operations at startup, so that we're able to configure our application environment, such as installing dependencies on the virtual machine required by our application.
The Azure SDK is the local tooling that provides us with all of the resources we need to design and develop cloud services on our own machines. It provides project templates, which greatly simplify the development process, and a .NET SDK for interacting with cloud services. It contains a useful emulator, which can be used to run cloud services locally, and, finally, it contains Visual Studio extensions to configure, package, and publish a cloud service, all from within Visual Studio.
The Azure SDK is available directly from Microsoft through the Azure downloads page, as well as via direct links. It is also distributed through the Web Platform Installer and can be found by searching for Azure SDK.
As part of Azure cloud services, there are two distinct types of roles that we can run: web roles and worker roles. A web role allows us to deploy applications that run under IIS, such as web applications written using ASP.NET. Alternatively, we can run long-running applications using worker roles. Worker roles allow us to run compute-intensive applications in the cloud, potentially as background tasks for a web role.
Let's quickly gain an overview of the different logical components of a cloud service. Firstly, there's the cloud service itself. This acts as a logical container for a set of individual services. Each service is known as a role. For example, a service that manages incoming orders might be one role; another that hosts a user-facing web application might be another. Each role has instances underneath it. An instance is a running copy of the role's application in a stand-alone virtual machine. A role might scale from one to hundreds of instances.
Now let's see a demo of how we can create a new cloud service project in Visual Studio, before adding a web role and a worker role to the solution. We'll start by creating a new project within Visual Studio by going to File, New Project, or clicking New Project on the Start page. We'll then navigate to the C# section and the Cloud subsection and choose the Azure Cloud Service project type. We can name it as we wish, and clicking OK will create the project. We're now given the opportunity to add new web roles or worker roles. By selecting a type and pressing the right arrow, we can build up an overview of the core components of our application. We'll leave adding any projects for the moment and add them independently in the next stage.
We now have an empty cloud service. Notice the two configuration files: the service configuration file, ending with .cscfg, and the service definition file, ending with .csdef. Let's now add a new web role to the project. We right-click the project and add a new web role project. This provides us with the standard ASP.NET project creation dialog and enables us to customize the web project exactly as we would any other ASP.NET project. Our web project has now been created. Notice in Solution Explorer we have the web role project, which is a standard ASP.NET project, and the web role in the Roles node of the cloud service.
Let's now create a worker role. You'll see that we now have a new WorkerRole1 C# project, and the worker role has been added to the Roles node of the cloud service. A worker role looks very much like a standard Windows service. They have several similar methods. For example, we have the Run method, an OnStart method, an OnStop method, and the RunAsync method. These will get called by the cloud service runtime when the appropriate events happen within the application life cycle. And this concludes the demo.
Let's quickly recap the two core files that our cloud service project contains. The first is the service definition file, the .csdef file. There's one of these per cloud service. It contains the definition of the service itself, i.e., the list of roles within the service, irrespective of environment settings. It also contains details of the roles themselves, including the virtual machine size to use, as well as the configuration setting keys. The second is the service configuration file, the .cscfg file. It exists once per environment: once for local deployment using the Azure emulator, and once per cloud deployment. Each file contains details such as the number of instances of each role, as well as the configuration values for each role. So you might have your worker role point to a local SQL database when running in local mode, but use a publicly accessible SQL database when running in the cloud.
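To make the relationship between these two files concrete, here's a minimal sketch. The service name, role name, and setting values are hypothetical; only the general shape follows the Azure service definition and service configuration schemas:

```xml
<!-- ServiceDefinition.csdef: one per cloud service.
     Declares the roles, VM size, and the *names* of settings. -->
<ServiceDefinition name="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <ConfigurationSettings>
      <Setting name="DatabaseConnectionString" />
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>

<!-- ServiceConfiguration.Cloud.cscfg: one per environment.
     Supplies instance counts and the *values* for each setting. -->
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WorkerRole1">
    <Instances count="3" />
    <ConfigurationSettings>
      <Setting name="DatabaseConnectionString"
               value="Server=tcp:myserver.database.windows.net;..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```

A local `ServiceConfiguration.Local.cscfg` would carry the same setting names but point at local resources instead.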
Just as when designing and developing web applications, we need to ensure that the applications we write build upon the scalable and fault-tolerant foundations that the Azure platform provides. To manage this, we should apply a number of programming patterns, many of which also apply when developing Azure web applications. We'll now run through a number of these common patterns, which can be used to help ensure our cloud services are both scalable and fault tolerant.
Prefer static over dynamic content where possible. By using static content, the server has to do less work, ensuring that we're able to handle more requests per second than we would otherwise manage. This might mean pre-generating content that we know to be fixed or slowly changing. Serving static content generally means lower CPU utilization and fewer round trips to data stores such as SQL Server. Conversely, repeatedly generating the same content dynamically on a per-request basis is inefficient and degrades the end-user experience through slower response times.
Prefer storing frequently accessed data in a cache to avoid having to hit the database. Rather than repeatedly communicating with the database for frequently accessed data, we cache it and so limit the pressure on the database. You might use an in-memory cache in code for small data sets, or an external cache such as Redis Cache. Both will relieve pressure on the true data source, for example the SQL database, and will provide improved performance.
To check whether our services are available for communication, we should be able to ping a known, healthy endpoint so that we can understand whether our service is available or not. For example, you may want to expose an HTTP or TCP endpoint that can be queried to return the status of the service. This might be a simple HTTP service running within the context of our application that simply returns an HTTP 200 OK message or similar.
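As a sketch of such a health endpoint, here's a tiny standard-library HTTP server in Python (the `/health` path is an assumed convention; a .NET service would expose the equivalent via IIS or a self-hosted listener):

```python
# Minimal health-check endpoint sketch: GET /health returns 200 OK.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # A real check might also verify database/queue connectivity
            # before reporting healthy.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"OK")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

def start_health_server(port=0):
    """Start the server on a background thread; port 0 = pick a free port."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # server.server_port holds the bound port
```

A monitoring system (or a load balancer) can then poll this endpoint and take the instance out of rotation when it stops answering 200.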
As calls to remote services can fail during an operation, we need to provide a means to recover. Compensating transactions allow us to revert changes made before a subsequent operation failed. In other words, they provide a means of rolling back modifications to data in the system.
For example, let's say our application wants to atomically create an order and take payment. If the attempt to take payment fails, we should issue a new operation to remove the order. Separately, we can create distinct paths for fast reads and slow writes. When writing data into a database, it typically needs to go through several stages of validation and architectural layers before finally being inserted. Separating reads and writes ensures that reads can be served quickly without needing to go through all of the same layers as writes. This is known as the Command Query Responsibility Segregation pattern, or CQRS.
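The order-and-payment example above can be sketched as a compensating transaction. All of the operations here are hypothetical in-memory stand-ins for real order and payment services:

```python
# Compensating-transaction sketch: if a later step fails, run the
# compensating action for the step that already succeeded.
orders = {}

def create_order(order_id):
    orders[order_id] = "created"

def cancel_order(order_id):
    # The compensating action: undo the earlier create_order.
    orders.pop(order_id, None)

def take_payment(order_id, payment_ok):
    if not payment_ok:
        raise RuntimeError("payment declined")

def place_order(order_id, payment_ok):
    create_order(order_id)
    try:
        take_payment(order_id, payment_ok)
    except RuntimeError:
        cancel_order(order_id)  # compensate: roll the order back
        return False
    return True
```

With more steps, the compensations for every completed step would run in reverse order.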
As worker roles allow us to create scalable background processing solutions, we can use the same principles as when developing WebJob-based solutions; many of the lessons learned in designing WebJobs can be reused with worker roles. By making all consumers listen on the same channel, we can scale out the number of workers and continue to remove messages from the queue. This is known as the competing consumers pattern. A common example is to use a queue to store work items; multiple workers then listen to that queue, taking messages off it in an equally distributed fashion. By creating priority queues, we're able to process more important work before other work, known as the priority queue pattern.
In this example, we have two queues to handle different types of work: a separate queue for high-priority work items ensures that those messages will be dealt with more quickly than those on the standard queue. We may choose to have dedicated workers for each queue, or have all workers listen to all queues.
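Both patterns can be sketched with Python's standard-library queues: several threads compete for work from one shared queue, and using a priority queue means more important items are handed out first. The work item names are hypothetical; in Azure you'd use a Service Bus or Storage queue rather than an in-process one:

```python
# Competing-consumers sketch: three workers drain one shared queue.
# A PriorityQueue hands out lower-numbered (more urgent) items first.
import queue
import threading

work = queue.PriorityQueue()
processed = []
lock = threading.Lock()

def worker():
    while True:
        priority, item = work.get()
        if item is None:           # sentinel: shut this worker down
            work.task_done()
            return
        with lock:
            processed.append(item)
        work.task_done()

# A low-priority item is enqueued first, but the high-priority
# item (priority 1) is still handed out ahead of it.
work.put((2, "send-newsletter"))
work.put((1, "charge-customer"))

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
work.join()                         # wait for the real items to finish
for i in range(len(workers)):
    work.put((100 + i, None))       # one sentinel per worker
for w in workers:
    w.join()
```

Adding capacity is then just a matter of starting more worker threads (or, in Azure, more role instances) against the same queue.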
We may experience sporadic usage of our application. By putting messages into a queue, we're able to smooth this usage out over periods when we have a lighter workload. In other words, we might have periods when we have too much demand to deal with. In this case, we can use a queue to store work that needs to be done when resources become available. Later on, when demand has slowed down, we can catch up with the outstanding work from the earlier busy period. This is known as the queue-based load leveling pattern.
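The load-leveling idea can be sketched without any threading at all: a burst of requests lands in a buffer immediately, and the worker drains it at its own fixed rate per processing "tick". The rate and job names are arbitrary illustration values:

```python
# Queue-based load leveling sketch: bursts are absorbed by the queue
# and processed at a steady rate the worker can actually sustain.
from collections import deque

buffer = deque()

def enqueue_burst(n):
    """A spike of n requests arrives at once; none are rejected."""
    for i in range(n):
        buffer.append(f"job-{i}")

def tick(rate=5):
    """Process at most `rate` jobs per tick, whatever the backlog."""
    done = []
    for _ in range(min(rate, len(buffer))):
        done.append(buffer.popleft())
    return done
```

A burst of 12 jobs against a rate of 5 per tick would drain as 5, 5, then 2: the spike never overwhelms the worker, it just takes a few ticks to clear.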
Whilst we typically try to parallelize work across multiple workers, it's inevitable that we will sometimes need a certain degree of coordination across all workers, in order to ensure that tasks happen in the correct order, or not at the same time as each other. In this case, by electing a leader worker, we're able to coordinate specific actions through that leader. This is known as the leader election pattern.
When dealing with workflows requiring multiple stages to work together one after another, we can create an external service whose responsibility is to schedule tasks to run on certain agents, whilst a supervisor monitors for any failed steps in the workflow. This is known as the scheduler-agent-supervisor pattern.
As in all web applications, cloud services are subject to failure, as are the services that support the application. In order to deal with this, it's important to retry failed operations with a backoff in order to handle temporary failures. This means that we should retry an operation that may fail due to, for example, an outage, with a progressively longer time period between attempts, before eventually giving up. As in all web applications, we can use the Transient Fault Handling Application Block to make this easier for us.
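The core of what such a library does is just a loop with exponentially growing delays; here's a sketch (the delays and attempt count are arbitrary illustration values):

```python
# Retry-with-backoff sketch: retry a flaky operation with progressively
# longer delays between attempts, then give up by re-raising.
import time

def retry_with_backoff(operation, attempts=4, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise                                # out of attempts
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...
```

For example, an operation that fails twice with a transient error and then succeeds would be called three times before returning its result. Production-grade implementations also add jitter to the delays and only retry error types known to be transient.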
Before our application executes within a cloud service, we can run one or more scripts to modify the execution environment and ensure any dependencies are preconfigured. These scripts are simple Windows batch scripts, which allow us to execute anything we want, either under the same permissions as the application or under elevated permissions. This allows us to perform tasks ranging from setting environment variables through to configuring settings of the web server.
Let's now look at an example startup task. Let's suppose our application uses a third-party library that reads its license information from a setting in the registry. We can use a startup task to create a custom registry entry on the virtual machine that the cloud service runs on during initialization. Note that we need to add this script file to the cloud service project, as well as ensuring it gets copied to the output as an artifact of the build, so that it gets packaged up into the cloud service.
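As a sketch, such a script could be a small batch file like the following. The file name, registry path, value name, and license key are all hypothetical; only the `reg add` command itself is a standard Windows tool:

```bat
@echo off
REM AddLicense.cmd - hypothetical startup script: write the third-party
REM library's license key into the registry before the role starts.
reg add "HKLM\SOFTWARE\ThirdPartyLib" /v LicenseKey /t REG_SZ /d "ABC-123" /f
if %ERRORLEVEL% neq 0 exit /b %ERRORLEVEL%
exit /b 0
```

Exiting with a non-zero code signals failure, which for a simple task causes the role to be recycled and the startup tasks to run again.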
To execute the startup task, we need to specify the task within the service definition file for the specific role. In this example, we specify that we run the add-licensing command file as an administrator. The commandLine attribute specifies what to run, meaning we can use any Windows application or batch script; and since we're able to pass arguments as well, we can execute other kinds of scripts, such as a Python or PowerShell script, as long as we have the appropriate runtime installed in the worker role.
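A startup task entry in the service definition might look like the following sketch; the role and script names are hypothetical, while the `Startup`/`Task` element shape, `executionContext`, and `taskType` attributes follow the Azure service definition schema:

```xml
<WorkerRole name="WorkerRole1" vmsize="Small">
  <Startup>
    <!-- executionContext="elevated" runs the script as administrator;
         taskType="simple" blocks role startup until the task finishes. -->
    <Task commandLine="AddLicense.cmd"
          executionContext="elevated"
          taskType="simple" />
  </Startup>
</WorkerRole>
```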
There are two distinct types of startup tasks: simple and asynchronous. Simple tasks execute one after another, in the order they're specified in the service definition file. If any task fails, the role is recycled, and the tasks are executed in order again. Asynchronous tasks are instead executed in a fire-and-forget mode: we set them up, but then they're left to run whilst the cloud service starts up. We can further split asynchronous tasks into foreground and background jobs. A foreground job will prevent the role from recycling whilst it's still executing.
Stay tuned. In the next section, we'll demonstrate how we can configure cloud services.
About the Author
Isaac has been using Microsoft Azure for several years now, working across the various aspects of the service for a variety of customers and systems. He’s a Microsoft MVP and a Microsoft Azure Insider, as well as a proponent of functional programming, in particular F#. As a software developer by trade, he’s a big fan of platform services that allow developers to focus on delivering business value.