
Design and Implement Applications for Scale and Resilience

Overview
Difficulty: Intermediate
Duration: 1h 39m
Students: 658

Course Description

This course will show you how to design and create web applications using the Azure App Service. 

Course Objectives

By the end of this course, you'll have gained a firm understanding of the key components that comprise the Azure App Service. Specifically, you will achieve the following learning objectives:

  • A solid understanding of the foundations of Azure web app development and deployment.
  • The use cases of Azure Web Jobs and how to implement them.
  • How to design and scale your applications to achieve consistent resiliency.

Intended Audience

This course is intended for individuals who wish to pursue the Azure 70-532 certification.

Prerequisites

You should have work experience with Azure and general cloud computing knowledge.

This Course Includes

  • 1 hour and 35 minutes of high-definition video.
  • Expert-led instruction and exploration of important concepts surrounding Azure Web Apps.

What You Will Learn

  • How to deploy and configure Azure web apps.
  • How to implement Azure Web Jobs.
  • Scaling and resilience with Azure web apps. 

Transcript

Hello, and welcome back. In this section, we're gonna be discovering how we can design and implement applications for scale and resilience. While Azure web apps provide the means to scale up and out, and enable resilience, applications must be built with scaling and resilience in mind. Infrastructure alone is not sufficient to enable these features. For example, a web app that relies on session state is harder to scale out. If the web app clients are tied to a specific instance of the application, scaling out to more instances might do little or nothing to increase capacity. Similarly, an application with poor error handling won't do well in resiliency scenarios.

In this section, we will discuss the key topics associated with the design and implementation of applications that are built for scale and resilience. We will discuss design patterns as well as transient fault handling and responding to throttling. We'll also cover the topic of Application Request Routing, which is a key topic on the certification path.

There are a number of cloud design patterns, which can be found on the Microsoft Patterns and Practices website; however, we will focus on the three common web app patterns that are a focus of this certification curriculum. These three patterns are throttling, retry, and circuit breaker. An example of throttling is when the server returns an HTTP 503 Service Unavailable response. This is the server indicating to the client that the server is overloaded. The pattern is to respond to increased load by restricting access to limited resources, thus preserving web app functionality for some users rather than the service failing outright, or functioning sporadically for all users. For example, we may want to preserve functionality for logged-in users while degrading the service for anonymous users.

To demonstrate, let's have a look at this simple diagram. We have an anonymous user as well as a logged-in user both making a request to the web app. In this case, due to a high load, the throttling logic is active, and the application can choose to prioritize logged-in users to ensure these known users receive the expected response. The anonymous users receive a response only if there are sufficient resources. In this case, there aren't any, and the anonymous user receives an HTTP 503 response.

The throttling pattern allows us to define a soft limit, a limit below the maximum system capacity. Once the soft limit is reached, the throttling rules are activated. The web app may simply start rejecting requests, or degrade functionality. One strategy is to make the more resource-intensive parts of the system unavailable. Using an internet forum as an example, the application might go into read-only mode allowing users to access content but not to submit new material. This pattern can complement auto-scaling since it takes time to create new instances when scaling out.
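The soft-limit logic described above can be sketched in a few lines. This is an illustrative example only, not Azure API code; the limit values, the request shape, and the function name are all assumptions made for the sketch:

```python
# Sketch of the throttling pattern: below a soft limit all requests are
# served; between the soft and hard limits, anonymous users are rejected
# with a 503 to preserve capacity for logged-in users; at the hard limit
# everyone is rejected. All numbers are illustrative.

SOFT_LIMIT = 80   # throttling rules activate here, below true capacity
HARD_LIMIT = 100  # absolute capacity of the instance

def handle_request(active_requests: int, authenticated: bool) -> int:
    """Return an HTTP status code for an incoming request."""
    if active_requests >= HARD_LIMIT:
        return 503                      # overloaded: reject everyone
    if active_requests >= SOFT_LIMIT and not authenticated:
        return 503                      # degrade service for anonymous users
    return 200                          # sufficient resources: serve normally
```

In a real application the "degrade" branch might instead switch to read-only mode, as in the forum example above, rather than rejecting the request outright.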

A throttling pattern can be used to handle resource limitations gracefully until the new instances are available to take the load. Where the application itself relies on other services, part of this pattern is ensuring that the application is aware of the throttling behavior of these underlying services in order to handle their throttling signals appropriately. For example, if the application relies on an external SQL database, the application should know how to handle common scenarios where the SQL database itself may return errors or time out requests when the load is too high.

The retry pattern, as the name suggests, is a pattern of retrying failed operations. Typically, the retry attempts will be transparent to the client, meaning that the client calling the application will not be aware that the application is handling retries to underlying services. A common example is handling transient SQL database errors, whether a failure to connect or an error when executing statements, with deadlocks being a fairly common example.

The retry pattern is particularly important in cloud hosting scenarios, where shared infrastructure is typically subject to greater variances in performance and transient faults than dedicated infrastructure. The key part of the retry pattern is implementing a smart retry strategy. Nonlinear back-off times with limited retry attempts are one example, though the best strategy typically depends on an understanding of the underlying service.
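A nonlinear back-off strategy with limited attempts might look like the following sketch. It is generic illustration, not the Transient Fault Handling Application Block; `TransientError` stands in for whatever exception your underlying service raises for transient faults:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a transient fault, e.g. a SQL deadlock or timeout."""

def retry(operation, max_attempts=4, base_delay=0.5):
    """Run operation, retrying transient faults with exponential back-off."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise                   # retries exhausted: surface the fault
            # back off 0.5s, 1s, 2s, ... plus a little jitter so many
            # clients don't all retry at exactly the same moment
            delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1)
            time.sleep(delay)
```

Note that only faults known to be transient are retried; a permanent error (bad credentials, malformed statement) should fail immediately rather than burn retry attempts.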

Another part of the retry pattern is having your application return specific error codes to your clients. As a service provider, this increases your application's scalability and resiliency, because the error codes enable the client to retry the operation in the event of a transient fault.

The circuit breaker pattern prevents attempted operations when it is known that those operations are likely to fail. Circuit breaker logic can come into effect when certain criteria are met, for example, when a certain number of consecutive errors occur, or when a certain number of failures occurs within a certain period of time. Circuit breaker logic fails the operation quickly without actually attempting the operation. For example, instead of sending the SQL statement to a SQL server, the logic would return an error without attempting to connect to the database or send the statement.

The circuit breaker will eventually be deactivated when a certain condition is met, for example, when enough time has passed since the last failed request, and operations will again be attempted normally. In a closed state, the circuit breaker logic will pass through the request normally; however, if the circuit breaker detects an issue, it will move into an open state, returning errors instead of invoking the service.

The circuit breaker may also be in a half-open state, allowing some requests through but not others. The circuit breaker enables efficient error handling: instead of performing a potentially expensive and time-consuming operation that is likely to fail, the application fails fast and efficiently. This reduces the error-handling cost, and also reduces the impact on the underlying services by not sending them requests that are likely to fail anyway.

The pattern can include intelligent recovery strategies such as waiting for a period of time before allowing any further operations, or by actively monitoring the underlying service, and waiting for it to become available again. As a service provider, it's beneficial to encourage clients to implement their own circuit breaker logic as this helps prevent request floods from clients during outages thereby increasing your application's resiliency and scalability.
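The closed, open, and half-open states described above can be sketched as a small class. This is a minimal illustration of the pattern, not production code; the thresholds, names, and error types are all assumptions:

```python
import time

class CircuitBreaker:
    """Minimal sketch of the circuit breaker pattern.

    Closed: calls pass through; consecutive failures are counted.
    Open: calls fail fast without touching the underlying service.
    Half-open: after a cooling-off period one trial call is allowed;
    success closes the circuit again, failure re-opens it.
    """

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None           # None means the circuit is closed

    def call(self, operation):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # cooling-off period elapsed: half-open, allow a trial call
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()   # trip the breaker
            raise
        self.failures = 0               # success: close the circuit again
        self.opened_at = None
        return result
```

Once tripped, the breaker returns its own error immediately, which is exactly the fail-fast behavior described above: the underlying service is not touched until the reset timeout allows a trial request through.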

When implementing transient fault handling and responding to throttling, it's important to know that existing client libraries and the Transient Fault Handling Application Block exist to avoid having to write the fault handling logic yourself. If you're accessing Azure SQL Database or Storage from a .NET client, these libraries already provide all of the logic required to identify and handle transient failures, including throttling. When consuming other services from a .NET client, the Transient Fault Handling Application Block provides a framework for defining the fault handling logic and retry policies, and can be used as a proxy for handling retries.

The Transient Fault Handling Application Block consists of two components, which can be sourced from NuGet. Firstly, there is the Transient Fault Handling Application Block itself, and secondly, where available, the integration package that contains logic for the specific service you are using, such as Azure SQL Server. Let's see how the Transient Fault Handling Application Block and SQL work together.

Let's have a look at some sample code that demonstrates the use of the Transient Fault Handling Application Block when using SQL Server, and we'll also look at a specific retry policy when accessing blob storage. Let's step through the key parts of this code. Firstly, we set up the retry strategy. In this case, we specify the retry count and retry interval for use in a fixed interval strategy. We provide the strategy to the retry manager, which we set as the default manager. We create what's called a policy, and supply that to the reliable SQL connection object, which is the proxy for our SQL database access, and we use our fixed interval retry policy when required.

Setting up the blob client is a little simpler, as the blob client API allows us to specify the retry policy directly, and the Microsoft WindowsAzure Storage assemblies already include all the required components. We define our retry policy as an exponential retry with a back-off delta and a maximum retry count, and then use the blob client as normal. The two blocks of code we've looked at are simple examples of building your application with resiliency in mind.

Application Request Routing, or ARR, is a feature of web apps that enables what's called sticky sessions between the client and the first instance that the client connects to. Typically, a cookie is given to the client that identifies the instance, and all subsequent requests from the client go to that instance regardless of other available instances, the current network conditions, or the load on that particular instance. This feature could be useful if the web app holds user state, state that may be difficult or expensive to move between different instances handling the client's requests; however, this feature is problematic for scaling and resilience.

Firstly, it introduces statefulness to web apps. When clients become attached to a specific instance, it diminishes the effectiveness of scaling out. Another instance may be available, but it won't share the load if clients continue making requests only against the first instance. Secondly, when a client becomes attached to a specific instance and that instance fails, the client will continue to try to communicate with the failed instance instead of trying one of the other instances. These factors create a headache for resilience, scale, and availability planning. It is possible to disable ARR by adding a header to the HTTP response. In a .NET web app, this is as simple as adding a small block to the web.config.
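The response header that disables ARR affinity is `Arr-Disable-Session-Affinity`. In a .NET web app it can be emitted for every response with a `customHeaders` entry along these lines (a minimal web.config fragment; the surrounding configuration is assumed):

```xml
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Tell ARR not to add an affinity cookie for this app -->
        <add name="Arr-Disable-Session-Affinity" value="true" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```

With this header present, the load balancer stops issuing the affinity cookie and requests are distributed across all available instances.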

This section covered topics of transient faults, and fault handling patterns. We discussed implementing and responding to throttling. We covered the Transient Fault Handling Application Block, which helps in the implementation of fault handling and retry strategies, and the related libraries that provide these features out of the box for some common Azure services. And finally, we covered Application Request Routing, and how to disable it.

This concludes the course, and we thank you for your time and attention. We hope this course has been helpful in preparing you for the Microsoft 70-532: Developing Microsoft Azure Solutions exam. The objectives we have covered are in preparation for the Design and Implement Web Apps section of the certification's key competencies. This included deploying and configuring web apps; configuring diagnostics, monitoring, and analytics; implementing web jobs; and configuring web apps for scale and resilience. Lastly, we covered the topic of designing and implementing applications for scale and resilience.

We encourage you to complete the hands-on labs and practice questions to review and strengthen your knowledge of the curriculum. We also encourage you to follow the guides included in this course to set up your own web apps and web jobs, and become more familiar with Azure's rich set of features. Thank you again, and we look forward to your success in the Azure domain.

About the Author

Isaac has been using Microsoft Azure for several years now, working across the various aspects of the service for a variety of customers and systems. He’s a Microsoft MVP and a Microsoft Azure Insider, as well as a proponent of functional programming, in particular F#. As a software developer by trade, he’s a big fan of platform services that allow developers to focus on delivering business value.