
Long Running Applications

Overview
Difficulty: Intermediate
Duration: 1h 14m
Students: 112

Description

This course is focused on the portion of the Azure 70-534 certification exam that covers designing an advanced application. You will learn how to create compute-intensive and long-running applications, select the appropriate storage option, and integrate Azure services in a solution.

Transcript

Welcome back. In this lesson, we'll be talking about long-running applications: Worker roles for scalable processing, and stateless components to accommodate scale. Let's start by talking about stateless components.

A stateless component is one that expects its state to be determined by its input, not by data stored on the component itself. For example, a web server that stores user sessions locally is a stateful application. This becomes a problem when you need to scale. Imagine you have a Windows server running IIS and hosting your web application, and the app requires users to log in before they can use it. With a single server handling the demand, this really isn't a problem. However, if you have multiple servers handling requests behind a load balancer and the session is stored locally on each web server, then when a user authenticates on one server and the load balancer sends their next request to another server, that user is going to have to log in and authenticate again.

Since there are so many legacy frameworks out there that are stateful, most Cloud load balancers allow for session affinity, which is also called sticky sessions. What session affinity does is ensure that requests from a given user will always be sent to the same server. There are other options besides session affinity to enable stateful applications to work in the Cloud, such as centralized session stores. This involves having a session storage mechanism such as Redis or Memcached, and all of your web servers look to it as their session database. This is a better mechanism than session affinity, because traffic can be distributed more evenly.
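As a minimal sketch of the centralized session store idea, the snippet below uses the redis-py client to read and write session data keyed by session ID, so any web server behind the load balancer can handle any request. The hostname, session ID, and TTL are illustrative assumptions, not values from the course.

```python
import json
import redis

# Connect to a shared Redis instance; every web server points at the
# same host, so session state is no longer local to any one machine.
# (Hostname and TTL below are illustrative assumptions.)
store = redis.Redis(host="sessions.example.internal", port=6379)

SESSION_TTL_SECONDS = 1800  # expire idle sessions after 30 minutes

def save_session(session_id: str, data: dict) -> None:
    # Serialize the session and let Redis handle expiration.
    store.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(data))

def load_session(session_id: str) -> dict | None:
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw is not None else None

# Any server behind the load balancer can now recognize the same session:
save_session("abc123", {"user": "alice", "authenticated": True})
print(load_session("abc123"))
```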

However, the Cloud way is to go stateless, and the Cloud has been one of the largest driving forces toward stateless design. The use of things such as JSON Web Tokens has enabled applications to get rid of server-side sessions and go stateless. JSON Web Tokens are outside the scope of this course; however, what they do is give a signed token to an authenticated user, and the user passes it back in the authorization header of each request. That token can be verified on any web server. It's basically a less verbose version of SAML, if you're familiar with that. The value of stateless components is that they facilitate scaling out rather easily, because all that tends to be required is adding a new node to the load balancer. So, statelessness helps with long-running tasks by allowing you to add capacity to handle the workload.
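To make that token flow concrete, here's a minimal sketch using the PyJWT library: the server signs a token for an authenticated user, and any server holding the same signing key can verify it without a shared session store. The secret and claim values are assumptions for illustration.

```python
import datetime
import jwt  # PyJWT: pip install pyjwt

# Shared signing secret; in practice this comes from configuration,
# not source code. (Value here is an illustrative assumption.)
SECRET = "replace-with-a-real-secret"

def issue_token(username: str) -> str:
    # Sign a token once the user has authenticated successfully.
    claims = {
        "sub": username,
        "exp": datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(hours=1),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Any web server with the key can verify the signature and expiry;
    # no server-side session lookup is needed.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("alice")           # handed to the client once
print(verify_token(token)["sub"])      # checked on every request
```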

Imagine you have an app that allows users to upload images and get back information about each image: things such as facial detection and product recognition, as well as any text in the image. In this scenario, the user uploads an image through the website, and that image is stored as a Blob. Then, a message is placed in a Queue containing things like the image path, the user that uploaded it, and any additional request info that may be important. Now, if you have a pool of stateless Workers that can grab the next item in the queue, process the image, and then remove the item from the queue on completion, you can add and remove workers from the pool as needed. If you were to try to process that image on the web server itself and something went wrong, let's say the VM went offline, that request would be lost. So, statelessness allows for better scalability, and when we scale out, we can handle whatever traffic we're sent.
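As a sketch of the producer side of that pipeline, the snippet below uses the Azure Storage SDKs for Python to store an uploaded image as a Blob and drop a message describing the work onto a Queue. The container name, queue name, and connection string are illustrative assumptions.

```python
import json
from azure.storage.blob import BlobClient
from azure.storage.queue import QueueClient

# Connection string, container, and queue names are illustrative.
CONN_STR = "<storage-account-connection-string>"

def submit_image(user: str, filename: str, image_bytes: bytes) -> None:
    # 1. Store the raw image as a blob.
    blob = BlobClient.from_connection_string(
        CONN_STR, container_name="uploads", blob_name=filename)
    blob.upload_blob(image_bytes)

    # 2. Enqueue a small message describing the work; the workers
    #    never need to talk to the web server that took the upload.
    queue = QueueClient.from_connection_string(CONN_STR, "image-tasks")
    queue.send_message(json.dumps({
        "image_path": f"uploads/{filename}",
        "user": user,
        "requested": ["faces", "products", "text"],
    }))
```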

In our example, it was the web servers that were responsible for placing the data in the queue. However, really, this could be anything. A common source for adding work to a queue is the Azure Scheduler, a service that, as the name suggests, allows us to schedule tasks. The ability to run scheduled tasks in a Cloud-native way saves the headache of setting up virtual machines and using a local scheduler. With this, you can do things such as run a task of your own creation that, maybe, pulls comments out of your product's social media feeds and then puts them into a queue for processing. These sorts of long-running, periodic tasks are commonplace, and they're made easier when you don't need to manage the scheduling infrastructure yourself.
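To illustrate the kind of job a scheduler might kick off, here's a minimal sketch of the task body itself: collect new comments and feed them to the worker pool's queue. The fetch_new_comments helper is hypothetical, and in practice something like Azure Scheduler would invoke this on a cadence; the queue name and connection string are illustrative.

```python
import json
from azure.storage.queue import QueueClient

CONN_STR = "<storage-account-connection-string>"  # illustrative

def fetch_new_comments() -> list[dict]:
    # Hypothetical helper: pull recent comments from your product's
    # social media feeds. Stubbed out here; a real implementation
    # would call the relevant feed APIs.
    return []

def scheduled_task() -> None:
    # The body of the job the scheduler triggers on each tick:
    # enqueue any new comments for the worker pool to process.
    queue = QueueClient.from_connection_string(CONN_STR, "comment-tasks")
    for comment in fetch_new_comments():
        queue.send_message(json.dumps(comment))
```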

Okay, we briefly mentioned long-running tasks in the context of the Azure Scheduler. However, there are other options. We can use Azure Batch, which we covered in a previous lesson. There's also Cloud Services, which is currently part of the exam objectives. Cloud Services offers a middle ground between virtual machines and web applications. And because it's more platform as a service than infrastructure as a service, you don't need to worry about things such as operating system patches. Cloud Services provides two distinct types of workloads: Web roles and Worker roles. A Web role allows us to deploy applications that run under IIS, such as web applications written using ASP.NET. Alternatively, we can run long-running applications using Worker roles. Worker roles allow us to run compute-intensive applications in the Cloud, potentially as background tasks for our Web role. In our previous example of image processing, we could use a Worker role to handle the processing of an image uploaded by a user through our web application. Let's check out the different logical components of a Cloud service.

First, we have the Cloud service itself. This is a logical container for a set of individual services. Each service is known as a Role. For example, a service that manages incoming orders might be one Role; a service that hosts a user-facing web application might be another. Each Role has Instances underneath it. An Instance is a running copy of a Role's application in its own standalone virtual machine. A Role might scale from one to hundreds of Instances. So, every Cloud service has a number of Instances, and all of them reside behind a load balancer, which distributes traffic across the nodes. To handle different scenarios, we have three types of endpoints. The first type is the Input endpoint, where the load balancer decides which Instance will receive the traffic. We also have the option to use an Instance endpoint, which allows us to direct traffic to a specific Cloud service Instance. And the third is the Internal endpoint, which allows Cloud service Instances to communicate with each other without having to go through the public internet.

Cloud Services allows us to easily scale out our Web and Worker roles, and since it's the Worker roles that let us handle long-running tasks, that's what we'll focus on. We can use auto-scaling to increase the number of Worker role instances based on demand. And because we can scale out as needed, Worker roles make for a very easy way to handle long-running tasks. In the scenario from earlier, I mentioned that users would upload images, we'd add a task to the Queue to process them, and we'd have a pool of stateless Workers to handle the processing. The word Worker in that context was just meant to generically denote some form of processor; however, a Worker role would actually fit that part quite well, as the sketch below shows.
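Here's a minimal, hedged sketch of what such a stateless worker's loop might look like, conceptually the same job a Worker role's run loop performs, written against the Azure Storage queue SDK for Python. The process_image helper, queue name, and connection string are illustrative assumptions; actual Worker roles are typically implemented in .NET.

```python
import json
import time
from azure.storage.queue import QueueClient

CONN_STR = "<storage-account-connection-string>"  # illustrative

def process_image(task: dict) -> None:
    # Hypothetical helper: run facial detection, product recognition,
    # and text extraction against the blob named in the task.
    pass

def run_worker() -> None:
    queue = QueueClient.from_connection_string(CONN_STR, "image-tasks")
    while True:
        # Receiving a message hides it from other workers for the
        # visibility timeout; if this worker crashes mid-task, the
        # message reappears and another worker picks it up, so the
        # request isn't lost.
        for msg in queue.receive_messages(visibility_timeout=300):
            process_image(json.loads(msg.content))
            # Delete only after successful processing.
            queue.delete_message(msg)
        time.sleep(5)  # queue empty; back off briefly before polling

if __name__ == "__main__":
    run_worker()
```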

Okay, let's wrap up this lesson. Long-running processes are often required, and Azure has some options to facilitate them. However, we may need to re-engineer our code to be stateless for them to really work well. Azure has the Scheduler to handle recurring tasks, Azure Batch to handle long-running parallel computing tasks, and Worker roles from Cloud Services to serve as the compute for tasks in a less managed way than the previous two services offer.

Alright, we've talked about availability at different points throughout the learning path. We're going to cover it again in relation to specific services throughout this course; however, I want to dedicate a few lessons to availability and scalability and cover them at a generic level. I know this learning path is specific to Azure, but concepts such as availability and scalability are cross-ecosystem issues. So, if these are topics you already have a strong grasp of, feel free to skip through these lessons. Otherwise, let's check out availability in our next lesson.

About the Author


Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.

When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.