Azure Compute Infrastructure
Microsoft Azure offers services for a wide variety of compute-related needs, including traditional compute resources like virtual machines, as well as serverless and container-based services. In this course, you will learn how to design a compute infrastructure using the appropriate Azure services.
Some of the highlights include:
- Designing highly available implementations using fault domains, update domains, availability sets, scale sets, availability zones, and multi-region deployments
- Ensuring business continuity and disaster recovery using Azure Backup, System Center DPM, and Azure Site Recovery
- Creating event-driven functions in a serverless environment using Azure Functions and Azure Logic Apps
- Designing microservices-based applications using Azure Container Service, which supports Kubernetes, and Azure Service Fabric, which is Microsoft’s proprietary container orchestrator
- Deploying high-performance web applications with autoscaling using Azure App Service
- Managing and securing APIs using Azure API Management and Azure Active Directory
- Running compute-intensive jobs on clusters of servers using Azure Batch and Azure Batch AI
Learning Objectives
- Design Azure solutions using virtual machines, serverless computing, and microservices
- Design web solutions using Azure App Service
- Run compute-intensive applications using Azure Batch
Intended Audience
- People who want to become Azure cloud architects
- People preparing for a Microsoft Azure certification exam
Prerequisites
- General knowledge of IT architecture
If you need to deploy a web application that doesn’t have a microservices architecture, then Azure App Service Web Apps is usually the best way to do it. It’s a managed service, so you don’t have to worry about provisioning and maintaining the underlying infrastructure. It’s also very flexible because you can write your application in ASP.NET, ASP.NET Core, Java, Ruby, Node.js, PHP, or Python. Web Apps runs on Windows with IIS, but there’s also a Linux version that I’ll talk about later.
Setting up continuous integration and deployment is easy too because it’s integrated with Azure DevOps, GitHub, Bitbucket, Docker Hub, and Azure Container Registry.
Another great feature for software developers and testers is deployment slots. Before you put a new version of an application into production, you’ll want to test it. With App Service, you can create a deployment slot called “testing” or “staging” and another one called “production”. Then you can test the new version of your application in the staging slot, and when you’re satisfied that it works, you can swap it with the production slot, and it will be deployed as the production version. If you discover problems after doing this, you can swap it again and the old version will be back in production. Deployment slots can really reduce the stress of upgrading your apps. This feature is only available in the Standard service tier and above, though. I’ll tell you more about the service tiers in a minute.
Although Web Apps takes care of the underlying infrastructure, you do have control over how it scales. There are two ways to do this: scaling up and scaling out. Scaling up means adding more resources, such as CPU, memory, and disk space. You do that by choosing a higher App Service pricing tier. As you go up in service tiers, you can have more apps, more disk space, and more instances.
The number of instances is how you scale out. For example, in the Premium tier, you can have up to 20. To spin up extra instances, you can either do it manually or automatically. To do it automatically, you choose a metric, such as CPU Percentage, and set the App Service to autoscale if that metric reaches a particular threshold, such as 80%. When the average CPU percentage across all of the existing instances reaches that threshold, App Service will add more instances. How many more is determined by the value you set in the scale rule. This is a percentage, so if you set it to 25, it will add 25% more instances when the CPU average hits 80%.
You should also set a rule that tells it to scale in when the CPU average drops below a certain level, so you aren’t wasting resources during quiet times. You can even have different rules for different levels. For example, you could tell it to scale by 25% if the CPU average reaches 60%, and by 40% when the CPU average reaches 80%.
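To make the arithmetic concrete, here is a small sketch of how percentage-based scale-out and scale-in rules behave. The thresholds and percentages mirror the examples above; the scale-in threshold of 30% and the function name are illustrative assumptions, not an Azure SDK API.

```python
import math

def evaluate_autoscale(instances, avg_cpu, max_instances=20):
    """Return the new instance count after applying example rules:
    scale out 25% at 60% CPU, 40% at 80% CPU, and scale in 25%
    below 30% CPU (the 30% scale-in threshold is an assumption)."""
    if avg_cpu >= 80:
        instances = math.ceil(instances * 1.40)   # add 40% more instances
    elif avg_cpu >= 60:
        instances = math.ceil(instances * 1.25)   # add 25% more instances
    elif avg_cpu < 30:
        instances = max(1, math.floor(instances * 0.75))  # scale in by 25%
    return min(instances, max_instances)          # Premium tier caps at 20
```

So with 8 instances running, an 85% CPU average scales out to 12 instances, while a quiet period scales back in, keeping you from paying for idle capacity.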
By default, the autoscaling rules you set are always in effect, but you can run them on a schedule if you want. You can also have different rules in effect at different times.
Scaling isn’t the only way to increase a web application’s performance. Another way is to use Azure Redis Cache, which stores recently accessed data in memory. This is especially helpful for caching database records that get accessed multiple times. Another example is caching a user’s session information instead of always having to retrieve it from a cookie in the user’s browser. Getting data from an in-memory cache can significantly speed up web applications.
Another way to speed things up is to use a Content Delivery Network (or CDN). When you put static content from your website into a CDN, users can retrieve it from a nearby edge server rather than from your website. This also reduces the load on your web app. If your entire website is static, then you can serve it from a CDN without even having to deploy any compute resources, such as a web app or VM.
A CDN is especially useful for reducing latency in geographic locations far from where your web app resides. There are lots of other great uses for it too, like streaming videos or distributing firmware updates to IoT devices.
Autoscaling, Azure Redis Cache, and Azure CDN are complementary approaches to increasing performance. You can use all three at the same time to get the best results.
The next thing to look at to make your web app perform reliably is high availability. An Azure App Service Web App is only deployed in one region, so to ensure that it can survive a regional outage, you need to deploy a standby copy of the web app in another region. Ideally, you should deploy it in the region that’s paired with the first region. If there’s a major outage, Microsoft will prioritize bringing up at least one region in every pair.
Under normal circumstances, you’ll want all of your users to go to the web app in the primary region, but when there’s an outage, you’ll want to fail over to the secondary region. Azure Traffic Manager can handle this sort of requirement using priority routing, which used to be called failover routing.
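Priority routing boils down to a simple rule: send all traffic to the healthy endpoint with the lowest priority number. Here is a minimal sketch of that selection logic; the endpoint names and health flags are illustrative, not Traffic Manager's actual API.

```python
def pick_endpoint(endpoints):
    """Pick the healthy endpoint with the lowest priority value,
    mimicking Traffic Manager's priority (failover) routing."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints available")
    return min(healthy, key=lambda e: e["priority"])["name"]

# Primary region is down, so traffic fails over to the standby.
endpoints = [
    {"name": "webapp-eastus", "priority": 1, "healthy": False},
    {"name": "webapp-westus", "priority": 2, "healthy": True},
]
```

Once the primary region's health probe passes again, the same rule routes traffic back to it automatically.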
If you have a database behind your web app, which is usually the case, then you’ll have to configure a failover solution for that as well. For example, if you’re using Azure SQL Database, then you’ll need to configure active geo-replication.
Even if you’ve set up a secondary region, you’ll still want to configure backups so you can recover from data corruption problems. You can create backups manually, but of course, it’s much better to automate them. It’s quite easy to do this in the Azure portal. You go into the Backup Configuration page for your app and tell it which storage container to use. To protect against regional failures, you’ll want to use geo-redundant storage. Then you turn on “Scheduled backup”, tell it how often to run the backup, when to start the schedule, and how long to retain the backups. If your app uses a database, you can enter the connection string and it’ll back that up too. The backup and restore feature is only available in the Standard service tier and higher.
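The retention setting above determines how long each backup is kept before it's eligible for deletion. A rough sketch of that windowing logic, with a hypothetical helper name and illustrative dates:

```python
from datetime import date, timedelta

def expired_backups(backup_dates, today, retention_days):
    """Return the backup dates that fall outside the retention window
    and are therefore eligible for deletion."""
    cutoff = today - timedelta(days=retention_days)
    return [d for d in backup_dates if d < cutoff]
```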
Speaking of service tiers, let’s have another look at those. With the Free and Shared plans, your apps share VMs with other customers, so they’re only meant to be used for development and testing. The Basic tier is the first “real” tier, but if you scroll down, you’ll see that it’s missing a lot of really important features, like deployment slots, autoscaling, Traffic Manager, and backups. So you shouldn’t use it for apps that always need to be available.
While we’re here, I should mention what the “Always On” feature does. Normally, when a web app is idle for a period of time, it gets unloaded, which saves resources. If you need an app to stay loaded all the time, then you can enable “Always On”. The main reason to do this is if you have long-running background jobs.
The main advantage of Premium over Standard is that it provides 250 GB of disk space and up to 20 instances. You can go even higher than that with the Isolated tier, which gives you an isolated, dedicated environment. You’d use this if you need more than 20 instances, secure network isolation, or instances with a high memory-to-CPU ratio.
There’s also an option to run App Service on Linux. It’s kind of confusing the way it’s shown in this chart because it looks like it’s separate from the other tiers, but in fact, you can choose from Basic, Standard, Premium, and Isolated for Linux too. You can’t choose Free or Shared, but that isn’t much of a loss. It does have a different feature set, though, which is why it has its own column in the table. In my opinion, the most important missing feature is Traffic Manager. Nonetheless, if you have an application that needs to run on Linux, then App Service for Linux will work very well.
So it’s easy to host web apps using Azure App Service. How about hosting web APIs? App Service makes that easy too. First, you create an API App in Azure App Service. Then you create a REST API using a development tool, such as Visual Studio. Once your code is ready, you push it to the Azure API App. In Visual Studio, you do this by clicking “Publish” and selecting the API App you created earlier. Your API is now hosted in App Service.
By the way, you don’t have to develop your API in .NET. You could develop it using Java, PHP, Node.js, or Python, if you prefer.
There are several ways to secure your API, but they all involve Azure Active Directory (or AAD).
The most basic way is to use AAD alone. To get AAD to handle authentication, you need to register both the API and the client applications using it in your Azure AD tenant. Then you grant permissions in AAD for the client applications to call the API. The applications can then use OAuth2 access tokens to call the API.
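To make the OAuth2 flow concrete, here is a hedged sketch of the client-credentials request a registered client application would send to Azure AD to obtain an access token for the API. The tenant ID, client ID, secret, and resource URI are placeholders; a real app would POST this payload (for example with the requests library) or, more simply, use Microsoft's ADAL/MSAL libraries.

```python
def build_token_request(tenant_id, client_id, client_secret, resource):
    """Build the AAD v1 token endpoint URL and form payload for an
    OAuth2 client-credentials grant (app-to-app, no user sign-in)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    payload = {
        "grant_type": "client_credentials",
        "client_id": client_id,          # the client app's registration
        "client_secret": client_secret,  # or use a certificate instead
        "resource": resource,            # the API registered in AAD
    }
    return url, payload
```

The access token that comes back is then sent in the API request's Authorization header as a bearer token.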
A variation of this method is to use the AAD B2C service. It’s designed for customer-facing web and mobile apps, so it has additional capabilities, such as letting users sign up for an application using social media accounts. You still need to register your client application in your Azure AD tenant, just like with the classic AAD method.
You can add yet another layer on top by using the API Management service. As I mentioned earlier, this service is a gateway to your APIs. You can configure it to use either classic AAD authentication or B2C authentication to secure your APIs. This gives you all of the advantages of the API Management service, such as monitoring and rate limiting, while letting you secure your APIs using your preferred method.
And that’s it for web applications.
About the Author
Guy launched his first training website in 1995 and he's been helping people learn IT technologies ever since. He has been a sysadmin, instructor, sales engineer, IT manager, and entrepreneur. In his most recent venture, he founded and led a cloud-based training infrastructure company that provided virtual labs for some of the largest software vendors in the world. Guy’s passion is making complex technology easy to understand. His activities outside of work have included riding an elephant and skydiving (although not at the same time).