Introduction & Overview
Creating an App Service Web App
Creating Web Service Containers
Configuring a Web App
You’ve got an idea for a great web app, or maybe you’ve already started building it. The next question is: how are you going to get it out there on the internet?
In this course, you will learn how you can quickly and easily set up a website and publish your app to the world with Azure App Service. Of course, web apps are a lot more complex and varied than just HTML pages, and we will see how App Service supports a range of programming languages, frameworks, and even operating systems. We will explore features that greatly simplify application deployment and management, as well as those that increase your app’s functionality, like authentication and accessing on-premises data. App Service, like other Azure products, has a raft of tools for monitoring and logging, so you can make sure your app is performing optimally.
For any feedback, queries, or suggestions relating to this course, please contact us at firstname.lastname@example.org.
- Deploy apps using the Azure App Service
- Create a web app using the Azure Portal
- Create a web app using Visual Studio
- Understand the configuration and diagnostic capabilities available from Azure App Service
- Understand the advanced features of the service such as container deployment and deployment slots
This is a beginner-level course suited to developers or anyone wanting to know how to deploy web apps to the Azure cloud.
To get the most from this course, you should have a basic understanding of the software development lifecycle, while knowing how to code would be a plus.
Course source code
Visual Studio 2019 with .NET Core 3.1 was used for the demonstrations in this course.
I want to turn our attention to functionality and configuration available through App Service to improve your app’s performance and flexibility: namely, scaling your app up and out, configuration settings, and cross-network integration. This is by no means all that is on offer, but it illustrates what I think are significant and useful features.
As I mentioned when setting up the first app, the App Service Plan determines not only the amount of computing resources you have but also the features available to your app. One of the features not available on the F1 free tier is auto-scaling. As you can see here under Configuration, Azure is telling me to upgrade to a higher SKU, by which it means plan, to enable additional features. Scaling up, or down, is Azure-speak for changing your App Service Plan. On the Dev/Test tab, there are three plans, starting with F1 Free, which has no additional features, through to B1, which includes custom domains with SSL and manual scaling. Once we go to the Production tab, all features are included, and the difference between plans boils down to capacity, hardware, and cost.
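The portal steps above can also be scripted with the Azure CLI. This is a minimal sketch; the resource group and plan names are hypothetical placeholders you would replace with your own.

```shell
# Hypothetical names -- substitute your own resource group and plan.
# Scale the App Service Plan up from the F1 free tier to B1 Basic:
az appservice plan update \
  --resource-group my-rg \
  --name my-plan \
  --sku B1

# Moving up again to a production tier (e.g. S1 Standard) unlocks
# features such as autoscale and deployment slots:
az appservice plan update \
  --resource-group my-rg \
  --name my-plan \
  --sku S1
```

Scaling down works the same way: pass a cheaper SKU to the same command.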
Once we have scaled up to a production plan, we now have the option of scaling out with Autoscale. Scaling out is the term given to increasing the number of hardware instances available for your app to run on. As you can see, we have two options here. Manual scale, where we just tell Azure to use a fixed number of instances all the time, no matter the traffic or load. Custom autoscale, which lets our app access more instances or computing resources as required. The key here is the “as required”. You define rules based on system metrics to tell Azure when you want to increase the number of instances.

I’m going to set up a simple rule to increase the number of instances by 1 when the average CPU load is greater than 80%. The time aggregation dropdown list determines how I will measure the metric. Average makes the most sense for CPU usage, although I could have gone with maximum. As you can see, there are ample metrics to base your rule on. Next, you can select which instances you want to apply the rule to. In the case of CPU usage, you would probably want to select all instances. There’s no point in adding instances when just one of them is exceeding 80% CPU usage, although you would hope that load sharing would make sure usage is evenly spread, and for that very reason, I’m not going to check “enable metric divide by instance count”. Next, I set the greater than 80% condition with the operator dropdown and change duration to 15 minutes.

It is the duration period, along with the time aggregation, that determines the sensitivity of the rule. Imagine if I had set the time aggregation to maximum and the duration to 1 minute. New instances would be unnecessarily spun up for brief fluctuations in server load, before load balancing has had a chance to work. In fact, Azure is aware of this and has a cool down parameter: the number of minutes after the action, in this case adding an instance, before it starts to resample the rule’s metric.
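The same autoscale setting and scale-out rule can be sketched with the Azure CLI. The resource group, plan, and autoscale names below are hypothetical placeholders; the metric, threshold, window, and cooldown match the rule built in the portal above.

```shell
# Hypothetical names. First create an autoscale setting that targets
# the App Service Plan, with instance limits of 1 to 5:
az monitor autoscale create \
  --resource-group my-rg \
  --resource my-plan \
  --resource-type Microsoft.Web/serverfarms \
  --name my-autoscale \
  --min-count 1 --max-count 5 --count 1

# Add the scale-out rule: average CpuPercentage over a 15-minute
# window greater than 80% adds one instance, then a 10-minute
# cooldown before the metric is resampled:
az monitor autoscale rule create \
  --resource-group my-rg \
  --autoscale-name my-autoscale \
  --condition "CpuPercentage > 80 avg 15m" \
  --scale out 1 \
  --cooldown 10
```

The `--condition` string packs the metric, operator, threshold, aggregation, and duration into one expression, mirroring the separate dropdowns in the portal.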
Auto-scaling isn’t just about managing load and performance but also managing cost, and you can use the same rule functionality to scale back instances, and consequently cost, when they aren’t needed. Here I’m implementing a rule to scale back instances by 1 when average CPU usage drops below 10%. The instance limits below the rules section make sure I don’t drop to zero instances during periods of very low CPU use.
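To see how the scale-out rule, scale-in rule, instance limits, and cooldown interact, here is a toy simulation of the logic described above. It is a sketch, not Azure’s actual algorithm; the thresholds (80% out, 10% in), limits (1–5 instances), and cooldown length are the hypothetical values used in the demo.

```python
class Autoscaler:
    """Toy model of the autoscale rules described above."""

    def __init__(self, min_instances=1, max_instances=5, cooldown=3):
        self.instances = min_instances
        self.min = min_instances
        self.max = max_instances
        self.cooldown = cooldown        # samples to skip after an action
        self.remaining_cooldown = 0

    def sample(self, avg_cpu):
        """Feed one aggregated CPU reading; return the instance count."""
        if self.remaining_cooldown > 0:
            # During cooldown the metric is not resampled.
            self.remaining_cooldown -= 1
            return self.instances
        if avg_cpu > 80 and self.instances < self.max:
            self.instances += 1             # scale out by 1
            self.remaining_cooldown = self.cooldown
        elif avg_cpu < 10 and self.instances > self.min:
            self.instances -= 1             # scale in by 1
            self.remaining_cooldown = self.cooldown
        return self.instances


scaler = Autoscaler()
readings = [85, 90, 95, 95, 50, 5, 5, 5, 5, 5]
history = [scaler.sample(cpu) for cpu in readings]
print(history)  # → [2, 2, 2, 2, 2, 1, 1, 1, 1, 1]
```

Note how the cooldown stops the 90% and 95% readings from spinning up further instances, and how the minimum-instance limit stops the sustained low readings from scaling to zero.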
So far, I’ve been adding rules to the default scale condition. It is possible to have multiple conditions, and those additional conditions have a schedule parameter that lets you specify when the condition is active.
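In the CLI, scheduled conditions correspond to autoscale profiles with a recurrence. As a hedged sketch, assuming the same hypothetical resource names as before, a weekend profile that pins capacity to a single instance might look like this:

```shell
# Hypothetical: a recurring weekend profile that fixes the plan at
# one instance while the default condition handles weekdays.
az monitor autoscale profile create \
  --resource-group my-rg \
  --autoscale-name my-autoscale \
  --name weekend \
  --count 1 \
  --timezone "Pacific Standard Time" \
  --start 00:00 --end 23:59 \
  --recurrence week sat sun
```

Rules can then be attached to this profile rather than the default one, giving each schedule its own scaling behavior.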
Hallam is a software architect with over 20 years’ experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.