SAP on Azure - Monitoring and Optimization

Azure Advisor

Overview
Difficulty
Intermediate
Duration
33m
Students
74
Ratings
5/5
Description

SAP landscapes are substantial and complex deployments that require constant monitoring to ensure optimal and efficient operation. It's not practical to manually keep an "eye" on virtual machines and network resources to ensure they aren't overwhelmed by spikes in workload or sitting idle or underutilized, consequently wasting money. Azure provides several services and tools that assist in monitoring infrastructure use in near real-time with automated alerts and resource scaling. Azure provides built-in integration with SAP database and application logs, providing a complete picture of overall system performance. This course explores these Azure services and how you can use them to monitor and optimize your SAP workloads.

Learning Objectives

  • Get a foundational understanding of Azure Monitor and Network Insights
  • Learn how to set up basic network monitoring
  • Understand what Azure Site Recovery is and how to implement it through the Azure portal
  • Learn about SAP Hardware and Cloud Measurement Tools as well as the SAP Application Performance Standard
  • Get an overview of Azure Advisor and how to optimize Azure ExpressRoute

Intended Audience

  • Anyone who wants to learn how to monitor and optimize their SAP landscapes using Azure services
  • Those studying for Microsoft's AZ-120 exam

Prerequisites

To get the most out of this course, you should understand how to operate SAP workloads on Azure. If you are new to this, we recommend you take the following courses first:

Transcript

Azure Advisor is accessible through the Azure hamburger menu at the top left of the homepage. It is a portal displaying advice on performance, optimization, security, reliability, and cost for your whole subscription, based on metrics and log analytics. It looks like there are no suggestions or advice for performance or operational excellence, which is excellent. There are a couple of recommendations related to cost, which turns out to be very handy because that is exactly what I want to talk about. So, let's drill down into the cost recommendations.

Straightaway, we can see three suggestions that will save you money with regard to virtual machines. Reserved instances essentially mean buying in bulk, or committing to a long-term rental rather than pay-as-you-go. Currently, a reserved instance term is either one or three years, and Microsoft says it is possible to save up to 72% over a pay-as-you-go subscription. Having said that, it is unlikely that you will be using pay-as-you-go with a full SAP deployment, which brings us to a couple of alternative cost-saving measures for VMs. If a VM is underutilized in terms of its load, you can scale it down, or as they say here, right-size it. If it is underutilized because there are periods when it's not being used at all, then you could shut it down for those times.
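If you prefer the command line, the same cost recommendations can be pulled with the Azure CLI. A minimal sketch, assuming you are already signed in with `az login` and have a default subscription selected:

```shell
# List Advisor cost recommendations for the current subscription
az advisor recommendation list \
    --category Cost \
    --output table

# The same data as JSON, reduced to impact and problem summary per recommendation
az advisor recommendation list \
    --category Cost \
    --query "[].{impact:impact, problem:shortDescription.problem}" \
    --output json
```

The other Advisor pillars can be queried the same way by swapping the `--category` value (for example `Performance` or `Security`).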

Let's look at reducing VM uptime first. On the virtual machine's overview page, you can just hit the stop button. This will significantly reduce costs, because you won't be charged for the compute component of the virtual machine, only for the associated disk storage. However, manually stopping VMs isn't practical as an ongoing operational strategy, except perhaps for dev or test environments.
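From the command line, the equivalent of the portal's stop button is `az vm deallocate` (note that `az vm stop` alone shuts down the OS but leaves the compute allocated and billing). A sketch, assuming a resource group and VM name of your own:

```shell
# Deallocate: releases the compute resources, so only disk storage is billed
az vm deallocate --resource-group myResourceGroup --name mySapAppVm

# Start it again when needed
az vm start --resource-group myResourceGroup --name mySapAppVm

# Check the power state ("VM deallocated" vs "VM running")
az vm get-instance-view \
    --resource-group myResourceGroup \
    --name mySapAppVm \
    --query "instanceView.statuses[?starts_with(code,'PowerState')].displayStatus" \
    --output tsv
```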

An alternative to manually stopping and starting virtual machines is to create a schedule that will put the machine to sleep between specified times. 

There are several ways to get to this functionality through the portal, but they all involve using an Azure Automation runbook. You can go into Create a resource and search for Start/Stop VMs during off-hours, as I'm doing here. Having selected the workspace and the automation account, it's just a case of configuring the start/stop times for the VM. If you already have an automation account set up, you can go into it, and in the left-hand menu under related resources there is a Start/Stop VM function. Clicking on learn more about and enable the solution takes us back to the Start/Stop resource we have just been looking at. Enter the target resource group or groups, and in the VM exclude list enter any VMs that you don't want this schedule to apply to. Then it's a case of setting the date you want the schedule to come into effect and what time you want the VM to start, followed by when the VM should stop each day and when the schedule should expire. You can also send an email notification when the VM is started and stopped.
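As a lightweight alternative to the runbook solution, the same off-hours behaviour can be approximated with a cron-driven script that deallocates every VM in a resource group in the evening. A sketch only, with an assumed resource group name (`sap-dev-rg`) and exclude list:

```shell
#!/usr/bin/env bash
# stop-vms.sh -- deallocate all VMs in a resource group except those excluded.
# Run from cron, e.g.:  0 19 * * 1-5  /opt/scripts/stop-vms.sh
set -euo pipefail

RESOURCE_GROUP="sap-dev-rg"   # assumed resource group name
EXCLUDE="sap-db-vm"           # space-separated list of VMs that must stay running

for vm in $(az vm list --resource-group "$RESOURCE_GROUP" --query "[].name" --output tsv); do
    if [[ " $EXCLUDE " == *" $vm "* ]]; then
        echo "Skipping $vm (on exclude list)"
        continue
    fi
    echo "Deallocating $vm"
    az vm deallocate --resource-group "$RESOURCE_GROUP" --name "$vm" --no-wait
done
```

A mirror-image script calling `az vm start` would bring the machines back in the morning. The runbook solution remains the better choice when you want the schedule, notifications, and exclusions managed inside Azure itself.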

Another way to get a VM to start and stop on a schedule is through the VM resource itself by clicking on tasks under automation in the left-hand menu. Here we can see tasks to power off, deallocate, and start a virtual machine. Because this schedule will still use automation runbook functionality, we need to specify an account for the schedule to run under.

Going through to the schedule configuration, we can see it is broadly the same, but with slightly more control and options over times and frequency.

If there is no downtime per se, but the workload has changed in volume, you can resize the VM. Within the settings menu, you can select Size to resize the virtual machine, but be aware that resizing a running VM will cause it to restart. Resizing is very simple; you just select the new size and click the resize button. Again, manual intervention isn't practical, and it would be better for virtual machine compute power to scale up and down on demand. Enter the scale set. Creating a virtual machine scale set is similar to creating a single virtual machine, with a couple of exceptions. The basics tab of the VM scale set looks almost identical to that of a single VM, except we can select multiple availability zones in the initial deployment.
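The resize can also be scripted. A sketch with hypothetical VM and size names; the restart caveat applies here just as it does in the portal:

```shell
# See which sizes are available for this VM in its current cluster
az vm list-vm-resize-options \
    --resource-group myResourceGroup \
    --name mySapAppVm \
    --output table

# Resize (the VM restarts if it is running)
az vm resize \
    --resource-group myResourceGroup \
    --name mySapAppVm \
    --size Standard_E8s_v5
```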

Because the scale set is multiple machines, it needs to go behind a load balancer, so you can create one here or use an existing load balancer. We also have an additional tab called Scaling, where we can set the policy as either manual or custom. Under custom, we see that we have a minimum and a maximum number of instances, and we can scale out and back in based on CPU usage thresholds. This default policy says that if the total CPU load exceeds 75% for more than 10 minutes, another instance of the VM is added to the scale set, and that if the total CPU load falls to 25% or less, the number of instances is reduced by one. I'm going to leave this on manual for the moment because once the scale set has been created, there is a more sophisticated and nuanced set of rules we can use to manage the number of instances in the scale set. I'll just go ahead with all the default settings from here on in and create the scale set. Okay, the scale set has been created with two instances.
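The same scale set can be created from the CLI, which provisions the load balancer for you by default. A minimal sketch with assumed names and sizes:

```shell
# Create a scale set with two instances spread across availability zones;
# the CLI creates a Standard load balancer in front of it by default
az vmss create \
    --resource-group myResourceGroup \
    --name mySapVmss \
    --image Ubuntu2204 \
    --vm-sku Standard_D4s_v5 \
    --instance-count 2 \
    --zones 1 2 3 \
    --upgrade-policy-mode automatic \
    --admin-username azureuser \
    --generate-ssh-keys
```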

Let's see what options we have under scaling. Clicking on custom auto-scale shows that the minimum, maximum, and default values are all two, and there are no rules yet. I'll change the maximum value to 4 and add a rule. As I said, there are far more options when setting the scale rule here than in the creation wizard. I'll leave time aggregation as average, but there is the choice of minimum, maximum, sum, last, and count. In terms of metrics, there is a lot more to choose from here than just CPU load. I will, however, stick with the average CPU percentage load for this demonstration and set it to scale up by one instance if usage is 75% or greater for 10 minutes or more. The cooldown says, let's not make any more changes to the scale set for five minutes after a scaling event. This allows the scale set to re-establish some equilibrium, giving the virtual machines time to respond to the extra load that triggered the initial scale event. Not only can we increase the number of machines by one, but we can increase it by a percentage or increase the count to a particular number of instances. Scaling up isn't going to help us reduce costs, so we need another rule to scale down or in when the load reduces. It's the same procedure as before. I'll add another rule, but this time, when the average CPU usage is less than 40% for 20 minutes or more, I will reduce the number of instances by one. For this to be really effective as a cost-saving measure, I'll need to reduce my minimum number of instances from two to one.
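The rules configured above can be replicated with `az monitor autoscale`, which is handy for keeping scaling policy in source control. A sketch using the same thresholds, with assumed resource names:

```shell
# Create an autoscale profile for the scale set: min 1, max 4, default 2 instances
az monitor autoscale create \
    --resource-group myResourceGroup \
    --resource mySapVmss \
    --resource-type Microsoft.Compute/virtualMachineScaleSets \
    --name sapVmssAutoscale \
    --min-count 1 --max-count 4 --count 2

# Scale out by one instance when average CPU is above 75% for 10 minutes,
# then wait 5 minutes before evaluating further scale events
az monitor autoscale rule create \
    --resource-group myResourceGroup \
    --autoscale-name sapVmssAutoscale \
    --condition "Percentage CPU > 75 avg 10m" \
    --scale out 1 \
    --cooldown 5

# Scale back in by one instance when average CPU is below 40% for 20 minutes
az monitor autoscale rule create \
    --resource-group myResourceGroup \
    --autoscale-name sapVmssAutoscale \
    --condition "Percentage CPU < 40 avg 20m" \
    --scale in 1
```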

About the Author
Hallam Webber
Software Architect
Students
19795
Courses
48
Learning Paths
7

Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.