Analyzing Resource Utilization on Azure






This course looks at how to capture log data and metrics from Azure services and feed this information into different locations for processing. We take a look at diagnostic logging, which can help troubleshoot services, and at creating queries and alerts based on that data. We also look into Azure Advisor, cost consumption reporting, and how we can baseline resources. This is an introduction to these advanced areas of Azure services.

Learning Objectives

  • Understand how to use and configure diagnostic logging for Azure services
  • Gain an understanding of Azure Monitor and how to create and manage alerts
  • Review cost consumption reporting and how to create scheduled reports
  • Investigate different methods for baselining resources

Intended Audience

  • People who want to become Azure cloud architects
  • People preparing for Microsoft’s AZ-100 or AZ-300 exam


Prerequisites

  • General knowledge of Azure services

For more MS Azure-related training content, visit our Microsoft Azure Training Library.


Here we can see the management dashboard when we log into the azure.cloudyn.com website. At the top here we have some different tabs. These are pre-canned reports to save you time. We can also look at a range of resource reports and some reports around optimization. What we're going to focus on today is understanding what caused this spike in usage. So we have an on-demand spike here. This chart shows reserved instances and the different ways we can distribute compute instances. With this on-demand spike, obviously a lot of machines got spun up, so we want to understand what that cost was. In this case, we're going to start by looking at the cost analysis and actual cost over time, to see if we can work out what this spike cost. We can see here that we have two different subscriptions, the labs and the critical infrastructure, and that over the same period, between the 9th and the 11th, we spent $600 over two days. So our goal here is to understand what that was.
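The same spike-hunting can be done offline against exported cost data. As a rough sketch (the date keys, dollar figures, and the 2x-median threshold below are illustrative assumptions, not Cloudyn's actual detection logic):

```python
from statistics import median

def find_spend_spikes(daily_spend, factor=2.0):
    """Flag days whose cost exceeds `factor` times the median daily spend.

    `daily_spend` maps date strings to cost in USD. The threshold factor
    is an assumption for illustration only.
    """
    baseline = median(daily_spend.values())
    return {day: cost for day, cost in daily_spend.items()
            if cost > factor * baseline}

# Hypothetical daily figures showing one on-demand spike:
spikes = find_spend_spikes({
    "06-08": 100.0,
    "06-09": 400.0,  # the spike we want to explain
    "06-10": 110.0,
    "06-11": 200.0,
    "06-12": 105.0,
})
```

Here only the 400-dollar day clears twice the median, so it is the one flagged for investigation.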

Let's try another report here: cost analysis, cost over time. Now we're starting to get a closer look at the actual resources. We can see we've got a fairly steady spend, and then these two spikes. Over here we've got some different filters. In this case what we want to do is drill into the resource groups and see which resource group caused this solution to cost us more. There are two different resource groups shown here. There's an Azure Stack production resource group at $641 in a day, and we can see multiple resource groups deployed. It looks like some Couchbase, OpenAM, and a bunch of Linux instances and systems, which look different from the rest of the month. So this is a fairly useful report and has told us where we want to look, and again, you can drill down into a lot more options and filters and find all the information you need. In our case, we would just like to schedule this report to be sent to us via email, or we can also save it to a storage account.
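The resource-group drill-down is essentially a group-by-and-sum over the flat usage records a cost export produces. A minimal sketch, assuming each exported record carries `resource_group` and `cost` fields (the field names and figures are illustrative, not the exact Cloudyn export schema):

```python
from collections import defaultdict

def cost_by_resource_group(records):
    """Sum cost per resource group from flat usage records.

    Each record is assumed to be a dict with `resource_group` and
    `cost` keys, as a CSV or JSON cost export typically flattens to.
    """
    totals = defaultdict(float)
    for rec in records:
        totals[rec["resource_group"]] += rec["cost"]
    return dict(totals)

# Illustrative records echoing the demo's $641 production-day spend:
totals = cost_by_resource_group([
    {"resource_group": "azure-stack-prod", "cost": 600.0},
    {"resource_group": "azure-stack-prod", "cost": 41.0},
    {"resource_group": "labs", "cost": 25.0},
])
```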

Before we do that, we need to set up the recipient list and/or storage account list. I'll start with the storage account. Under the gear icon, we can see we have one provider already set up, and if we click Add New, you can see you have options for a connection string and container, and you also have the choice of using AWS. In our case we already have one report store set up, so we're going to use that. Next we want to check our email recipient list. We can see we already have two recipient lists, for budget exceeded and defaults. In our case, we're going to create a new list, call it demolist, and give ourselves an email address here. So now that's set up, we can go back to our report, which was “cost over time”, and we can see the resource group filter is still applied.

If we go to Actions, we can see Schedule a Report. For scheduling, we can choose to send via email and select our demolist. We can choose to send the report as email content and attach it as an Excel attachment. We also have the option to save it to storage, picking our infrastructure storage account, report store, and it also asks for the file type, JSON or CSV. In this case, we'll pick JSON. We want the report to run weekly, so again you've got your choices here; maybe on Monday we get that information. You also have the option to run the report based on metric thresholds. If you're using budgeting (in this case, we can see our budget here: our total budget is 12,000, and the current cost is 8,000), we could choose a cost vs. budget metric and specify when we want these alerts sent. In this case, we're not going to use that feature, so we'll just click Save. Now if we go to My Tools, we can see that the “cost over time” report has been scheduled; you can see the clock icon with the tick. This allows us to manage which reports are scheduled and remove them if we no longer need them.
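Under the hood, a cost-vs-budget metric like this is a simple threshold check. A hedged sketch using the figures from the demo (the 80% default threshold is an assumption for illustration; Cloudyn lets you pick the metric and threshold yourself):

```python
def budget_alert(current_cost, total_budget, threshold_pct=80.0):
    """Return True once spend reaches `threshold_pct` percent of budget.

    The 80% default is an illustrative assumption, not a Cloudyn default.
    """
    return current_cost >= total_budget * threshold_pct / 100.0

# With the demo's figures (budget 12,000, current cost 8,000),
# an 80% alert threshold of 9,600 has not yet been crossed:
fired = budget_alert(8000, 12000)
```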

So that was a very brief overview of the Cloudyn interface. There's a lot of power in here to analyze existing data, and it's up to you to go explore it.


About the Author


Matthew Quickenden is a motivated Infrastructure Consultant with over 20 years of industry experience supporting Microsoft systems, products, and solutions. He works as a technical delivery lead, managing resources, translating customer requirements and expectations into architecture, and building technical solutions. In recent years, Matthew has focused on helping businesses consume and utilize cloud technologies, with an emphasis on leveraging automation to rapidly deploy and manage cloud resources at scale.