Analyzing Resource Utilization on Azure

Cloudyn

The course is part of these learning paths

AZ-103 Exam Preparation: Microsoft Azure Administrator

Contents

Welcome
Azure Advisor Cost Recommendations
Resource Baseline
Monitoring Cost Consumption
Cost Management Report
Cloudyn (5m 16s)
Conclusion
Overview
Difficulty: Intermediate
Duration: 54m
Students: 471

Description

This course looks at how to capture log data and metrics from Azure services and feed this information into different locations for processing. We take a look at diagnostic logging, which can help you troubleshoot services and create queries and alerts based on that data. We also look at Azure Advisor, cost consumption reporting, and how to baseline resources. This course aims to be an introduction to these advanced areas of Azure services.


Learning Objectives

  • Understand how to use and configure diagnostic logging for Azure services
  • Gain an understanding of Azure Monitor and how to create and manage alerts
  • Review cost consumption reporting and how to create scheduled reports
  • Investigate different methods for baselining resources

Intended Audience

  • People who want to become Azure cloud architects
  • People preparing for Microsoft’s AZ-100 or AZ-300 exam

Prerequisites

  • General knowledge of Azure services


For more MS Azure-related training content, visit our dedicated MS Azure Training Library.

Transcript

So here we can see the management dashboard when we log into the azure.cloudyn.com website. We can see at the top here we have some different tabs. These are pre-canned reports to save you time. We can also look at a bunch of resource reports, and some reports around optimization. What we're going to focus on today is trying to understand what this spike in usage was. So we have an on-demand spike here. This chart shows reserved instances and the different ways we can distribute compute instances, and with this on-demand spike here, obviously a lot of machines got spun up, so we're trying to understand what that cost was. In this case, we're going to look at the cost analysis, and actual cost over time to start with. We're trying to see if we can understand what this spike cost. So we can see here we have two different subscriptions, the labs and the critical infrastructure, and we can see over that same period, the 9th and the 11th, that we spent $600 over two days. So our goal here is to understand what that was. So let's try another report here: cost analysis, cost over time.
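The kind of spike we're hunting here can be sketched in code: given daily spend, flag the days whose cost sits well above the baseline. This is only an illustrative sketch, not anything Cloudyn exposes; the dates, dollar figures, and the `find_spikes` helper are all made up for the example.

```python
from statistics import median

# Hypothetical daily spend (USD) across both subscriptions.
daily_cost = {
    "2019-05-08": 110,
    "2019-05-09": 350,  # on-demand spike
    "2019-05-10": 120,
    "2019-05-11": 250,  # on-demand spike
    "2019-05-12": 105,
}

def find_spikes(costs, factor=2.0):
    """Flag days whose spend exceeds `factor` times the median day."""
    baseline = median(costs.values())
    return {day: cost for day, cost in costs.items() if cost > factor * baseline}

spikes = find_spikes(daily_cost)
print(spikes)                 # the 9th and the 11th stand out
print(sum(spikes.values()))   # roughly the $600 over two days from the demo
```

With these made-up numbers, the baseline (median) day is $120, so anything over $240 is flagged, which picks out exactly the two spike days.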

And now we're starting to get a closer look at the actual resources. So we can see we've got a pretty standard spend, and then these two spikes. Over here we've got some different filters. In this case, what we want to do is drill into the resource groups and see if we can understand which resource group caused this spike in cost. So there are two different resource groups shown here. There's an azure stack production group at $641 in a day, and we can see multiple resources deployed. It looks like some Couchbase, OpenAM, and a bunch of Linux instances and systems, which look different from the rest of the month. So this is a fairly good report and has told us what we wanted to know, and again, you can drill down into a lot more options and filters to find all the information you need. In our case, we would just like to schedule this report to arrive via email, or we can also save it to our storage account.
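The drill-down the report performs amounts to grouping cost line items by resource group and sorting by total. A minimal sketch of that idea with made-up data (the record shape, field names, and figures are assumptions for illustration, not Cloudyn's export format):

```python
from collections import defaultdict

# Hypothetical cost line items for the spike day.
records = [
    {"resource_group": "azurestack-production", "resource": "couchbase-vm1", "cost": 310.0},
    {"resource_group": "azurestack-production", "resource": "openam-vm1",    "cost": 331.0},
    {"resource_group": "labs",                  "resource": "test-vm",       "cost": 12.5},
]

def cost_by_resource_group(items):
    """Sum cost per resource group, highest spenders first."""
    totals = defaultdict(float)
    for item in items:
        totals[item["resource_group"]] += item["cost"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for group, total in cost_by_resource_group(records):
    print(f"{group}: ${total:.2f}")
```

Here the production group totals $641 for the day, matching the figure called out in the report, while the labs group stays in line with its usual spend.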

So before we do that, we need to set up the recipient list and/or storage account list. So I'll start with the storage account. So under the gear, we can see we have one provider already set up, and if we just do Add New, you can see here you have your options for a connection string, container, and you also have the choice of using AWS. In our case we already have one report store set up, so that's fine, we're gonna use that. Next we just want to check our email recipient list, so we have our recipient lists here. We can see we already have two recipient lists for budget exceeded and defaults. In our case, we're gonna create a new list, call it demo, demolist, and give ourselves an email here. So now that's set up. We can go back to our report, which was cost over time. And we can see the resource group filter's still on there. 

And if we go to Actions, we can see Schedule a Report here. For scheduling, we can choose to send via email and pick our demo list. So now we can see it will send as email content and attach that as an Excel attachment. We also have the option to save it to storage, pick our infrastructure storage account, report store, and also choose the file type: JSON or CSV data. In this case we'll pick JSON. And we want that to run weekly, so again you've got your choices here; maybe on the Monday we get that information. You also have the option to run the report based on metric thresholds, so if you're doing budgeting, in this case we can see our budget here, our total budget is 12,000, and current cost is 8,000. So we could choose a cost vs budget metric and choose when we send these alerts. In this case we're not going to use that feature, and we're just going to click Save. So now if we go to My Tools, we can see that the cost over time report has been scheduled; you can see the clock up there with the tick. This allows us to manage which reports are scheduled and remove them if we no longer need them. So that was a very brief overview of the Cloudyn interface. There's a lot of power in here to analyze existing data, and it's up to you to go and explore.
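The metric-threshold option boils down to a cost-versus-budget check that gates whether the scheduled report or alert goes out. A rough sketch of that check, using the $8,000-of-$12,000 figures from the demo (the function name and the threshold logic are assumptions for illustration):

```python
def should_send_report(current_cost, total_budget, threshold_pct=66):
    """Send the scheduled report only once spend crosses a budget percentage."""
    consumed_pct = 100 * current_cost / total_budget
    return consumed_pct >= threshold_pct

# Figures from the demo: $8,000 spent of a $12,000 total budget.
print(should_send_report(8000, 12000))  # about 66.7% consumed, so the alert fires
```

Early in the billing period the check stays quiet, and the report only starts arriving once spend approaches the budget, which is the point of tying delivery to a metric rather than just a weekly schedule.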

About the Author

Students: 498
Courses: 2

Matthew Quickenden is a motivated Infrastructure Consultant with over 20 years of industry experience supporting Microsoft systems, products, and solutions. He works as a technical delivery lead, managing resources, understanding and translating customer requirements and expectations into architecture, and building technical solutions. In recent years, Matthew has focused on helping businesses consume and utilize cloud technologies, with an emphasis on leveraging automation to rapidly deploy and manage cloud resources at scale.