Azure Advisor Cost Recommendations
Monitoring Cost Consumption
Cost Management Report
This course looks at how to capture log data and metrics from Azure services and feed this information into different locations for processing. We take a look at diagnostic logging, which can help you troubleshoot services, and at creating queries and alerts based on that data. We also look into Azure Advisor, cost consumption reporting, and how we can baseline resources. This is an introduction to these advanced areas of Azure services.
- Understand how to use and configure diagnostic logging for Azure services
- Gain an understanding of Azure Monitor and how to create and manage alerts
- Review cost consumption reporting and how to create scheduled reports
- Investigate different methods for baselining resources
- People who want to become Azure cloud architects
- People preparing for Microsoft’s AZ-100 or AZ-300 exams
- General knowledge of Azure services
For more MS Azure-related training content, visit our Microsoft Azure Training Library.
So what we're going to focus on in this session is looking at the diagnostic data that we've sent to the storage account and to Log Analytics. We're going to interrogate the data, see what’s there, and run a query against it. First, we'll look at the storage account, because that's the easy step. We’ll use the Storage Explorer that's now built into the Azure portal; we could also use Storage Explorer as a standalone application. We'll expand the subscription and the storage blob where we put that data, and if we have a look here under blob containers, we can see we've got insights logs for the network security group event and the network security group rule counter. If we drill into one of these containers, we can see we've got the resource ID and subscription, and as we go down through this chain, the resource group, and Jenkins.
So, each one of these providers you have that sends to the storage account will have a separate folder. Then, continuing down, we've got the date broken out into year, month, day, hour, and minute, and at the end of that, we have a PT1H.json file. If we download that file, we can see inside it a lot of information about the logging that we've collected. So, if we scroll across, we can see we've got deny, direction, and priority. There's a lot of information specific to the resource type that we've recorded; it's very specific to a network security group. So these logs exist one file per folder, and you'll see the breadcrumb trail to each log here. If we go back to the tables under the storage account, we can see there's also Windows diagnostic information. This is from the event table. Under Windows metrics, we can see we've got counters: disk times, CPU times, memory counters, page faults. It's all the standard Windows counters, but they’re stored in this table, which we can query with many different tools. So, that's what we've got in the storage account.
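As a rough sketch, the hourly blobs that diagnostic settings write follow a path convention along these lines (the subscription ID, resource group, and NSG name below are placeholders, not values from this demo):

```
insights-logs-networksecuritygrouprulecounter/
  resourceId=/SUBSCRIPTIONS/<subscription-id>/
    RESOURCEGROUPS/<resource-group>/
      PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/<nsg-name>/
        y=2019/m=01/d=15/h=10/m=00/PT1H.json
```

Each segment of the path corresponds to one level of the folder chain walked through above, which is why there's exactly one PT1H.json file per hour, per resource.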
Next, we're going to go to the Log Analytics instance. Go over here and log in. If we go to the Azure Diagnostics Log Analytics instance, open Workspace Summary, and click add, we can jumpstart our query language here with some pre-canned Microsoft queries around network security groups. If I scroll down, we can find the Azure Network Security Group Analytics solution. This gives you a preview of what's included, and we'll click create. Once that's been created, if we go to the resource, we can see the chart here is actually displaying the data that we've already been collecting. Click on that summary chart and we'll drill into the solution itself, where we can see some additional queries. What we're really interested in here is the data backing these queries. This is all the data that we've set up the network security groups to emit, and we're learning how to query it ourselves. This is diagnostic information that we want to enrich and display through graphs, charts, or Power BI, in whatever way we need to present it. In this case, we'll drill into one of these existing queries, and what we can see is basically the query itself. There’s some information here on the side with other fields you could filter by. And if we look at the options here, we can export to CSV files or go to Power BI. In our case, we really want to understand how to write our own queries, because that's how we're going to get specifically what we're interested in seeing. So, from here we can go to the advanced analytics view, and on the left, we can see the Azure Diagnostics OMS workspace and then Log Management. In this case, we're just going to delete this query and start again. If we double-click on the table on the left and click run (we’ll start very simple), we can see we've been given 10,000 items (the result set has been limited).
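A minimal version of that starting query might look like the following sketch. It assumes the NSG diagnostics land in the default AzureDiagnostics table with a Category column, which is typical for resource diagnostics routed to a workspace, but check your own schema:

```
// Return raw NSG rule-counter rows; the portal caps results at 10,000
AzureDiagnostics
| where Category == "NetworkSecurityGroupRuleCounter"
```

Double-clicking a table name in the schema pane produces the bare table query; the `where` clause simply narrows it to one diagnostic category.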
If we expand this, we can see more information around that specific query, with all the different fields in the row. If we click on direction, we can find the blocked or allowed traffic. Down here, you can click on a value and that'll actually put in the filter for you: where type_s is equal to "allow". When we run this query, we can see we now have 4,564 records. We also want to summarize this. This query window has IntelliSense built in. If you've used languages like SQL, T-SQL, or PowerShell, there are a lot of similarities, although this query language is different. The IntelliSense will help us summarize and understand what we need to do. Here we can see, if we click summarize, there's some information explaining how to use it, so we can do “count by” and then the field we want. In our case, we want to summarize and count records by direction. And we can see we've had as many packets going in as going out. The next thing we're really interested in doing is seeing this in time boxes. What we're going to use is a bin. Bins allow us to aggregate this data into buckets of time. So, if we do bin, we'll need TimeGenerated, and we'll do one-hour buckets, so we just type in 1h there. And if we run that query, we can now see the total traffic going in and out over time, and if we click chart, we can see the packets coming in and out with that data split out. So, once we've got our specific queries, we can create alert rules, again based on the diagnostic information you’re trying to find. You can generate charts to put back on the dashboard yourself. There are a lot of other things you can do once you have this query over the diagnostic data. So, that's a brief overview of how to view the data in the storage account and how to write queries against that data.
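Putting the steps from this walkthrough together, the finished query might look something like this. The field names type_s and direction_s are the dynamic columns that NSG rule-counter logs typically produce in AzureDiagnostics; treat them as assumptions and verify them against your own workspace:

```
// Allowed NSG traffic, counted per direction in one-hour buckets
AzureDiagnostics
| where Category == "NetworkSecurityGroupRuleCounter"
| where type_s == "allow"
| summarize count() by direction_s, bin(TimeGenerated, 1h)
| render timechart
```

The bin() function does the time-boxing described above, and render timechart produces the in/out chart without needing to click the chart button manually.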
Matthew Quickenden is a motivated Infrastructure Consultant with over 20 years of industry experience supporting Microsoft systems, products, and solutions. He works as a technical delivery lead, managing resources, translating customer requirements and expectations into architecture, and building technical solutions. In recent years, Matthew has focused on helping businesses consume and utilize cloud technologies, with an emphasis on leveraging automation to rapidly deploy and manage cloud resources at scale.