Analyzing Resource Utilization on Azure

Viewing and Accessing Diagnostic Data

The course is part of these learning paths

AZ-103 Exam Preparation: Microsoft Azure Administrator

Contents

  • Welcome
  • Azure Advisor Cost Recommendations
  • Resource Baseline
  • Monitoring Cost Consumption
  • Cost Management Report
  • Cloudyn (5m 16s)
  • Conclusion

Overview

Difficulty: Intermediate
Duration: 54m
Students: 481

Description

This course looks at how to capture log data and metrics from Azure services and feed this information into different locations for processing. We take a look at diagnostic logging, which can help troubleshoot services, and at creating queries and alerts based on that data. We also look at Azure Advisor, cost consumption reporting, and how to baseline resources. This course aims to be an introduction to these advanced areas of Azure services.

 

Learning Objectives

  • Understand how to use and configure diagnostic logging for Azure services
  • Gain an understanding of Azure Monitor and how to create and manage alerts
  • Review cost consumption reporting and how to create scheduled reports
  • Investigate different methods for baselining resources

Intended Audience

  • People who want to become Azure cloud architects
  • People preparing for Microsoft’s AZ-100 or AZ-300 exam

Prerequisites

  • General knowledge of Azure services

 

For more MS Azure-related training content, visit our dedicated MS Azure Training Library.

Transcript

So what we're going to focus on in this session is looking at the diagnostic data that we've sent to the storage account and to Log Analytics. We're gonna have a look, interrogate the data, see what's there, and run a query against it. First thing, we'll look at the storage account because that's an easy step. We can use the Storage Explorer built into Azure, which is currently in preview, or we can use Storage Explorer as a separate application. We'll expand the subscription and the storage blob where we put that data, and if we have a look here under blob containers we can see we've got insights logs for the network security group event and the network security group rule counter. If we drill into one of these containers, we can see we've got resource ID, subscriptions, and we'll just go down through this chain: resource groups, Jenkins.
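As a rough companion to that walkthrough, here's a minimal Python sketch of listing those hourly diagnostic blobs outside the portal. It assumes the azure-storage-blob package and a storage account connection string you supply; the container name follows the insights-logs convention mentioned above, and the placeholder values are hypothetical.

```python
# A minimal sketch: list the hourly NSG diagnostic blobs that the transcript browses
# with Storage Explorer. Assumes the azure-storage-blob package; the connection string
# is a placeholder you would supply.
from azure.storage.blob import BlobServiceClient

CONN_STR = "<storage-account-connection-string>"  # placeholder

service = BlobServiceClient.from_connection_string(CONN_STR)

# Diagnostic settings create one container per log category, e.g.
# insights-logs-networksecuritygroupevent and insights-logs-networksecuritygrouprulecounter.
container = service.get_container_client("insights-logs-networksecuritygroupevent")

# Blob names encode the resource ID and the date/hour, ending in PT1H.json (one file per hour).
for blob in container.list_blobs(name_starts_with="resourceId=/SUBSCRIPTIONS/"):
    print(blob.name)
```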

So, each one of these providers that you have sending to the storage account will have a separate folder. Then further down the path we've got the date and time, broken into day, hour, and minute, and at the end of that we have a PT1H.json file. If we download that file we can see inside it a lot of information about the logging that we've collected. If we scroll across we can see we've got deny, direction, priority. There's a lot of information regarding the resource type that we've recorded; it's very specific to a network security group. So these logs exist as one file per folder, and you'll see the breadcrumb trail to that log here. If we go back to the tables under the storage account, we can see there's also Windows diagnostic information. This is from the event table. Under Windows metrics we can see we've got counters: disk times, CPU times, memory counters, page faults. It's all those standard Windows counters, but stored in a table that we can query with many different tools. So, that's showing us what we've got in the storage account.
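If you'd rather inspect one of those PT1H.json files programmatically, a hedged sketch along these lines could work, again assuming azure-storage-blob. The blob path is a hypothetical example of the naming convention, and since the JSON layout can be either a single "records" document or line-delimited records, both shapes are handled.

```python
# A minimal sketch: download one hourly PT1H.json blob and print a few NSG event fields
# (time, direction, type, priority). The blob path is a hypothetical example of the
# naming convention; point it at a real blob from the listing above.
import json

from azure.storage.blob import BlobServiceClient

CONN_STR = "<storage-account-connection-string>"  # placeholder
BLOB_NAME = (
    "resourceId=/SUBSCRIPTIONS/<subscription-id>/RESOURCEGROUPS/JENKINS/PROVIDERS/"
    "MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/<nsg-name>/"
    "y=2019/m=01/d=01/h=00/m=00/PT1H.json"
)

service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client("insights-logs-networksecuritygroupevent")
raw = container.download_blob(BLOB_NAME).readall().decode("utf-8")

# Older exports wrap everything in a single {"records": [...]} document; newer exports
# write one JSON record per line, so handle both shapes.
try:
    records = json.loads(raw).get("records", [])
except json.JSONDecodeError:
    records = [json.loads(line) for line in raw.splitlines() if line.strip()]

for record in records:
    props = record.get("properties", {})
    print(record.get("time"), props.get("direction"), props.get("type"), props.get("priority"))
```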

Next we're gonna go to the Log Analytics instance. Go over here and log in. So if we go to the Azure diagnostics Log Analytics instance, open the Workspace Summary, and click Add. We're gonna jumpstart our query language here with some pre-canned Microsoft queries around network security groups. If I scroll down we can find Azure Network Security Group Analytics. This gives you a preview of what's there, and we'll click Create. So, that's been created. If we go to the resource, we can see the chart here is actually displaying the data that we've already been collecting. Click on that summary chart and we'll drill into the solution itself, where we can see some additional queries. What we're really interested in here is the data that's backing these queries. This is all the stuff that we've set up from the security groups, and we want to learn how to query it ourselves. This is diagnostic information that we wanna enrich and display through graphs, charts, or Power BI, however we need to display that information. In this case, we'll drill into one of these existing queries, and we can see it's basically written the query for you, with some information here on the side and other fields you could filter by. If we look at the options here, we can export truncated CSV files or go to Power BI. Even so, we really wanna understand how to write our own queries, because that's where we're gonna be able to get specifically what we're interested in seeing. So, from here we can go to the advanced analytics, and on the left we can see the Azure diagnostics workspace and then Log Management. In this case, we're just going to delete this query and start again. If we double-click the table on the left and click Run, which is a very simple start, we can see we've been given 10 thousand items, which is the limit.
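The same kind of exploratory query can also be run from code rather than the portal. Below is a minimal sketch, assuming the azure-monitor-query and azure-identity packages and a workspace ID you supply; the AzureDiagnostics table is the one double-clicked above, while the timespan and row cap are just illustrative choices.

```python
# A minimal sketch: run an exploratory query against the AzureDiagnostics table in a
# Log Analytics workspace. Assumes azure-monitor-query and azure-identity; the workspace
# ID is a placeholder, and the timespan and row cap are illustrative choices.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Roughly equivalent to double-clicking the table and hitting Run in the portal,
# but capped explicitly instead of relying on the portal's 10,000-row limit.
response = client.query_workspace(
    WORKSPACE_ID,
    "AzureDiagnostics | take 100",
    timespan=timedelta(hours=24),
)

for table in response.tables:
    for row in table.rows[:5]:  # print a small sample of the returned records
        print(row)
```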

If we expand a record we can see more information around the specific query; there are all the different fields in the blob. So if we click on direction, we want to find the blocked traffic. Down here you can click on a value and that'll actually put in the filter for you, so: where type_s is equal to allow. When we run this query, we can see we now have 4,564 records. We also want to summarize this. This query window has IntelliSense built in; if you've used something with SQL-like syntax such as T-SQL, or PowerShell, there are a lot of similarities, although this language is different. The IntelliSense will help us summarize and understand what we need to do. Here we can see that if we click summarize, there's some information explaining how to use it: we can do a count by and then the field we want. In our case we want to summarize a count of records by direction, and we can see we've had as many packets going in as we have going out. The next thing we're really interested in doing is seeing this in time boxes, so what we're gonna use is a bin. Bins allow us to aggregate this data into buckets of time. So if we do bin, it'll need TimeGenerated, and we'll do one-hour buckets, so we just type in 1h there. If we run that query we can now see the total traffic going in and out over time, and if we click chart we can see the packets coming in and out and how we've split that data. Once we've got our specific queries we can create alert rules. Again, from this diagnostic information you can generate charts to put back on the dashboard yourself. There are a lot of other things you can do once you have this query over the diagnostic data. So, that's a brief overview of how to view the data in the storage account and how to write queries against that data.
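Pulling the pieces of this walkthrough together, a hedged version of the finished query could be run the same way from code. The column names (type_s, direction_s) and the "allow" value are assumptions taken from the transcript, so verify them against your own AzureDiagnostics schema.

```python
# A minimal sketch: the filter + summarize + bin query built up in the walkthrough,
# run via azure-monitor-query. Column names (type_s, direction_s) and the "allow"
# value are assumptions taken from the transcript; check them against your schema.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

QUERY = """
AzureDiagnostics
| where type_s == "allow"
| summarize record_count = count() by direction_s, bin(TimeGenerated, 1h)
| order by TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

# Each row is (direction, hour bucket, count) - the same data the portal renders as a chart.
for table in response.tables:
    for direction, hour, count in table.rows:
        print(f"{hour}  {direction}  {count}")
```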

About the Author

Students: 512
Courses: 2

Matthew Quickenden is a motivated Infrastructure Consultant with over 20 years of industry experience supporting Microsoft systems, products, and solutions. He works as a technical delivery lead, managing resources, understanding and translating customer requirements and expectations into architecture, and building technical solutions. In recent years, Matthew has focused on helping businesses consume and utilize cloud technologies, with an emphasis on leveraging automation to rapidly deploy and manage cloud resources at scale.