
Diagnostic Logging

The course is part of these learning paths:

  • AZ-500 Exam Preparation: Microsoft Azure Security Technologies
  • AZ-103 Exam Preparation: Microsoft Azure Administrator

Contents

  • Welcome
  • Introduction
  • Overview
  • Azure Advisor Cost Recommendations
  • Resource Baseline
  • Monitoring Cost Consumption
  • Cost Management Report
  • Cloudyn
  • Conclusion

Overview

Difficulty: Intermediate
Duration: 54m
Students: 1122
Rating: 4.3/5

Description

This course looks at how to capture log data and metrics from Azure services and feed this information into different locations for processing. We take a look at diagnostic logging, which can help troubleshoot services, and at creating queries and alerts based on that data. We also look into Azure Advisor, cost consumption reporting, and how we can baseline resources. This is an introduction to these advanced areas of Azure services.

Learning Objectives

  • Understand how to use and configure diagnostic logging for Azure services
  • Gain an understanding of Azure Monitor and how to create and manage alerts
  • Review cost consumption reporting and how to create scheduled reports
  • Investigate different methods for baselining resources

Intended Audience

  • People who want to become Azure cloud architects
  • People preparing for Microsoft’s AZ-100 or AZ-300 exam

Prerequisites

  • General knowledge of Azure services

For more MS Azure-related training content, visit our Microsoft Azure Training Library.

Transcript

Azure provides different logging options around resources. We are going to take a look at the diagnostic options available to tenants. Azure Monitor helps facilitate the logging and collection of these logs. There are three types of logs we need to be aware of: activity logs, diagnostic logs, and application logs (or guest OS logs). Let's take a look at where these logs exist within an Azure subscription in relation to the resources they are monitoring. Here we have a non-compute resource, which is tightly integrated with and delivered through Azure providers, for example a network security group. Next to this, we have a compute resource.

This is a virtual machine with a guest OS, like Windows or Linux, and it has an application installed, like IIS or Apache. Activity logs provide a record of operations at the subscription level, executed against a resource. For example, when administrative tasks are performed on the resource, like creating a resource or updating the properties of an existing resource, this will generate an event in the activity log. Diagnostic logs are collected within a subscription at the Azure resource level for services like VPN gateways or network security groups. Not all Azure services have an option for diagnostic logging, and the level of detail you can capture varies. You can view a full list of resources that support diagnostic logging on the Microsoft Azure website. Application logs are logs generated by applications or services within a guest OS. These logs are collected from within the operating system through an agent. Application logs can be collected from core services, like Windows event logs, or from applications like IIS. Diagnostic logging can be enabled in a couple of ways: using the Azure portal, PowerShell, the Azure CLI, or the REST API via Azure Resource Manager.

When enabling diagnostic logging, you can choose where you want to export your logs. You can export them to Log Analytics, to an event hub, or directly to a storage account. The Azure portal allows us to easily browse through resources to enable diagnostic logging. Select the resource in question and, under the Monitoring heading, you should see Diagnostic settings. This opens a blade that contains diagnostic logging options for the type of resource you're currently looking at. If it's not already enabled, you'll see an option to turn on diagnostics to enable diagnostic collection. This will give you a choice of the export locations we looked at earlier and the options related to them. For instance, when you select a storage account, you get a choice of which storage account and its retention settings. You will also notice different log events to capture, specific to the resource you're working on. One of the first steps to take if you're experiencing an issue is to select the resource and go to the Diagnose and solve problems blade.
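When going the REST API route, enabling diagnostic logging amounts to a PUT against the resource's Microsoft.Insights/diagnosticSettings provider with a JSON body describing the destination and the log categories. As a rough sketch (the resource IDs and category names below are illustrative placeholders, not values from the course), the body for a storage account destination could be built like this:

```python
# Sketch: building the request body for a PUT to
# {resourceUri}/providers/Microsoft.Insights/diagnosticSettings/{name}.
# Storage account ID and log categories are hypothetical examples.

def build_diagnostic_settings_payload(storage_account_id, log_categories,
                                      retention_days=30):
    """Return a diagnostic settings body exporting logs to a storage account."""
    return {
        "properties": {
            "storageAccountId": storage_account_id,
            "logs": [
                {
                    "category": category,
                    "enabled": True,
                    "retentionPolicy": {"enabled": True, "days": retention_days},
                }
                for category in log_categories
            ],
        }
    }

payload = build_diagnostic_settings_payload(
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demo-rg"
    "/providers/Microsoft.Storage/storageAccounts/demologs",
    ["NetworkSecurityGroupEvent", "NetworkSecurityGroupRuleCounter"],
)
```

Swapping the `storageAccountId` property for a `workspaceId` or event hub settings would target the other export locations mentioned above.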

This shows any general issues related to the resource's health. With regard to virtual machines, on the Diagnostic settings blade you have an option to enable guest-level monitoring. Once enabled, you'll be able to collect logs from within the system without having to set up anything inside the system yourself. We will investigate more of these options in the demonstration. Log Analytics, formerly known as Operations Management Suite, or OMS, is a log search and analytics tool. It allows you to collect logs from different sources and correlate the data. You can write queries and create charts and graphs to help gain operational insight into your environment. You can create alerts based on metric thresholds or activity log events and consume pre-built management solutions, which include queries and graphs. We can also send our diagnostic data to an event hub, which is a big data streaming platform. It can receive, process, and transform thousands of events per second. Data can be stored or displayed as needed. Anomaly detection, live dashboards, and application logging are some of the ways we can utilize event hubs. Finally, we can send logs in their raw format to a storage account. It is important to understand the naming convention used to store them. Logs are broken down by subscription, resource group, provider, and, finally, date and time, with a JSON file called PT1H.json. An example of the file structure in the blob would look like this. The PT1H.json file contains an array of records, which we can see in the image shown.
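The blob naming convention described above can be sketched in code. This is a minimal illustration under the stated assumptions: the subscription, resource group, and NSG names are placeholders, and the reader follows the transcript's description of PT1H.json as holding an array of records:

```python
import json

# Sketch of the hourly blob path for diagnostic logs in a storage account.
# Path segments (subscription, resource group, provider, date/time) follow
# the naming convention described above; all identifiers are placeholders.

def pt1h_blob_path(subscription, resource_group, provider, resource_type,
                   resource, year, month, day, hour):
    """Build the blob name for one hour of diagnostic logs."""
    return (
        f"resourceId=/SUBSCRIPTIONS/{subscription}"
        f"/RESOURCEGROUPS/{resource_group}"
        f"/PROVIDERS/{provider}/{resource_type}/{resource}"
        f"/y={year:04d}/m={month:02d}/d={day:02d}/h={hour:02d}/m=00/PT1H.json"
    )

def load_records(raw_json):
    """PT1H.json holds its entries under a top-level 'records' array."""
    return json.loads(raw_json)["records"]

path = pt1h_blob_path("0000-SUB", "DEMO-RG", "MICROSOFT.NETWORK",
                      "NETWORKSECURITYGROUPS", "DEMO-NSG", 2019, 6, 1, 0)
records = load_records(
    '{"records": [{"operationName": "NetworkSecurityGroupEvents"}]}')
```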

To help collate different logs and determine the correlation between events, it is important to have a common structure to the logs, and Azure diagnostic logs have that. There is a top-level diagnostic schema that includes several required fields and some optional fields. Required fields include time, resource ID, tenant, and operation name. Some optional fields are duration, correlation ID, and location. Each type of service has its own specific data fields that relate to it. For instance, auditing within Azure Active Directory includes AuditEventCategory, IdentityType, OperationType, TargetResourceType, TargetResourceName, and AdditionalTargets. In the case of Azure Automation accounts, the additional schema fields include RunbookName, ResultType, and ResultDescription. On a virtual machine, you can enable boot diagnostics.
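A common structure makes the records easy to validate programmatically. The sketch below checks a record against the fields listed above; it is not the exhaustive schema, and the assumption here is that resource-level logs carry a resource ID while tenant-level logs (such as Azure AD audit events) carry a tenant ID instead:

```python
# Sketch: checking a diagnostic log record against the top-level schema
# fields described above. Field sets are illustrative, not exhaustive.

REQUIRED_FIELDS = {"time", "operationName"}
OPTIONAL_FIELDS = {"durationMs", "correlationId", "location"}

def missing_required_fields(record, tenant_level=False):
    """Return required schema fields absent from a log record, sorted."""
    required = REQUIRED_FIELDS | ({"tenantId"} if tenant_level else {"resourceId"})
    return sorted(required - record.keys())

record = {
    "time": "2019-06-01T00:00:00Z",
    "resourceId": "/SUBSCRIPTIONS/0000/RESOURCEGROUPS/DEMO-RG",
    "operationName": "NetworkSecurityGroupEvents",
    "correlationId": "abc-123",
}
```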

Once you specify a storage account, you can then view the serial log output from the virtual machine. This is particularly useful for Linux machines, as they often output logs to the console screen. You can see and download a screenshot of the current state of the virtual machine from the screenshot tab. This is also accessible through PowerShell for automation. Using Resource Manager templates, you can automatically enable diagnostic settings at the time of resource creation. Using templates helps ensure consistency in how settings are configured. You may decide you always want to export diagnostic logs to specific services, like an event hub or a central Log Analytics instance. Templates allow standards to be applied to resources at the time of deployment without user intervention. The resource type that supports this is providers/diagnosticSettings. Here is an example of a diagnostic resource. If we look closely, we can see the storage account ID, event hub authorization rule, event hub name, and workspace ID. There are also settings specifying which logs and metrics to capture, along with their retention periods.
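A nested providers/diagnosticSettings resource of the kind described above might look roughly like the following sketch. The parameter names, log category, and retention values are placeholder assumptions; the destination here is a Log Analytics workspace, but the storage account or event hub properties could be used instead:

```json
{
  "type": "providers/diagnosticSettings",
  "name": "Microsoft.Insights/service",
  "apiVersion": "2017-05-01-preview",
  "dependsOn": [
    "[resourceId('Microsoft.Network/networkSecurityGroups', parameters('nsgName'))]"
  ],
  "properties": {
    "workspaceId": "[parameters('logAnalyticsWorkspaceId')]",
    "logs": [
      {
        "category": "NetworkSecurityGroupEvent",
        "enabled": true,
        "retentionPolicy": { "enabled": true, "days": 30 }
      }
    ],
    "metrics": []
  }
}
```

Declared inside the parent resource's template definition, this applies the diagnostic settings automatically at deployment time, without user intervention.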

 

About the Author


Matthew Quickenden is a motivated infrastructure consultant with over 20 years of industry experience supporting Microsoft systems, products, and solutions. He works as a technical delivery lead, managing resources, translating customer requirements and expectations into architecture, and building technical solutions. In recent years, Matthew has focused on helping businesses consume and utilize cloud technologies, leveraging automation to rapidly deploy and manage cloud resources at scale.