Logging is essential given the volume and variety of data we deal with across different customer use cases. This course will enable you to take a more proactive approach to identifying faults and crashes in your applications through the effective use of Google Cloud Logging. Along the way, you will learn how to delegate operational overhead to GCP through automated logging tools, resulting in a more productive operational pipeline for your team and organization.
Learning Objectives
Through this course, you will equip yourself with the skills required to stream log data to the Google Cloud Logging service and to use metrics to understand your system's behavior. The course starts with an introduction to the Cloud Logging service and then demonstrates how to stream logs using the Cloud Logging agent and the Python client library.
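As a preview of the Python client library portion, here is a minimal, hedged sketch of writing entries with the google-cloud-logging package; the logger name and payloads are placeholders and are not taken from the course material.

```python
# Minimal sketch: writing entries to Cloud Logging with the Python client library.
# Assumes Application Default Credentials are configured; "demo-app-log" is a
# placeholder logger name.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()
logger = client.logger("demo-app-log")

# Write a plain-text entry and a structured (JSON) entry.
logger.log_text("Application started", severity="INFO")
logger.log_struct({"event": "user_login", "user_id": 42}, severity="NOTICE")
```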
Prerequisites
To get the most out of this course, you should already have an understanding of application logging.
Intended Audience
This course is suited for anyone interested in logging using Google Cloud Platform (GCP) Cloud Logging.
Resources
- Source code for this course: https://github.com/cloudacademy/Managing-Application-Logs-and-Metrics-on-GCP
- Google Cloud fluentd source code: https://github.com/GoogleCloudPlatform/google-fluentd
- Google Cloud fluentd additional configurations: https://github.com/GoogleCloudPlatform/fluentd-catch-all-config
- Google Cloud fluentd output plugin configuration: https://cloud.google.com/logging/docs/agent/logging/configuration#cloud-fluentd-config
- Package release history: https://pypi.org/project/google-cloud-logging/#history
- Metrics Explorer pricing: https://cloud.google.com/stackdriver/pricing#metrics-chargeable
In this demo, we will explore the Metrics Explorer and play with metrics. We are back in the Google Cloud Platform Console UI. First, let's navigate to Metrics Explorer by clicking on the Navigation menu, then scrolling down to Operations. Here, we hover over Monitoring and then select Metrics Explorer. If you are using the Monitoring workspace for the first time, it can take about a minute to set up the workspace.
Once you have set up the Monitoring workspace, you will see this screen. Here, we can either build a query by selecting options from the drop-down menus, or write a query using the Query Editor. For this demo, we'll use the drop-down option to build a query. Under Find resource type and metric, we can either search for a particular resource by typing its name or click on See all to view all the resource types and available metrics. For example, say we want to see the trend of CPU utilization for a particular VM instance or for all instances within a GCP project. To achieve this, I can search for VM instance and then look for CPU utilization.
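The same metric discovery can also be done against the Cloud Monitoring API. Below is a hedged sketch using the google-cloud-monitoring client library, which is not part of the UI demo; the project ID "my-project" is a placeholder.

```python
# Hedged sketch: listing CPU-related metric descriptors for Compute Engine
# instances, mirroring the "Find resource type and metric" search in the UI.
# "my-project" is a placeholder project ID.
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
descriptors = client.list_metric_descriptors(
    request={
        "name": "projects/my-project",
        "filter": 'metric.type = starts_with("compute.googleapis.com/instance/cpu")',
    }
)
for descriptor in descriptors:
    print(descriptor.type, "-", descriptor.description)
```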
Now we can filter the metric based on GCP project ID, instance name, instance ID, zone, and so on if we want to. On the right-hand side, we have the chart for this metric. We can choose from predefined time ranges like one hour, six hours, one day, or one week, or we can define a custom range using the Custom option and then setting the date and time range. Let's say we want to see the trend for one week, so we select the one-week option. Now we can view this chart as a Line graph, Stacked bar, Stacked area, or Heatmap. To save this chart, let's click on the Save button. We can provide the chart title and select the dashboard.
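For reference, an equivalent one-week, zone-filtered query can be expressed programmatically. This is a hedged sketch against the Cloud Monitoring API rather than anything shown in the UI demo; the project ID and zone are placeholders.

```python
# Hedged sketch: querying one week of per-instance CPU utilization, filtered by
# zone, with the google-cloud-monitoring client library. "my-project" and
# "us-central1-a" are placeholders.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": now},
        "start_time": {"seconds": now - 7 * 24 * 3600},  # last one week
    }
)

results = client.list_time_series(
    request={
        "name": "projects/my-project",
        "filter": (
            'metric.type = "compute.googleapis.com/instance/cpu/utilization" '
            'AND resource.labels.zone = "us-central1-a"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    print(series.resource.labels["instance_id"], len(series.points))
```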
You can select New Dashboard to create a new one. The new dashboard is now created, and we can open it to check the chart. This is the dashboard UI. If we want the data to refresh automatically, we can toggle auto refresh on. If we want to view the logs related to this metric, we click on the three dots and select View Logs. This takes us to the Logs Explorer and displays the relevant logs. Similarly, we can explore and run ad-hoc queries for any predefined or custom metrics.
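The View Logs jump can also be reproduced with the Python logging client library. Here is a hedged sketch with placeholder values; the instance ID below is made up for illustration.

```python
# Hedged sketch: fetching recent log entries for a specific VM instance, similar
# to what the "View Logs" link opens in the Logs Explorer. The instance ID is a
# placeholder.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()
log_filter = (
    'resource.type="gce_instance" '
    'AND resource.labels.instance_id="1234567890123456789" '
    'AND severity>=WARNING'
)
entries = client.list_entries(
    filter_=log_filter, order_by=cloud_logging.DESCENDING, page_size=10
)
for entry in entries:
    print(entry.timestamp, entry.severity, entry.payload)
```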
Pradeep Bhadani is an IT consultant with over nine years of experience and holds various certifications related to AWS, GCP, and HashiCorp. He is recognized as a HashiCorp Ambassador and a GDE (Google Developer Expert) in Cloud for his knowledge and contributions to the community.
He has extensive experience in building data platforms in the cloud as well as on-premises through the use of DevOps strategies and automation. Pradeep is skilled at delivering technical concepts, helping teams and individuals upskill on the latest technologies.