
Logging and Monitoring

Contents

  • Introduction
  • Clusters
  • Configuration
  • Workloads
  • Storage
  • Security
  • Demo

Overview

Difficulty: Intermediate
Duration: 55m
Students: 1077
Rating: 4.6/5

Description

Kubernetes has become one of the most common container orchestration platforms. It has regular releases, a wide range of features, and is highly extensible. Managing a Kubernetes cluster requires a lot of domain knowledge, which is why services such as GKE exist. Certain aspects of a Kubernetes cluster vary based on the underlying implementation.

In this course, we’ll explore some of the ways that GKE implements a Kubernetes cluster. Having a basic understanding of how things are implemented will set the stage for further learning.

Learning Objectives

  • Learn how Google implements a Kubernetes cluster
  • Learn how GKE implements networking
  • Learn how GKE implements logging and monitoring
  • Learn how to scale both nodes and pods

Intended Audience

  • Engineers looking to understand basic GKE functionality

Prerequisites

To get the most out of this course, you should have a general knowledge of GCP, Kubernetes, Docker, and high availability.

Transcript

Hello and welcome. In this lesson, we'll be talking about logging and monitoring inside of a GKE cluster. By the end of this lesson, you'll be able to list the two GKE Stackdriver options, describe the difference between them, describe how to enable each one, and list some application logging and application monitoring best practices.

A platform such as GKE is a combination of dozens of components, and with so many moving parts there's a lot of system-generated information. If we include our own application containers, we have even more information to track.

Back in the old days, GKE allowed us to enable Stackdriver Monitoring and/or Stackdriver Logging. While these worked, they weren't designed specifically for Kubernetes, which meant they didn't offer the best user experience.

Over time, Google rolled out Stackdriver Kubernetes Engine Monitoring as a single Kubernetes-focused dashboard. Because it was designed specifically to monitor Kubernetes, it provides a clearer picture of the cluster and the workloads running in it. This tailored solution makes it much easier to keep track of the cluster as a whole, including its services, workloads, and so on.

Today, Google calls the original implementation the legacy Stackdriver option. It's being phased out, and as of GKE 1.15 it's no longer available to select. Stackdriver Kubernetes Engine Monitoring has been the default since GKE 1.14. As for enabling and disabling the two versions: the legacy option is controlled via checkboxes in the console or a couple of SDK parameters on the command line, while the new option is just a single checkbox in the console or a single flag on the command line, as sketched below.
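To make that concrete, here's a minimal Python sketch that shells out to the gcloud CLI. The flag names (--enable-stackdriver-kubernetes for the new option, --enable-cloud-logging and --enable-cloud-monitoring for the legacy one) reflect the SDK as it existed around the GKE versions mentioned above and may since have changed; the cluster name and zone are hypothetical.

```python
import subprocess

# Hypothetical cluster name and zone, used purely for illustration.
CLUSTER = "demo-cluster"
ZONE = "us-central1-a"

# New option: Stackdriver Kubernetes Engine Monitoring is a single flag.
new_option = [
    "gcloud", "container", "clusters", "create", CLUSTER,
    "--zone", ZONE,
    "--enable-stackdriver-kubernetes",
]

# Legacy option: logging and monitoring were two separate flags.
legacy_option = [
    "gcloud", "container", "clusters", "create", CLUSTER,
    "--zone", ZONE,
    "--enable-cloud-logging",
    "--enable-cloud-monitoring",
]

if __name__ == "__main__":
    # Run whichever variant applies; check=True surfaces gcloud errors.
    subprocess.run(new_option, check=True)
```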

Stackdriver Kubernetes Engine Monitoring doesn't explicitly mention logging in its name, though it does include logging data. By default, it captures both system and application logs, though it's possible to ignore the application logs and capture only system logs.

Let's talk a bit more about application logging. The Docker standard for logging is to write logs to standard output and standard error, and Kubernetes follows the same recommendation.

Every line that's written to standard out or standard error is its own log entry. This can make troubleshooting things such as stack traces a bit difficult, because they end up spread across multiple lines, which is cumbersome to work with. So another application logging best practice is to use JSON logs, where each log entry consists of a single line of JSON. The reason for this recommendation is that JSON-formatted logs are easier to parse with automated tools.
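As a quick illustration of the single-line JSON idea, here's a minimal Python sketch; the field names are just an example, not a required schema.

```python
import json
import sys
import traceback
from datetime import datetime, timezone

def log(severity, message, **fields):
    """Write one log entry as a single line of JSON on standard out."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "severity": severity,
        "message": message,
        **fields,
    }
    print(json.dumps(entry), file=sys.stdout, flush=True)

try:
    1 / 0
except ZeroDivisionError:
    # The whole stack trace travels inside one JSON entry instead of
    # being split into one log line per line of the traceback.
    log("ERROR", "division failed", stack_trace=traceback.format_exc())
```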

For some applications, we don't have the ability to adjust the logging or direct it to standard out. In that case, the recommendation is to use sidecar logging. This involves adding a logging agent container that runs inside the same pod as the application we want to log data from, so that the logging agent can pick up the log files from the core application and write them to standard out. A rough sketch of the pattern follows.
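To make the sidecar pattern concrete, here's a minimal sketch that builds such a pod with the Kubernetes Python client; the image names, file path, and pod name are illustrative assumptions rather than anything from the course.

```python
from kubernetes import client, config

# A pod where the app writes to a file on a shared emptyDir volume and a
# simple sidecar tails that file to its own standard out, where the
# cluster's logging agent can pick it up.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="app-with-log-sidecar"),
    spec=client.V1PodSpec(
        volumes=[client.V1Volume(
            name="app-logs",
            empty_dir=client.V1EmptyDirVolumeSource(),
        )],
        containers=[
            client.V1Container(
                name="app",
                image="example.com/legacy-app:1.0",  # hypothetical image
                volume_mounts=[client.V1VolumeMount(
                    name="app-logs", mount_path="/var/log/app")],
            ),
            client.V1Container(
                name="log-sidecar",
                image="busybox",
                command=["sh", "-c", "tail -n+1 -F /var/log/app/app.log"],
                volume_mounts=[client.V1VolumeMount(
                    name="app-logs", mount_path="/var/log/app")],
            ),
        ],
    ),
)

if __name__ == "__main__":
    config.load_kube_config()  # assumes a local kubeconfig
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```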

In addition to application logging, sometimes there are application-level metrics worth tracking. Monitoring application containers can help us get a better understanding of our workloads. GKE uses an open-source service called Prometheus that's capable of auto-discovering pods. Prometheus expects an HTTP endpoint called /metrics that returns metrics in a particular format. This isn't something you need to create yourself; rather, if you want to monitor application container metrics, it's best to use the Prometheus client libraries, as shown below. For applications that are difficult or impractical to modify, you can use a metrics sidecar in the same way you'd use the logging sidecar.
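To show what using a Prometheus client library looks like in practice, here's a minimal sketch with the official Python client, prometheus_client; the metric name and port are arbitrary examples.

```python
import random
import time

from prometheus_client import Counter, start_http_server

# A simple counter; the name and help text are illustrative only.
REQUESTS = Counter("demo_requests_total", "Requests handled by the demo app.")

if __name__ == "__main__":
    # Serves text-format metrics at http://localhost:8000/metrics,
    # which is the endpoint Prometheus scrapes.
    start_http_server(8000)
    while True:
        REQUESTS.inc()
        time.sleep(random.random())
```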

Alright, that's going to wrap up this lesson. Thank you so much for watching and I will see you in the next lesson.

About the Author
Students: 57793
Courses: 19
Learning paths: 15

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.