
Workloads

Contents

  • Introduction (2m 36s)
  • Clusters
  • Configuration
  • Workloads (6m 30s)
  • Storage (6m 59s)
  • Security (4m 11s)
  • Demo

Overview

Difficulty: Intermediate
Duration: 55m
Students: 1077
Rating: 4.6/5

Description

Kubernetes has become one of the most common container orchestration platforms. It has regular releases, a wide range of features, and is highly extensible. Managing a Kubernetes cluster requires a lot of domain knowledge, which is why services such as GKE exist. Certain aspects of a Kubernetes cluster vary based on the underlying implementation.

In this course, we’ll explore some of the ways that GKE implements a Kubernetes cluster. Having a basic understanding of how things are implemented will set the stage for further learning.

Learning Objectives

  • Learn how Google implements a Kubernetes cluster
  • Learn how GKE implements networking
  • Learn how GKE implements logging and monitoring
  • Learn how to scale both nodes and pods

Intended Audience

  • Engineers looking to understand basic GKE functionality

Prerequisites

To get the most out of this course, you should have a general knowledge of GCP, Kubernetes, Docker, and high availability.

Transcript

Hello and welcome. The primary function of a container orchestration platform is to run containerized applications. Kubernetes categorizes the different types of containerized applications generically as workloads. This lesson is going to focus on some of the different types of workloads. By the end of this lesson, you'll be able to describe the use case for these different workloads, describe how to deploy workloads, and describe how to use node pools to support specific host requirements.

Engineers interact with Kubernetes in different ways. When an SRE looks at a cluster, they see the individual components. An application developer sees node pools, but as singular entities: to them, a pool isn't a collection of nodes, it's a place to deploy applications with particular hardware requirements.

Recall that workload is a synonym for containerized application. At its core, Kubernetes is responsible for running pods, and a pod is an abstraction that defines one or more co-located containers. Pods are the smallest unit of deployment. On their own, they're basically unmanaged processes; by that I mean if the process stops for any reason, it's not going to be automatically rescheduled.
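
To make that concrete, here is a minimal sketch of a bare pod manifest; the name and image are illustrative placeholders, not from the course:

```yaml
# pod.yaml -- a bare, unmanaged pod; nothing reschedules it if its node goes away
apiVersion: v1
kind: Pod
metadata:
  name: hello-web
spec:
  containers:
    - name: web
      image: nginx:1.25   # any container image would do here
      ports:
        - containerPort: 80
```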

Now, because pods are just ephemeral processes, it's not common to use them directly. They're more of a lower-level building block.

The way that Kubernetes expects most of us to interact with pods is through the use of a Kubernetes controller, which introduces higher-level functionality related to the management and lifecycle of pods.

While there are different types, we're going to focus on three. Deployments are useful for stateless applications. Deployments create and manage identical pods called replicas, based on a pod template. If a pod stops or becomes unresponsive, the deployment's controller will replace it.
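
As a rough sketch of what that looks like (names, labels, and image below are placeholders), a deployment wraps a pod template in a replica count and a label selector:

```yaml
# deployment.yaml -- three identical replicas managed by the Deployment controller
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:                # pod template used to stamp out each replica
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

If one of the three pods is deleted or stops responding, the controller creates a replacement to get back to the declared replica count.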

Stateful sets live up to their name: they're useful for stateful applications, things such as databases. Stateful sets create and manage pods that are expected to have a level of persistence, allowing for stable network identifiers and persistent storage.
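
A minimal sketch of a stateful set, assuming a matching headless Service named db already exists (names, image, and storage size are placeholders): each replica gets a stable name (db-0, db-1, ...) and its own volume claim.

```yaml
# statefulset.yaml -- stable identities plus per-replica persistent storage
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # headless Service providing the stable network identity
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:    # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```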

Daemon sets create one pod per node, which makes them useful for background tasks such as monitoring agents.
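
A sketch of a daemon set (the name and agent image are placeholders): notice there is no replica count, because the controller runs one pod on every node.

```yaml
# daemonset.yaml -- one pod per node, e.g. a logging or monitoring agent
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: fluentd:v1.16   # placeholder agent image
```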

The way we deploy a workload is through the Kubernetes API server. Using the API directly isn't all that human-friendly, though, so we commonly use the kubectl binary to interact with the API. We can also create workloads in the console; however, the console currently only creates deployments.

If you're still new to Kubernetes and don't have the kubectl binary installed, there are some easy options to get up and running.

First, Cloud Shell has the kubectl binary pre-installed, making it a nice way to get up and running without having to install it yourself. Another easy option is the Cloud SDK, which can install kubectl for you.
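
For example, with the Cloud SDK installed, something like the following gets kubectl set up and pointed at a cluster; the cluster name and zone are placeholders:

```sh
# Install kubectl as a Cloud SDK component (already present in Cloud Shell)
gcloud components install kubectl

# Fetch credentials for an existing GKE cluster so kubectl can talk to it
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Verify connectivity
kubectl get nodes
```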

Kubernetes allows us to interact with different Kubernetes objects in both an imperative and declarative way. Now, that includes workloads, which means there are a few ways to create and manage workloads.

So we have imperative and declarative, and the core difference between the two approaches is that the imperative approach requires us to tell Kubernetes which actions to perform, whereas the declarative approach tells Kubernetes what we want and lets it figure out how to produce our desired state.

There are two imperative approaches and one declarative approach. The imperative approaches are imperative commands and imperative object configuration. Kubernetes has several imperative commands, such as kubectl run, expose, and autoscale. These commands allow for one-off operations where you specify parameters as command-line arguments. For example, you could create a deployment with a single command and without any YAML files. The downside to this option is that it limits the amount of configuration we can specify.
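
As a sketch of the imperative-command style (names and image are placeholders, and exact flags vary a bit across kubectl versions):

```sh
# Create a deployment imperatively, no YAML file involved
kubectl create deployment hello-web --image=nginx:1.25

# Scale it out and expose it behind a Service
kubectl scale deployment hello-web --replicas=3
kubectl expose deployment hello-web --port=80 --type=LoadBalancer

# Autoscale between 3 and 10 replicas based on CPU utilization
kubectl autoscale deployment hello-web --min=3 --max=10 --cpu-percent=70
```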

The kubectl create, delete, and replace commands are examples of imperative object configuration. These options are still imperative; however, they leverage YAML configuration files. If we run kubectl create to create a deployment and then run it again, we'll get an error because that resource already exists. So the imperative approach requires us to know the current state of the objects we're working with.
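
A sketch of that behavior, using the hypothetical deployment.yaml from earlier (the output shown in comments is approximate):

```sh
# Create the object described in the file
kubectl create -f deployment.yaml
# deployment.apps/hello-web created

# Running the exact same command again fails -- the object already exists
kubectl create -f deployment.yaml
# Error from server (AlreadyExists): deployments.apps "hello-web" already exists

# replace and delete similarly assume we already know the object is there
kubectl replace -f deployment.yaml
kubectl delete -f deployment.yaml
```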

If it's already created, then telling Kubernetes to create it again will throw an error. The kubectl apply command is an example of the declarative approach. The declarative approach compares our desired state to the current state and figures out how to make the two match. The declarative approach is idempotent, meaning it can run multiple times without side effects: once the current state matches the desired state, there's nothing for Kubernetes to do, so running the same command over and over again changes nothing. Now, neither approach is right or wrong. Like most things in the tech world, it's all about trade-offs.
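
The same file, applied declaratively; again the output shown in comments is approximate:

```sh
# First run: the object doesn't exist yet, so it gets created
kubectl apply -f deployment.yaml
# deployment.apps/hello-web created

# Later runs with the same file are no-ops once current state matches desired state
kubectl apply -f deployment.yaml
# deployment.apps/hello-web unchanged
```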

If we were to use this information to describe how to deploy workloads, we might say, "Workloads are deployed through the Kubernetes API server, commonly via the kubectl binary. There are three approaches to interacting with Kubernetes objects, which are imperative commands, imperative object configuration, and declarative, where each has its pros and cons and is suitable for its own use cases."

When deploying workloads to a cluster that has only the default node pool, we can consider all nodes to be equal. However, if we need to support different hardware configurations, we can create additional pools. Kubernetes provides different methods for specifying on which nodes pods are run.

Node pools allow us to attach labels to nodes. Once nodes are labeled, we can tell Kubernetes to only deploy our workloads to nodes with a matching label, and the way we set that is with the nodeSelector field in the pod spec.
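
GKE labels every node with its pool name under the cloud.google.com/gke-nodepool key, so a sketch of targeting a specific pool looks like the fragment below; the pool name and image are placeholders.

```yaml
# Fragment of a pod (template) spec: run only on nodes in the "high-memory-pool" pool
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: high-memory-pool   # label GKE adds to each node
  containers:
    - name: web
      image: nginx:1.25
```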

Another option is to use resource requirements, which allow us to define things such as CPU, memory, and so on. With this method we're not targeting nodes based on labels; rather, we're targeting them indirectly by specifying our hardware requirements.
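
A sketch of the resource-requirements approach (the numbers are placeholders): the scheduler will only place this pod on a node that can reserve the requested CPU and memory.

```yaml
# Fragment of a pod (template) spec: request 2 CPUs and 4Gi, cap at 4 CPUs and 8Gi
spec:
  containers:
    - name: worker
      image: nginx:1.25
      resources:
        requests:
          cpu: "2"
          memory: 4Gi
        limits:
          cpu: "4"
          memory: 8Gi
```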

Yet another option is node affinity, which is similar to the node selector. The difference is that node affinity is more flexible, including the ability to specify soft requirements, which are basically strong preferences rather than hard requirements.
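
A sketch of a soft requirement expressed as preferred node affinity (the pool name and image are placeholders): the scheduler favors matching nodes but will still place the pod elsewhere if it has to.

```yaml
# Fragment of a pod (template) spec: prefer, but don't require, nodes in "ssd-pool"
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: cloud.google.com/gke-nodepool
                operator: In
                values:
                  - ssd-pool
  containers:
    - name: web
      image: nginx:1.25
```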

All right, let's summarize. The term "workload" is used generically to mean containerized applications. The different types of workloads are based on the different Kubernetes controllers of which there are several including deployments, stateful sets, and daemon sets. Workloads can be deployed to specific nodes by using node selectors, node affinity selectors, and resource requirements.

All right, that's going to wrap up this lesson. Thank you so much for watching and I will see you in the next lesson.

About the Author

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.