Kubernetes has become one of the most common container orchestration platforms. It has regular releases, a wide range of features, and is highly extensible. Managing a Kubernetes cluster requires a lot of domain knowledge, which is why services such as GKE exist. Certain aspects of a Kubernetes cluster vary based on the underlying implementation.
In this course, we’ll explore some of the ways that GKE implements a Kubernetes cluster. Having a basic understanding of how things are implemented will set the stage for further learning.
- Learn how Google implements a Kubernetes cluster
- Learn how GKE implements networking
- Learn how GKE implements logging and monitoring
- Learn how to scale both nodes and pods
- Engineers looking to understand basic GKE functionality
To get the most out of this course, you should have a general knowledge of GCP, Kubernetes, Docker, and high availability.
Hello and welcome. In this lesson, we're going to answer the not-so-age-old question: "What is GKE?" This is a rather basic and open-ended question, so let's drill down on some specific areas of focus. By the end of this lesson, you'll be able to describe the difference between Kubernetes and GKE, describe how GKE implements a Kubernetes cluster, and list three types of cluster configuration that impact availability.
A Kubernetes cluster consists of multiple components all working together. Configuring, securing, running, and maintaining a Kubernetes cluster requires a lot of domain knowledge and effort. GKE mitigates the amount of domain knowledge and effort required on our part by taking responsibility for most of the system management tasks.
So, to start, let's describe the difference between Kubernetes and GKE. Kubernetes is an open-source container orchestration system. GKE is a Google-managed implementation of Kubernetes, intended to simplify the creation and operation of Kubernetes clusters. At a high level, we can break a Kubernetes cluster down into two pieces: the control plane and the nodes. In a Kubernetes cluster, the control plane components can run on any host, including the nodes. GKE implements the control plane using the very common pattern of running all of the components on the same host, and that host is called the cluster master.
The cluster master, sometimes referred to as just the master in the context of a Kubernetes cluster, runs all of the Kubernetes control plane components inside of a Google-managed virtual machine instance. The master runs inside of a Google-owned project to which we do not have access.
The cluster master is the single source of truth regarding the cluster's state, and it's responsible for tasks such as container scheduling. As a component of the control plane, the Kubernetes API server process runs on the master, which means that when we use the kubectl binary, we're interacting with the cluster master. Nodes are GKE-managed Compute Engine (GCE) instances, and they self-report to the cluster master using the kubelet binary that runs on every node. And although nodes are just Compute Engine instances, there are some image restrictions.
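To make the kubectl-to-master relationship concrete, here's a minimal sketch of connecting to an existing GKE cluster. The cluster name and zone are placeholders, not values from this course:

```shell
# Fetch credentials for a hypothetical GKE cluster; this configures kubectl
# to talk to that cluster's master (where the API server runs).
gcloud container clusters get-credentials demo-cluster --zone us-central1-a

# Every kubectl command is served by the API server on the cluster master.
kubectl get nodes      # lists the Compute Engine instances acting as nodes
kubectl cluster-info   # shows the master's API endpoint
```

These commands assume an authenticated gcloud session and an existing cluster, so treat them as a sketch rather than a copy-paste recipe.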
On a Kubernetes cluster, nodes are where we run our workloads. Workload is the term Kubernetes uses to represent pods and controllers. Since controllers are responsible for managing pods, and pods are responsible for running our containerized applications, another way to describe workloads is simply as containerized applications, which means nodes are where we run our containerized applications. In order for Kubernetes to run Docker containers, all of the nodes include a Google-managed container runtime.
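As a quick sketch of the controller-manages-pods relationship described above, a Deployment (one kind of controller) can be created imperatively. The deployment name, image, and replica count here are illustrative only:

```shell
# A Deployment is a controller that manages pods; the pods run the containers.
kubectl create deployment hello-web --image=nginx:1.25 --replicas=3

# The controller created three pods, scheduled by the master onto nodes.
# The NODE column shows which Compute Engine instance each pod landed on.
kubectl get pods -o wide
```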
One of the nice aspects of Kubernetes clusters is that they allow operators to use the same unit of deployment for different types of workload. It doesn't matter what's inside of a container, because containers are all handled the same way. And while containers are a standard unit of deployment, the deployment destination still matters.
Workloads have different requirements. Some might require GPUs, some might require high memory, etc. GKE enables us to use different hardware configurations by allowing us to create groups of identical nodes, where each group creates its nodes based on a template. GKE calls these groups node pools. Each cluster contains at least one pool, called the default node pool.
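For example, adding a high-memory pool alongside the default pool might look like the following sketch. The cluster name, pool name, machine type, and zone are all placeholder assumptions:

```shell
# Add a second node pool with a different hardware template so that
# memory-hungry workloads can land on beefier Compute Engine instances.
gcloud container node-pools create high-mem-pool \
  --cluster demo-cluster \
  --zone us-central1-a \
  --machine-type n2-highmem-4 \
  --num-nodes 2

# List the pools; the default pool appears alongside the new one.
gcloud container node-pools list --cluster demo-cluster --zone us-central1-a
```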
Since nodes are Compute Engine instances, they use a VPC for network connectivity, allowing clusters to leverage existing GCP networking functionality such as load balancers and firewalls.
Clusters route traffic between pods in one of two ways: route-based or VPC-native. Route-based routing uses a VPC network routing table; VPC-native uses alias IP ranges. This is a configuration option that cannot be changed after cluster creation. VPC-native is the newer implementation and provides the most functionality: it allows each pod to be assigned an IP address that's reachable from inside the current VPC as well as any connected VPCs, it allows firewall rules to apply to pod IPs, and it's the default option for new clusters.
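Since the routing mode is fixed at creation time, it's worth seeing where the choice is made. A minimal sketch, with a placeholder cluster name and zone:

```shell
# VPC-native is the default for new clusters; --enable-ip-alias makes the
# choice explicit. This cannot be changed after the cluster is created.
gcloud container clusters create demo-cluster \
  --zone us-central1-a \
  --enable-ip-alias
```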
Kubernetes uses persistent volumes as an abstraction over specific storage implementations. The implementation used by GKE is based on Compute Engine persistent disks. GKE supports two of Kubernetes' three access modes: it supports reads from multiple nodes and writes from a single node, though it doesn't support writes from multiple nodes.
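As a sketch of how those access modes surface in practice, here's a PersistentVolumeClaim applied via a heredoc. The claim name and size are illustrative:

```shell
# On GKE, persistent-disk-backed volumes support ReadWriteOnce (writes from
# a single node) and ReadOnlyMany (reads from many nodes), but not
# ReadWriteMany (writes from many nodes).
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce      # single-node writes; ReadOnlyMany is the other supported mode
  resources:
    requests:
      storage: 10Gi      # backed by a Compute Engine persistent disk
EOF
```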
So if we were to describe how GKE implements a Kubernetes cluster, we might say: "GKE places the control plane components inside of a Google-managed VM called the cluster master. Nodes are Compute Engine instances organized into pools of identically configured hosts. Nodes run inside of a VPC, which is responsible for pod-to-pod communication; VPC-native is the latest and default networking option, and route-based is the legacy implementation. And finally, Kubernetes persistent volumes can be backed by Compute Engine persistent disks."
Virtual machine instances are one of the fundamental cloud building blocks though their use requires us to consider how outages may impact system availability. Since a GKE cluster is built with virtual machines, we have to make those same considerations.
There are three types of cluster configuration which have an impact on availability. These are single-zone, multi-zone, and regional clusters.
A single-zone cluster runs both the cluster master and the nodes in the same zone, which means a zone-based outage will result in a full cluster outage. A multi-zone cluster runs nodes in multiple zones inside of a single region, though there's still only a single, zone-based cluster master. In the event of an outage, the impact will depend on which zone or zones are down: any available zones that are running existing workloads will remain unaffected. A regional cluster runs nodes in multiple zones, as well as multiple cluster master replicas. With multiple master replicas, the cluster is more resilient, though it also costs more due to those additional replicas.
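The distinction between the three types comes down to whether you pass a zone or a region at creation time. A hedged sketch, with placeholder names and locations:

```shell
# A zonal (single-zone) cluster: master and nodes live in one zone.
gcloud container clusters create zonal-demo --zone us-central1-a

# A regional cluster: master replicas and nodes spread across the
# region's zones, surviving any single-zone outage.
gcloud container clusters create regional-demo --region us-central1
```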
GKE abstracts the cluster types in the console a bit with a field labeled Location Type, where the options are zonal and regional. This is important because you can't change from a zonal to a regional cluster after the cluster is created. However, you can change between single-zone and multi-zone.
Alright, let's stop here. This has all been a high-level review of the GKE architecture and how it uses the underlying GCP functionality to implement a Kubernetes cluster. We'll continue to build on this throughout the course though for now, that's all for this lesson.
Thank you so much for watching and I will see you in the next lesson.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.