Introduction to Google Kubernetes Engine (GKE)

Cluster Creation



Kubernetes has become one of the most common container orchestration platforms. It has regular releases, a wide range of features, and is highly extensible. Managing a Kubernetes cluster requires a lot of domain knowledge, which is why services such as GKE exist. Certain aspects of a Kubernetes cluster vary based on the underlying implementation.

In this course, we’ll explore some of the ways that GKE implements a Kubernetes cluster. Having a basic understanding of how things are implemented will set the stage for further learning.

Learning Objectives

  • Learn how Google implements a Kubernetes cluster
  • Learn how GKE implements networking
  • Learn how GKE implements logging and monitoring
  • Learn how to scale both nodes and pods

Intended Audience

  • Engineers looking to understand basic GKE functionality


To get the most out of this course, you should have a general knowledge of GCP, Kubernetes, Docker, and high availability.


Hello and welcome. In this lesson, we're going to focus on cluster creation. By the end of the lesson, you'll be able to list the available methods for creating a cluster; describe how to create zonal, multi-zonal, and regional clusters; describe the purpose of auto-upgrades and release channels; and list the supported host operating systems.

It's safe to say that clusters are the core of GKE; they put the K in GKE. There are two primary ways of creating a GKE cluster: the console or the SDK. Both use the REST API behind the scenes, which you could also use directly should you have a use case for that.

There are a lot of cluster configuration options. Some are required, some are optional, some can be changed during the lifecycle of a cluster, and others can't be changed after it's created. Let's use the cluster creation form inside the console to serve as a visual representation of the different configuration options. The cluster name is a required field. It can't be changed after creation.

This setting here, titled Location Type, is how we specify availability. Recall that there are three types of cluster availability: single-zone, multi-zone, and regional. Selecting zonal works for both single-zone and multi-zone clusters. This is another setting you cannot change after cluster creation.

For now, we're going to leave this set to zonal and review the settings for a single-zone cluster. This drop-down here is contextual; it's based on the location type selection. Since we've selected zonal, we're required to specify the zone where our cluster master and our nodes are going to run. Let's ignore the links on the side here that we haven't yet explored and pretend that we just clicked the create button with the settings specified and the defaults you don't see. This would create a single-zone cluster where the master and three nodes all run inside the us-central1-c zone.
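The same single-zone cluster could be created with the SDK. A minimal sketch using the gcloud CLI — the cluster name `demo-cluster` is a placeholder:

```shell
# Create a single-zone cluster: the master and all three nodes
# run in us-central1-c. "demo-cluster" is a placeholder name.
gcloud container clusters create demo-cluster \
    --zone us-central1-c \
    --num-nodes 3
```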

There's no way to tell from this page here how many nodes are actually going to be created and that's because these are just the high-level basic cluster settings. The number of nodes is set per node pool. You can see that if we click on the default node pool, it shows us a size of three nodes.

So that's basically it for a single-zone cluster. Let's change this to a multi-zone cluster. Back on the basic settings form, the location type stays set to zonal. Checking this box allows us to specify the additional zones where nodes are going to run. If we look at the cluster size under the node pool, it now shows six total nodes, because we've specified that we want three nodes per zone. The first zone is the master zone, which contains the master and three nodes. The second zone is the zone we selected specifically, and it contains three nodes and zero masters, because zonal clusters only ever have one cluster master.
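From the SDK, the additional zones are expressed with the `--node-locations` flag. A sketch, assuming the same placeholder cluster name and example zones:

```shell
# Multi-zone: the location type is still zonal (--zone), and
# --node-locations lists every zone that should run nodes. The list
# must include the cluster's own zone, so this creates 3 nodes in
# each of two zones (6 total) with a single master in us-central1-c.
gcloud container clusters create demo-cluster \
    --zone us-central1-c \
    --node-locations us-central1-c,us-central1-f \
    --num-nodes 3
```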

Let's change this to a regional cluster by setting the location type to regional. Notice that the drop-down changes to reflect that this is a region rather than a zone. By default, GKE distributes the cluster master replicas, as well as the nodes, across three zones of its choosing, though we can override that and specify the zones ourselves. A regional cluster is similar to a multi-zone cluster in that nodes are deployed to each specified zone. The difference is that regional clusters replicate the master to each zone as well.
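With the CLI, a regional cluster simply swaps `--zone` for `--region`. A sketch with placeholder names:

```shell
# Regional: GKE replicates the master across three zones and, by
# default, places nodes in three zones of its choosing within
# the region. --num-nodes is per zone, so this yields 9 nodes.
gcloud container clusters create demo-cluster \
    --region us-central1 \
    --num-nodes 3
```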

Alright, let's pause here and address the second learning objective. The three types of cluster availability are single-zone, multi-zone, and regional clusters.

A basic description of how to create each might be something like this. Single- and multi-zone clusters both require the location type to be set to zonal. To go from a single-zone to a multi-zone cluster, all we have to do is specify the additional zones that we want to use to deploy nodes. Regional clusters require us to set the location type to regional, specify a region, and we can optionally manually select the specific zones. Otherwise, GKE will automatically select three zones within that region.

By default, the master version for the cluster is set to use a static version, where we can select some of the different current versions of Kubernetes.

Kubernetes is a quickly evolving piece of software with regular releases, and managed upgrades are one of Google's selling points for GKE. So let's talk about how that happens.

A cluster upgrade consists of changes to the cluster master as well as to nodes. Google upgrades these independently. Generally, when an upgrade occurs, the master is upgraded first, followed by the nodes.

For zonal clusters, we're unable to interact with the control plane until the upgrade completes. For regional clusters, the cluster master replicas are upgraded one at a time in a rolling fashion and the same is true for the nodes in the node pools; one at a time rolling through the node pool. Google monitors the cluster upgrades and only involves SREs if there's an upgrade problem.

Google manages cluster master upgrades automatically, selecting the version to use based on usage statistics for each version of Kubernetes running on GKE. In this way, Google attempts to identify the most reliable versions to become release targets. They do provide a list of quite a few options, so if you need a higher version than the default for some new functionality, you can request a manual upgrade.

Under the cluster automation section, we have settings that allow some level of control over cluster maintenance. We can specify maintenance windows and exclusions. Keep in mind, these are not hard-and-fast rules; rather, they're strong preferences. In some cases, such as critical security patches, Google may ignore these windows.
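These same settings can be applied from the SDK. A sketch, assuming a placeholder cluster name, window times, and exclusion dates:

```shell
# Set a recurring weekend-night maintenance window. The recurrence
# string uses RFC 5545 (iCalendar) syntax; times are placeholders.
gcloud container clusters update demo-cluster \
    --zone us-central1-c \
    --maintenance-window-start 2024-01-06T03:00:00Z \
    --maintenance-window-end   2024-01-06T08:00:00Z \
    --maintenance-window-recurrence "FREQ=WEEKLY;BYDAY=SA,SU"

# Add a maintenance exclusion covering a hypothetical change freeze.
gcloud container clusters update demo-cluster \
    --zone us-central1-c \
    --add-maintenance-exclusion-start 2024-12-20T00:00:00Z \
    --add-maintenance-exclusion-end   2024-12-27T00:00:00Z
```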

So while automatic upgrades are the default, we can trigger manual upgrades via the console or SDK. Keep in mind that manual upgrades ignore the maintenance window, and clusters are only upgradeable in increments of one minor version. Recall that I mentioned the cluster master and nodes are upgraded independently; nodes are set to auto-upgrade by default, though that can be disabled per node pool. So manual cluster upgrades allow us to upgrade one minor version at a time, on demand.
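A manual upgrade via the SDK might look something like this — the version number and pool name are placeholders, and the master and nodes are upgraded as separate steps:

```shell
# List the versions currently available in this zone.
gcloud container get-server-config --zone us-central1-c

# Upgrade the master first, one minor version at a time.
gcloud container clusters upgrade demo-cluster \
    --zone us-central1-c \
    --master \
    --cluster-version 1.27

# Then roll the node pool up to the master's version.
gcloud container clusters upgrade demo-cluster \
    --zone us-central1-c \
    --node-pool default-pool
```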

Automatic upgrades allow us to specify a suitable window and have the cluster upgraded without having to request it. And should we want to, we can disable auto-upgrades for nodes at the pool level. Auto-upgrades using a static version are fine, though not all that predictable.

So GKE provides a supplemental feature called release channels that allows us to specify the upgrade cadence. There are three mutually exclusive channels: Rapid, Regular, and Stable. Rapid is for non-production clusters; it's upgraded weekly with the latest changes. Regular is for production clusters, and it's upgraded at least monthly. Stable is for production clusters that prioritize stability over anything else, and it's only upgraded every few months.
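Enrolling a cluster in a channel is a single flag at creation time. A sketch with a placeholder cluster name:

```shell
# Create a cluster on the Regular channel; GKE then picks versions
# and upgrade timing according to that channel's cadence.
gcloud container clusters create demo-cluster \
    --zone us-central1-c \
    --release-channel regular
```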

Okay, if we were to try and describe release channels, we might say something such as, "Release channels allow us to specify how often these upgrades should be performed and how much potential instability we're willing to accept."

Changing topics, let's cover nodes a little bit more. GKE groups nodes into pools, and each node in a pool is the same. We can specify the node version, which is currently set to the same value as the cluster master. We can also specify a node pool's size, and remember, this is per zone. Node pools support autoscaling, allowing nodes to be added and removed dynamically, and nodes inside a pool are created based on a node template. This is basically the same form you'd use to create a Compute Engine instance.
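Adding a pool with autoscaling from the SDK might be sketched like this — the pool name, machine type, and scaling bounds are all placeholders:

```shell
# Add an autoscaling node pool to an existing cluster.
# --num-nodes, --min-nodes, and --max-nodes are all per zone.
gcloud container node-pools create worker-pool \
    --cluster demo-cluster \
    --zone us-central1-c \
    --machine-type e2-standard-2 \
    --num-nodes 3 \
    --enable-autoscaling \
    --min-nodes 1 \
    --max-nodes 5
```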

One of the differences is that we're limited in the operating system images we can use here. Specifically, we have Container-Optimized OS (COS), Ubuntu, and Windows. By default, nodes use COS, Google's own container-optimized OS. COS is based on Chromium OS and is, according to Google, more stable and secure than the other images, which is why it's the Google-recommended option.

Both COS and Ubuntu have containerd variants that use containerd as the container runtime; if you're unsure which one to use, start with COS and only change if needed. Under the metadata section, we can add Kubernetes labels to the nodes in a pool. These make it easy to have our workloads run on nodes with a matching label.
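Both the image type and node labels can be set per pool from the CLI. A sketch — the pool name and label key/value are placeholders:

```shell
# A pool using the containerd variant of Container-Optimized OS,
# with a Kubernetes label applied to every node in the pool.
# Workloads can target these nodes with a matching nodeSelector.
gcloud container node-pools create cos-pool \
    --cluster demo-cluster \
    --zone us-central1-c \
    --image-type COS_CONTAINERD \
    --node-labels workload=batch
```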

Some advice on cluster creation: if you're new to Kubernetes, be especially mindful of the number of nodes you create, as well as the machine type. Even reasonably small machines get rather expensive when you're running several, so make sure you know how many nodes are in your pool and how many zones they're going to be deployed to. This networking section allows us to configure the network settings.

Clusters are public by default, meaning that the master and nodes get public IP addresses, though there is a private option. The default networking mode for a new cluster is VPC-native. We'll cover that more later in the course, so we're not going to dive in now. And with that, I think we've seen all of the settings worth covering for now. I'm not going to create this cluster, because waiting to see that the settings we just entered match what's on the screen isn't all that valuable.
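For reference, the private option can also be selected at creation time via the SDK. A sketch — the cluster name and master CIDR are placeholders, and private nodes require VPC-native networking:

```shell
# A VPC-native cluster with private nodes: --enable-ip-alias selects
# VPC-native networking, and private nodes receive no public IPs.
# The master CIDR must be a /28 that doesn't overlap the VPC.
gcloud container clusters create private-cluster \
    --zone us-central1-c \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28
```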

So with that, thank you so much for watching and I will see you in the next lesson.

About the Author

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.