Introduction to Google Cloud Platform Container Engine (GKE)

Cluster Management



Course Description:

GKE is Google’s container management and orchestration service that allows you to run your containers at scale. GKE provides a managed Kubernetes infrastructure that gives you the power of Kubernetes while allowing you to focus on deploying and running your container clusters.

Intended audience:

  • This course is for developers or operations engineers looking to deploy containerized applications and/or services on Google’s Cloud Platform.
  • Viewers should have a basic understanding of containers. Some familiarity with the Google Cloud Platform will also be helpful but is not required.

Learning Objectives:

  • Describe the containerization functionality available on Google Cloud Platform.
  • Create and manage a container cluster in GKE.
  • Deploy your application to a GKE container cluster.

This Course Includes:

  • 45 minutes of high-definition video
  • Hands-on demos

What You'll Learn:

  • Course Intro: What to expect from this course
  • GKE Platform: In this lesson, we’ll start digging into Docker, Kubernetes, and the CLI.
  • GKE Infrastructure: Now that we understand a bit more about the platform, we’ll get into Docker images and Kubernetes orchestration.
  • Cluster Creation: In this lesson, we’ll walk through the steps necessary to create a GKE cluster.
  • Application and Service Publication: In this live demo we’ll create a Kubernetes pod.
  • Cluster Management: In this lesson, we’ll discuss cluster updates and rights management.
  • Summary: A wrap-up and summary of what we’ve learned in this course.

Hello, in this section we're gonna talk about cluster management, and I'll pose the question: what can we actually update once we've created a GKE cluster? The truth is that most of the baseline configuration for a cluster is immutable after it has been created. What can be updated is almost exclusively related to node pools and the nodes within those pools. So clusters, as we've talked about before, consist of a cluster master, and that master is managed by Google.

But the multiple nodes associated with that master are managed by you and upgraded on your own schedule. Google automatically updates the master to the current version of Kubernetes, but the nodes associated with that master you can upgrade yourself whenever you choose. The only caveat there is that all nodes should be kept within two minor versions of their master.
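As a quick illustration of that version-skew rule, here's a small shell check. The version strings are made-up examples, not pulled from a live cluster.

```shell
# Returns success when a node version is no more than two minor versions
# behind the master, per the skew rule described above.
within_skew() {
  master_minor=$(echo "$1" | cut -d. -f2)
  node_minor=$(echo "$2" | cut -d. -f2)
  skew=$((master_minor - node_minor))
  [ "$skew" -ge 0 ] && [ "$skew" -le 2 ]
}

within_skew "1.9.7" "1.7.12" && echo "node is supported"   # two behind: OK
within_skew "1.9.7" "1.6.4"  || echo "node must upgrade"   # three behind: too old
```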

You're going to achieve this upgrade using the upgrade command, passing in just a few options, the primary ones being the cluster name and the cluster version. If you decide to omit the cluster-version flag, all nodes within the cluster will be upgraded to the same version as the master. If you do use it, as we just mentioned a second ago, you must specify a version within two minor releases of the master's version, at the highest patch number. Now when a container cluster is upgraded, this is the flow that it goes through.
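As a sketch, those two variants of the upgrade command look like this; the cluster name, zone, and version are placeholder values:

```shell
# Upgrade all nodes in the cluster to the same version as the master
gcloud container clusters upgrade my-cluster --zone us-central1-a

# Or pin the nodes to a specific version, which must be within two
# minor releases of the master's version
gcloud container clusters upgrade my-cluster \
    --zone us-central1-a \
    --cluster-version 1.9.7
```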

An upgrade works by deleting node instances one at a time and then replacing them with new instances running the desired Kubernetes version. This allows for optimal uptime while we're doing our upgrade. Before an instance is deleted, it's marked as unschedulable and drained, and the reverse happens when an instance is available again and ready to be put back into the node pool: it is marked as schedulable.
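GKE performs that drain for us automatically, but the manual equivalent with kubectl makes the flow concrete; the node name here is a placeholder:

```shell
# Mark the node unschedulable so no new pods land on it
kubectl cordon gke-my-cluster-default-pool-abc123

# Evict the pods running on the node
kubectl drain gke-my-cluster-default-pool-abc123 --ignore-daemonsets

# Once the replacement is ready, mark it schedulable again
kubectl uncordon gke-my-cluster-default-pool-abc123
```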

When a node instance is shut down as part of an upgrade or downgrade, its replacement will always be assigned the same instance name. Now for the caveat, or I guess the situation, around persistent data. For the most part, any data in a hostPath or emptyDir volume in a pod is gonna be deleted during an upgrade or downgrade. So if you need to preserve data across upgrades or downgrades, you need to use a pod with a GCE persistent disk volume.
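A minimal sketch of that pattern: create the disk independently of any instance, then mount it in a pod. The disk name, zone, size, and pod spec here are illustrative placeholders:

```shell
# Create a persistent disk that lives separately from any node instance
gcloud compute disks create my-data-disk --size 10GB --zone us-central1-a

# Mount it in a pod via a gcePersistentDisk volume, so the data
# survives node upgrades and downgrades
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pd-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: data-volume
  volumes:
  - name: data-volume
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
EOF
```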

Now this is different from the persistent disk created by default as the boot disk on Compute Engine instances. Boot disks are also deleted along with the instances as part of the upgrade or downgrade. But a GCE persistent disk volume is a disk created separately from all of your instances.

The combination of GCP IAM and GKE gives us a superior level of enterprise access control. We get that nice fine-grained access control where we don't have to grant access at the project level; you can grant resource-level access at the level of granularity that you need. For example, you can create a control policy that grants the subscriber role on a particular Cloud Pub/Sub topic.

That sounds out of scope for GKE, but it does give you an idea of the level of granularity you can get to. You also have several different ways to manage access: via the web in the GCP console, programmatically, and via the command line, so you can automate it. You also have a nice built-in audit trail: to ease compliance for your organization, you get a full audit trail available to admins without really any additional effort. You've also got dozens of GKE-specific permissions and associated roles.

This graphic doesn't really do the scale of permissions justice, but you can see for GKE at the top level there are those four main roles, and then you've got several dozen permissions that roll up to each of those. So you can use these roles if they fit what you need, and you also have the flexibility to take these dozens and dozens of permissions and formulate your own custom roles.
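To make the custom-role idea concrete, here's a sketch of assembling one from GKE permissions and granting it; the project ID, role ID, permission list, and member are all placeholder examples:

```shell
# Build a custom role from a handful of GKE (container.*) permissions
gcloud iam roles create gkeViewerPlus --project my-project \
    --title "GKE Viewer Plus" \
    --permissions container.clusters.get,container.clusters.list,container.pods.list

# Grant the custom role to a user at the project level
gcloud projects add-iam-policy-binding my-project \
    --member user:dev@example.com \
    --role projects/my-project/roles/gkeViewerPlus
```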

Now within GCP there's the concept of IAM service accounts, and these service accounts are used in a few different scenarios where a specific user interaction might not be involved. CI/CD automation is one key scenario here, as is service-to-service interaction, where one GCP service is interacting with another.

An example of this is compute services interacting with Cloud Storage. Service accounts can be created in the portal and assigned roles or permissions just like user accounts. Now when we're authenticating against Kubernetes, or really I should say for Kubernetes, this first command that we see, gcloud auth activate-service-account, authorizes access to Google Cloud Platform using a service account. So this is a pretty straightforward call: we're saying we want to activate a service account, and we're giving the path to the key, which is what's downloaded as our private key when we create the service account.
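Written out, that first call looks like the following; the service account email and key path are placeholders:

```shell
# Authorize gcloud using the service account's downloaded private key
gcloud auth activate-service-account ci-deployer@my-project.iam.gserviceaccount.com \
    --key-file /path/to/key.json
```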

As far as roles go, if you only need to deploy to GKE, Container Engine Developer is probably enough. Now when we look at the second command up here, we've got gcloud container clusters get-credentials, and we're passing in the cluster, the zone, and the project. What happens here is we actually fetch the credentials, and they're saved in a Kubernetes configuration file. Then when we execute Kubernetes commands, they'll use these credentials, and this is especially important when you think about automated CI/CD scenarios.
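And that second call, sketched out with placeholder values for the cluster, zone, and project:

```shell
# Save the cluster's credentials into the kubeconfig file, so
# subsequent kubectl commands authenticate as the service account
gcloud container clusters get-credentials my-cluster \
    --zone us-central1-a --project my-project

# kubectl now uses those stored credentials
kubectl get pods
```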

About the Author
Steve Porter
Cloud Architect

Steve is a consulting technology leader for Slalom Atlanta, a Microsoft Regional Director, and a Google Certified Cloud Architect. His focus for the past 5+ years has been IT modernization and cloud adoption with implementations across Microsoft Azure, Google Cloud Platform, AWS, and numerous hybrid/private cloud platforms. Outside of work, Steve is an avid outdoorsman spending as much time as possible outside hiking, hunting, and fishing with his family of five.