Kubernetes has become one of the most common container orchestration platforms. Google Kubernetes Engine makes managing a Kubernetes cluster much easier by handling most system management tasks for you. It also offers more advanced cluster management features. In this course, we’ll explore how to set up a cluster using GKE as well as how to deploy and run container workloads.
Learning Objectives
- What Google Kubernetes Engine is and what it can be used for
- How to create, edit and delete a GKE cluster
- How to deploy a containerized application to GKE
- How to connect to a GKE cluster using kubectl
Intended Audience
- Developers of containerized applications
- Engineers responsible for deploying to Kubernetes
- Anyone preparing for a Google Cloud certification
Prerequisites
- Basic understanding of Docker and Kubernetes
- Some experience building and deploying containers
In this lesson, I am going to show you how to create and edit Google Kubernetes Engine clusters. I will also show you how to connect to your clusters using the kubectl command line tool. This should give you everything you need to get started using Kubernetes on Google Cloud Platform.
The first thing you need to do is to log into the Google Cloud Console like I have. Once you have done that, you need to navigate to the Kubernetes Engine page by searching for “Kubernetes” or “GKE”. And then click on the link like so.
If this is your first time, you won’t have any clusters created yet, and your screen should look like this. The next step should be pretty obvious. Just click on either one of these “Create” buttons here. Every time you create a new cluster, you have to choose between a Standard and an Autopilot cluster.
In this course, we are going to stick with using Standard clusters. These more closely match a standard Kubernetes installation. Autopilot clusters basically allow Google to manage everything for you, and they do this by hiding a lot of details. Since I want to actually show you these details, we will cover Autopilot in another course. So if you are following along, go ahead and click on the first “Configure” button here.
Next, you will see the form for creating a cluster. There are quite a few options to choose from. And these options are spread across multiple screens. I am not going to try to go through every single option in detail. That would take way too long. Instead, I am going to cover the basics and we will mostly focus on configuring the worker nodes. Things like automation, networking, and security will be covered at another time.
Luckily, Google provides pretty good default values, so you can leave most of these fields alone. And you can always edit them later if you need to.
So our first step is to provide a name for your cluster. I will call this one “my-demo-cluster”. Now I want to jump ahead to the node configuration section called “node pools”. Here is where I can set the number of worker nodes for my cluster. And here is where I get to choose the type of machine to power each node. You will notice that the machine types here look like those available for creating a VM under Compute Engine.
I am going to set this cluster up to have 10 nodes. And then I am going to configure each node to have a basic, general-purpose CPU. Now I could optimize my nodes for CPU- or memory-intensive workloads. Or I could even add GPUs if I were planning to do something like graphics processing or machine learning.
Remember, creating a cluster with a lot of powerful nodes is going to be much more expensive than creating a cluster with fewer and less powerful nodes. So only provision for what you need. For this demonstration, I don’t want anything too crazy. So I’ll just pick something small like the e2-micro.
And that is pretty much it. I can go ahead and click on the “Create” button now to create my cluster. If you want to know how to create this cluster using the command line instead of the web console, you can click on this “Command line” button here. This will generate the equivalent command for the options I selected. So you can copy and run this wherever the Google Cloud SDK is installed. Or you can run this command in Cloud Shell by clicking here. I am going to keep using the web console, so I’ll click on “Create”.
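For reference, the generated command for a cluster like this would look something like the following. I am assuming a zone of us-central1-a here; yours will reflect whatever location you picked in the form.

    gcloud container clusters create my-demo-cluster \
        --zone us-central1-a \
        --machine-type e2-micro \
        --num-nodes 10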
Creating a cluster takes a while. It has to provision my 10 nodes and set up the control plane. So while we wait for that to become available, let me show you some more advanced node options.
When you are setting up a new cluster, you need to understand what kinds of workloads you will be running. This will help determine how you configure your nodes. So at this point, it seems like all your nodes have to be the same type. However, what if you want to run many different types of containers? And what if some of them need access to a GPU and others do not?
Luckily we can support that through something called Node Pools. By default, every cluster starts out with a single node pool. A node pool is a group of nodes that all have the same configuration. However, you can have more than one node pool. So you see here I have a node pool called “default-pool”. Let me rename this to “pool-1”. And now I can also add a second pool by clicking up here on “Add Node Pool”.
This allows me to have two different types of nodes in the same cluster. So for example, I can set pool-1 to have 5 general-purpose machines. And I can set pool-2 to have 2 powerful compute-optimized machines. This would allow my cluster to run lots of small containers as well as a handful of more demanding ones. I don’t have to create multiple clusters for different kinds of workloads. I can run everything on a single cluster if I wish. I can even add a third node pool that has just a single node optimized for running machine learning jobs.
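If you prefer the command line, adding a second pool to an existing cluster looks roughly like this. The pool name, machine type, and zone here are just example values to illustrate the idea.

    gcloud container node-pools create pool-2 \
        --cluster my-demo-cluster \
        --zone us-central1-a \
        --machine-type c2-standard-4 \
        --num-nodes 2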
So node pools are there to provide a lot of flexibility. You can have multiple node types to support different workloads. And they also allow you to easily shrink or expand your clusters as needed. If I had a pool of 10 nodes and then decided I only needed 5, I could handle that. I could create a new pool with 5 nodes, migrate my workloads over to the new pool, and then delete the old pool of 10 nodes.
There is one more thing I want to show you. Up until this point, we have been assuming that your workloads will be fairly predictable. However, this is not always the case. Sometimes you will need to be able to run short-term or ad-hoc jobs on your cluster. And there will be other times when your workloads might require more processing power or memory than normal. Luckily, GKE has options to support autoscaling.
Under the “Automation” tab here you can enable Vertical Pod autoscaling. Vertical Pod autoscaling will automatically adjust the CPU and memory resources allocated to your pods as needed. As you continue to develop your code and add new features, your containers will often require different amounts of resources. So that you don’t have to constantly re-benchmark your code to determine what those new requirements are, you can let GKE handle it for you. This means your pods get exactly the resources they need, and you are not wasting anything.
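As a reference, you can also turn this on for an existing cluster from the command line with something like this, using the demo cluster name and an assumed zone:

    gcloud container clusters update my-demo-cluster \
        --zone us-central1-a \
        --enable-vertical-pod-autoscaling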
There is also horizontal pod autoscaling, which automatically increases or decreases the number of pods in response to a workload's CPU or memory consumption. Horizontal scaling is not a cluster setting; you set it up when defining a workload.
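As a quick sketch of what that looks like, you could attach a Horizontal Pod Autoscaler to a hypothetical Deployment named "my-app" with something like this:

    # keep average CPU usage around 50%, running between 2 and 10 pods
    kubectl autoscale deployment my-app --cpu-percent=50 --min=2 --max=10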
In addition to automatically scaling your pods, GKE can also automatically scale your nodes. Here is the option to do just that. Node autoscaling allows GKE to dynamically adjust the number of nodes in your cluster to meet demand. You get to set the minimum and maximum number of nodes here. Fewer workloads mean fewer nodes, and that results in lower costs. More workloads will create more nodes and higher costs, but your cluster will be able to handle the extra load without getting overwhelmed.
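From the command line, enabling node autoscaling on an existing pool would look something like this, again using the demo cluster name and an assumed zone:

    gcloud container clusters update my-demo-cluster \
        --zone us-central1-a \
        --node-pool pool-1 \
        --enable-autoscaling \
        --min-nodes 1 \
        --max-nodes 10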
So to review: Vertical pod autoscaling automatically adjusts the resources assigned to your pods. Horizontal pod autoscaling can adjust the number of pods that are running. And node autoscaling automatically adjusts the number of nodes in your node pool. With these three options, you can use your cluster in the most efficient manner possible.
Alright, it looks like my original cluster is ready to use. I can click on the name to view the current settings. Here you can see all my selections. And then I can make any changes here as well. So if I wanted, I could enable Vertical Pod autoscaling by clicking on the edit button. Remember Horizontal Pod autoscaling is a workload setting, not a cluster setting.
I can review my node pools by clicking on the Nodes tab. And if I want, I can enable node autoscaling by clicking here. And of course, I can add a new node pool too if I wish. I just have to give it a name, specify the number of nodes, and then pick the machine type. So remember, if you didn’t make the correct selections when setting up your cluster the first time, you can easily change them later. So now you know how to create a Kubernetes cluster and configure node pools.
Now, a cluster is not useful unless you can also connect to it and send it commands. So for that, you will typically use the kubectl command. So let me show you how to access kubectl and how to use it to connect to your new cluster.
You can run kubectl in one of two ways: from Cloud Shell or from your local machine.
Let’s start out with Cloud Shell. Here the command is already installed for you. You just need to configure it to point to the correct cluster. The first thing I need to do is to set the project ID. In this example, it is already set, but it does not hurt to make sure that I am pointing at the correct project. I just run this command and then add in my project ID. Now when you are doing this, make sure to paste in the correct project ID. Don’t try to use “Daniel-Sandbox”. That won’t work for you.
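The command in question is the standard gcloud project setting, with your own project ID substituted in:

    # replace my-project-id with your own project ID
    gcloud config set project my-project-id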
OK, so now that it knows which project to use, next I need to tell it which cluster to use. And I can do that by using this command. I just need to specify the name of the cluster, and I need to specify the zone or region the cluster lives in as well. In this case, my cluster is a zonal cluster. You can tell because the location ends with a letter. A regional cluster would just be something like “us-central1”.
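For a zonal cluster like mine, the command looks something like this; swap in your own cluster name and location:

    # point kubectl at the cluster (zonal example; use --region for a regional cluster)
    gcloud container clusters get-credentials my-demo-cluster --zone us-central1-a

    # verify by listing the worker nodes
    kubectl get nodes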
So this will point the kubectl command to the correct cluster. It also handles authentication. Now any command I run should be directed to my new cluster. So let me try to list out my nodes. And here are the 10 nodes in my cluster. So now I can run any kubectl commands I want. Ok, now you know how to set up access from Cloud Shell.
Next, I am going to show you how to set up access from your local machine. This requires a little extra work. First, you need to install and configure the Google Cloud SDK. Once you have the “gcloud” command working, you can install kubectl by running the following commands. First, I am going to use this command to download the latest updates. I want to make sure that I have the latest fixes and prevent any compatibility issues. Then I can install kubectl by running this command.
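Assuming you installed the SDK directly rather than through a system package manager, those two commands look like this:

    # pull the latest Cloud SDK updates
    gcloud components update

    # install the kubectl component
    gcloud components install kubectl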
Once this finishes, I can then follow the same procedures we used for Cloud Shell. I set my project ID with this command. And then I point to my new cluster using this command. Finally, let me verify things by listing out my nodes. And there we go. I can now issue all the kubectl commands I want. So at this point, I am ready to start deploying workloads.
Let me show you one last thing. I want to quickly demonstrate how to delete a cluster. Remember, as long as your cluster is running, you are going to get charged for it. When you are experimenting with GKE, it’s a good idea to delete your clusters as soon as you are done with them. You can either delete your cluster using the web console by clicking on the name and then on “Delete”. Or you can delete the cluster from the command line by using this command. And that’s it. Now you know how to create, edit, and delete a cluster. And you know how to connect to it and issue commands using kubectl.
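For reference, the command-line delete would look something like this, again using the demo cluster’s name and an assumed zone:

    gcloud container clusters delete my-demo-cluster --zone us-central1-a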
Daniel began his career as a Software Engineer, focusing mostly on web and mobile development. After twenty years of dealing with insufficient training and fragmented documentation, he decided to use his extensive experience to help the next generation of engineers.
Daniel has spent his most recent years designing and running technical classes for both Amazon and Microsoft. Today at Cloud Academy, he is working on building out an extensive Google Cloud training library.
When he isn’t working or tinkering in his home lab, Daniel enjoys BBQing, target shooting, and watching classic movies.