GKE is Google’s container management and orchestration service that allows you to run your containers at scale. GKE provides a managed Kubernetes infrastructure that gives you the power of Kubernetes while allowing you to focus on deploying and running your container clusters.
- This course is for developers or operations engineers looking to deploy containerized applications and/or services on Google’s Cloud Platform.
- Viewers should have a basic understanding of containers. Some familiarity with the Google Cloud Platform will also be helpful but is not required.
- Describe the containerization functionality available on Google Cloud Platform.
- Create and manage a container cluster in GKE.
- Deploy your application to a GKE container cluster.
This Course Includes:
- 45 minutes of high-definition video
- Hands-on demos
What You'll Learn:
- Course Intro: What to expect from this course
- GKE Platform: In this lesson, we’ll start digging into Docker, Kubernetes, and the CLI.
- GKE Infrastructure: Now that we understand a bit more about the platform, we’ll get into Docker images and Kubernetes orchestration.
- Cluster Creation: In this lesson, we’ll demo through the steps necessary to create a GKE cluster.
- Application and Service Publication: In this live demo we’ll create a Kubernetes pod.
- Cluster Management: In this lesson, we’ll discuss how you update a cluster and rights management.
- Summary: A wrap-up and summary of what we’ve learned in this course.
Hello and welcome back. In this section we're going to talk about cluster creation. At this point we're going to start our journey toward actually running a cluster on Google Cloud Platform Container Engine, and the first step is to create the GKE cluster that you will later deploy your application and services to and configure using Kubernetes.
As we go through creating our cluster there's a well-defined flow, and this graphic shows the high-level steps we're about to go through in order to create a GKE container cluster. First things first, we create our Node service. Then we create our Docker image and deploy our Node service into that image. We upload that image to Container Registry on GCP, and then we actually create the GKE cluster that's going to run the image we just uploaded, containing the very simple Node service we're about to create.
So first things first, we're going to create a local Docker image, and the simplest way to do this is to execute the docker build CLI command against a Dockerfile. We're going to move over to the terminal to do this right now in real time. Now we're over in our terminal window.
Before we get started building our local Docker image, I want to show you what we're going to be working with. To do that I'm going to move over to Visual Studio Code, where I've got my Dockerfile and a very simple server.js Node file. First off, for our Dockerfile you'll see that it's got just the very basic components we talked about earlier: the base image, which is node:6.9.2; we're going to expose port 8080; we're going to copy our server.js file over; and once we've got that copied over, we're actually going to start our Node server with server.js as the input.
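The Dockerfile just described can be sketched as follows — the exact syntax is a reconstruction from the narration, not a copy of the demo file:

```dockerfile
# Base image: Node 6.9.2
FROM node:6.9.2

# The service listens on port 8080
EXPOSE 8080

# Copy the service code into the image
COPY server.js .

# Start the Node server with server.js as the input
CMD ["node", "server.js"]
```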
So from the beginning we're going to have our base image, we're going to expose the port, and then we're going to serve up a Node server on that. Our Node service is very simple: if we flip over to server.js, you'll see that we're simply exposing an HTTP endpoint and writing back a 200 Hello World response — pretty much the simplest service you can create using Node. And we're going to expose this via our Dockerfile. Now we're going to move over to our terminal window, and you'll see that I'm already in the right folder. I'm in gke-intro, and if I do a quick ls you'll see that I've got my Dockerfile, a YAML file which we'll talk about in just a minute, and my server.js Node file. Here all we're going to do is a very simple docker build with a period — the current directory — as the build context, and if I execute that, we're done. And we're done very quickly because I've already got this in my cache, so the components of the base image are already local.
If I did not have the base image cached, it would take a while to download it in steps and then compress that together into the actual image. So it went very fast because I already had that image locally, and all Docker had to do was essentially run steps three and four: copy the server.js file over and set up the node command to run it. Now we're ready to test and make sure that the Docker image built correctly locally, and we're going to do that just by listing our local Docker images.
So we do that simply by using the docker images CLI command, which lists the images we have locally in our Docker repository. You can now see that we do have the node image with tag 6.9.2, along with its image ID, when it was created, and its size. So now we're ready to upload it to Container Registry. As I just mentioned, our next step is to upload our image to GCP Container Registry. This registry provides you with a secure, reliable, and performant location to store all your container images, and Container Registry integrates seamlessly with popular continuous delivery pipelines.
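The terminal steps above boil down to two commands, assuming Docker is installed and you're in the directory containing the Dockerfile:

```shell
# Build an image from the Dockerfile in the current directory;
# the trailing "." is the build context.
docker build .

# List local images to confirm the build succeeded.
docker images
```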
So you can automatically build and publish container images to Container Registry from your source repository. You can also trigger builds by source code or tag changes in GCP's repositories, or in GitHub or Bitbucket. You can also run unit tests, export artifacts, and more as part of your own CI/CD pipelines. What we're going to do now is go back to the gcloud CLI, as well as Docker, and take the couple of steps needed to upload our container image to Container Registry. Specifically, the first thing we're going to do is tag the image so it's recognizable by GKE and Container Registry, and then we're going to push that image up to our Container Registry in Google Cloud.
Now that we've got our local Docker image ready, the next step is to actually upload it to Container Registry on GCP. First of all, let's make sure that our image is in fact built and ready to go — we'll do a quick docker images. Okay, we see our node image with the tag 6.9.2; it's ready to go. So the first thing we need to do is tag that image so it has an identifier that is known to GCP when we upload it. That's a docker command: docker tag. We need to give it the image that we want to tag, so I'm going to cheat and copy and paste the image identifier over, and then I'm going to give it a tag. These tags always start with gcr.io, then our project ID — for us that is cloudacademy-exporter — and then an identifier for this specific image.
We'll call this gke, and just to exercise the options we'll tag it v2. We'll execute that command, then do another docker images just to make sure that worked. Okay, so now we've actually got it tagged — you can see that under the repository and tag columns. We've got that same image, which you'll now see twice: the original image and the one that's tagged. Now that we've got it tagged, the next step is to actually push it up into Container Registry. I'll pop over into the browser so we can see Container Registry in the GCP console. Right now it's empty; we don't have any images stored in the registry yet.
If I hit refresh — to prove it — nothing's there. So now we're going to go back over to our terminal window and actually push the image. This is a gcloud command: gcloud docker -- push, followed by the image identifier, which is what we just tagged it as. I'm going to copy and paste the tag just to make this easy, and now execute the command. What this is doing is very similar to when we built the Docker image locally: it's taking portions of the image and streaming them up to Container Registry.
Depending on the size of the image and your bandwidth, this could take anywhere from a couple of seconds, like it did for us, to a couple of minutes. The command has now executed successfully — we got a successful return code — and we can prove that it worked by going back over to Container Registry and refreshing. What we'll now see is that we have a gke folder with our image uploaded into it. This image is now ready to be used by Container Engine on GCP. Now we're actually ready to create our GKE cluster, and this is going to consist of the components that we spoke of before, right?
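The tag-and-push sequence can be sketched as follows — assuming Docker and the gcloud CLI are installed, and substituting your own image ID and project ID (cloudacademy-exporter is my rendering of the project name spoken in the demo):

```shell
# Tag the local image with a Container Registry identifier:
#   gcr.io/<project-id>/<image-name>:<tag>
docker tag <IMAGE_ID> gcr.io/cloudacademy-exporter/gke:v2

# Confirm the new tag shows up alongside the original image.
docker images

# Push the tagged image to Container Registry via gcloud-authenticated Docker.
gcloud docker -- push gcr.io/cloudacademy-exporter/gke:v2
```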
We've got the Kubernetes master API, and then we've got Compute Engine VM worker nodes underneath that, which are generated by Google and managed by Kubernetes. This operation can be done using the CLI or the GCP console, and it's important — so that the data stays close to the images — that you create your cluster in the same zone where your Container Registry storage bucket was created. We're going to demo this via the console so you can see the UI, but it's important to note that you can do this via either the console or the CLI. Now we're over in our GCP console and we're ready to create our very first cluster.
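For reference, the console walkthrough that follows maps onto a single gcloud command. This is a hedged sketch using the values chosen later in the demo (cluster name gke-1, zone us-east1-b, three nodes of 1 vCPU / 3.75 GB each):

```shell
# Create a three-node GKE cluster in the same zone as the
# Container Registry storage bucket.
gcloud container clusters create gke-1 \
  --zone us-east1-b \
  --machine-type n1-standard-1 \
  --num-nodes 3
```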
So I'm going to take the hamburger menu and navigate over to Container Engine, and when we get there we'll see a clean slate — nothing has been created yet. So we're going to create our very first cluster. We're going to do this using the Web UI, though it could also be done just as easily using the CLI. The first thing we do is click the Create a container cluster button. Now, as we go through this, you'll notice that most of these options are going to be either default or static.
The majority that we're going to focus on are in the top third of the options, and most of those are around the metadata associated with our container cluster, the types of machines, and the size of the cluster. So first of all we need to give it a name; we're going to stick to something pretty straightforward and call it gke-1. We don't have to give it a description, though we certainly can — we'll just type something temporary in here to fill in the field, but it is an optional parameter.
We want to put it in a zone. I've got most everything else that I use in zone us-east1-b, so we'll use that just to keep everything close. We're not going to mess with the cluster version — if you hover over this, you'll see it's actually the Kubernetes version for the master and the nodes underneath, and we'll talk a little more about versioning for Kubernetes specifically later on. Most of what you're going to work with when you're tweaking your clusters is the machines underneath those clusters and the size of the cluster itself.
So when you're looking at the machine types, you're going to tweak this based on your workloads: what you need, how much memory, how much disk space, how much CPU. We've got some defaults here that we can pick from a quick dropdown, or we can go through and customize this ourselves. If we choose customize, we essentially get sliders, so we can define a custom machine type by sliding the number of cores and the amount of memory up or down for each machine.
Now let's go back to the basic view — I do not want to choose that, because it would be extremely expensive. I'm going to stick with the standard one vCPU with 3.75 GB of memory. And now let's look at the size, right — this is the size of our cluster: how many nodes are we going to have within it. This is what you scale up or down depending on how much scalability, availability, and essentially resiliency you need, and based on the machine type we just picked, the console gives you an idea of your cluster totals.
So we're going to have three total cores for our entire cluster, and a total memory footprint of just over 11 GB. These are the core properties you're going to focus on when you're creating your own cluster. There's a lot more that we can tweak here: you'll see some beta settings, as well as settings around networking and Stackdriver logging and monitoring. There are also some that are hidden.
So if we click More you'll see that we can also add labels — think of labels as an organizational pattern within GCP. We can also define additional zones that we want to deploy to, and then we can define our authentication, autoscaling, and some more beta features. We also have project access down at the bottom. This is important to note, and it's something you'll see all over the GCP platform for the various resources: it controls what a resource has access to across the platform. If a resource needs access to BigQuery, for instance, we would tweak that access and, instead of None, enable access to BigQuery.
So now essentially this Container Engine cluster has been whitelisted and will be given access to BigQuery as a service. Then we'll just go down to the very bottom and click Create; this is going to spin for a little bit, and once it's done we'll have our container cluster. This takes a little while to create — underneath the covers, it's creating both the VMs and the VM instance group that will be managed by Container Engine for this cluster. Node pools are a somewhat advanced topic within Container Engine. They give you a level of sophistication and flexibility that you would not normally have: they provide for the creation of a cluster with a heterogeneous machine configuration.
What this means is that usually all nodes within a cluster are going to be identical, right? But node pools let you create a pool of machines within your cluster that have a different configuration. For example, you might create a pool of nodes in your cluster that have local SSDs, GPUs, or larger instance sizes. And then with node labels it's possible to schedule your pods onto the specific nodes within your cluster that meet their needs.
So perhaps a set of pods needs a lot of memory, so you allocate a high-memory node pool and schedule them there. Or perhaps they need more local disk space, so you assign them to a node pool with a lot of local storage capacity — and more configuration options for nodes are being considered. One of the other scenarios I've used this for is a disparate node configuration for A/B testing between two different configurations, to find the optimal configuration for a given workload. GCP Compute Engine instance groups are how GKE node pools are actually implemented underneath the covers.
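Adding such a pool to an existing cluster is a single gcloud command; here's a hedged sketch, where the pool name and machine type are illustrative:

```shell
# Add a high-memory node pool to the gke-1 cluster.
gcloud container node-pools create high-mem-pool \
  --cluster gke-1 \
  --zone us-east1-b \
  --machine-type n1-highmem-4 \
  --num-nodes 2
```

Pods can then target the pool with a nodeSelector on the `cloud.google.com/gke-nodepool` label.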
Each GKE node pool is backed by a set of Compute Engine VMs organized into an instance group, and you can see these VM instances in the GCP console as well as monitor them with Stackdriver. It's been just a couple of minutes, and now we're back in the browser where our container cluster has successfully created. We see it in the list of container clusters, so we can click on it and look a little at the details of what we just created, which will mirror the settings we just selected.
So you can see the master version, the endpoint, the size, the zone, and some other settings for the cluster. You also see the node pools within that cluster. You can view all of the settings here, and we can edit them — we'll look at that a little later on when we look at maintaining a cluster. What I want to show you now is a little of what's underneath the covers for this cluster, what actually got created when we created our container cluster. If we scroll down a little we'll see the instance groups; the instance group that got created contains the VMs that actually feed this cluster.
So if we click on that, it takes us to the Compute Engine section of GCP and we see that instance group. We can see some monitoring metrics for that group — utilization, disk, and network usage — but we can also see what's actually contained in it. An instance group is really just a container, an organizational unit for VMs, and in this case we're going to have three VMs, which map to the number of nodes in our container cluster.
And so within this instance group that got created for our container cluster, we can now see our three VMs. One of the beautiful things about GCP and its built-in monitoring is that we can look at these individual VMs as well as monitor our cluster as a whole. We can go in and look at the details of each individual VM, we can do the same thing in Stackdriver and monitor it that way with alerting set up, and we can even go so far as to SSH into each individual machine if we wanted to. So, very cool visibility into what's actually running underneath our container cluster.
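The same drill-down can be done from the CLI; a hedged sketch, noting that the instance-group and VM names below are illustrative (GKE generates its own names):

```shell
# List the VMs backing the cluster's managed instance group.
gcloud compute instance-groups managed list-instances gke-gke-1-default-pool \
  --zone us-east1-b

# SSH into one of the node VMs.
gcloud compute ssh gke-gke-1-default-pool-abc123 --zone us-east1-b
```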
Now that we have our baseline infrastructure set up, we get to the really fun part: actually deploying our application and services to our cluster using Kubernetes. We're going to go through in detail all the kubectl commands necessary to deploy your application and services into your Container Engine cluster.
About the Author
Steve is a consulting technology leader for Slalom Atlanta, a Microsoft Regional Director, and a Google Certified Cloud Architect. His focus for the past 5+ years has been IT modernization and cloud adoption with implementations across Microsoft Azure, Google Cloud Platform, AWS, and numerous hybrid/private cloud platforms. Outside of work, Steve is an avid outdoorsman spending as much time as possible outside hiking, hunting, and fishing with his family of five.