Google Cloud Platform has become one of the premier cloud providers on the market. It offers the same rich catalog of services and massive global hardware scale as AWS, as well as a number of Google-specific features and integrations. Mastering the GCP toolset can seem daunting given its complexity. This course is designed to help people familiar with GCP strengthen their understanding of GCP’s compute services, specifically App Engine and Kubernetes Engine.
The Managing Google Kubernetes Engine and App Engine course is designed to help students become confident in configuring, deploying, and maintaining these services. The course will also be helpful to people seeking Google Cloud certification. The combination of lessons and video demonstrations is optimized for concision and practicality to ensure students come away with an immediately useful skillset.
- Learn how to manage and configure GCP Kubernetes Engine resources
- Learn how to manage and configure GCP App Engine resources
- Individuals looking to build applications using Google Cloud Platform
- People interested in obtaining GCP certification
- Some familiarity with Google Kubernetes Engine, App Engine, and Compute Engine
- Experience working with command-line tools
Welcome to part three. In this lesson, we're going to cover just two things: launching a cluster and setting up a Docker image to deploy as a pod in the cluster. We'll go over both the GCP console approach and the CLI, which can be used from your local computer, a remote shell environment, or the GCP Cloud Shell tool in the browser. So let's get right into it.
Now, recall that in order to have a cluster, we need to have nodes, which are just virtual machine instances. In the early days of Kubernetes, we would have to manually create those instances, install the Kubernetes agents, known as kubelets, and then run some commands to wire it all together. With GKE, we can get a cluster up and running much faster. In the web console, it's as simple as clicking on the Kubernetes Engine menu and then selecting the blue Create Cluster button to kick off the setup wizard. I recommend going through this hand-holding wizard approach your first time to get a sense of what configuration is available. We can set a name, a number of nodes, a machine type, a geographic region, and some other config options. GKE has some really nice defaults, including options for things like CPU-intensive or memory-intensive applications. Once you finish running through it, you'll have a running Kubernetes cluster with some set of nodes.
Now, we should also take a moment to talk about node pools. Node pools are groups of nodes within a cluster that share the same configuration. This is a GKE-specific feature, essentially an extension of Kubernetes functionality. When we create a cluster in GKE, it will by default create a set of nodes known as the default node pool. We can then just work with that node pool, or we can add additional node pools with different configurations. For example, we might want a set of nodes dedicated to in-memory caches, so we create some instances with lots of RAM and a particular network config, and that becomes our cache node pool. Each node pool can use a distinct virtual machine image, instance type, and storage options.
You can create node pools in the web console or with the CLI using the gcloud container node-pools command. The console also lets you resize, upgrade, or delete node pools. To resize one in the console, go into the Kubernetes Engine menu and click Edit on the cluster you want to change. There should be a node pools section with expandable menus; click the pool you want to change, set the size value to the desired count, and click Save.
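For reference, here's a sketch of the equivalent node pool operations with the CLI. The cluster name, pool name, zone, and machine type are all placeholders; substitute values for your own project:

```shell
# Create an additional node pool with memory-optimized machines,
# e.g. for the cache node pool example above:
gcloud container node-pools create high-mem-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --machine-type=n1-highmem-4 \
    --num-nodes=3

# List the node pools in a cluster:
gcloud container node-pools list \
    --cluster=my-cluster \
    --zone=us-central1-a

# Resize a pool (the CLI equivalent of the console's size field):
gcloud container clusters resize my-cluster \
    --node-pool=high-mem-pool \
    --num-nodes=5 \
    --zone=us-central1-a
```

Note that resizing is done through the clusters resize command with a --node-pool flag, not through node-pools itself.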
So that's node pools. Now that we understand the makeup of clusters a little bit more, let's go ahead and create one. The command to instantiate our basic cluster is as follows:
gcloud container clusters create cluster-name
Now, this will just spin up a minimal cluster with default values and a specified name. You'll likely have to at least add a --zone argument if you don't have a default geographic zone configured. The container clusters create command actually has a lot of optional flags to configure all of the things we saw in the web console, stuff like machine type and region, number of nodes, etc.
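As a sketch of what that looks like in practice, here is the same create command with a few of those optional flags filled in. The cluster name, zone, and values are illustrative, not prescriptive:

```shell
# Create a three-node cluster in a specific zone with a chosen machine type:
gcloud container clusters create demo-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --machine-type=e2-standard-2
```

You can also set a default zone once with gcloud config set compute/zone, which lets you drop the --zone flag from subsequent commands.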
After running this command, all we have is a Kubernetes cluster with no actual application running on it. We can launch an application in container form by using the deployment controller, that is, by creating a deployment with a command like so. We do:
kubectl create deployment app --image=$ImageRepo:$Tag
This will create a running application, named app, using an image of our choosing. Before we can run this, though, we need to deal with that --image argument. Kubernetes needs the image's registry location as well as its tag. Now, the container registry doesn't actually have to be in Google Cloud. It could be Docker Hub, AWS ECR, or somewhere else. As long as we can provide GKE with the right URL and credentials, we can pull our images from anywhere.
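For a private external registry, one common way to supply those credentials is an image pull secret. This is a hedged sketch; the secret name and credential placeholders are hypothetical, and your registry's server URL will differ:

```shell
# Store Docker Hub credentials in the cluster as a pull secret
# (DOCKER_USER / DOCKER_PASS are placeholders):
kubectl create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=DOCKER_USER \
    --docker-password=DOCKER_PASS
```

Pods then reference the secret by name via imagePullSecrets in their spec so the kubelet can authenticate when pulling the image.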
Google Cloud does, however, have its own container registry service. In the console, all you have to do is enable the Container Registry API and you can use it; it's just a couple of clicks. We'll have a documentation link here. When you push Docker images to the registry, you'll see them in the UI: click the Images button in the Container Registry section and then select individual images to see details. This is a really convenient way to check names, tags, dates, and other information about your images.
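The push workflow the video demo walks through looks roughly like this. PROJECT_ID and the image name are placeholders for your own project and application:

```shell
# Build the image locally, tagged for Google Container Registry:
docker build -t gcr.io/PROJECT_ID/my-app:1.0 .

# Configure Docker to authenticate to gcr.io with your gcloud credentials:
gcloud auth configure-docker

# Push the image; it will then appear in the Container Registry UI:
docker push gcr.io/PROJECT_ID/my-app:1.0
```

The gcr.io/PROJECT_ID/image:tag path is exactly what you'd then pass to the --image argument of kubectl create deployment.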
So in our video demo, we'll run through the Docker commands for building images, logging in, and pushing them to the repo. For our purposes here, we can use one of GCP's sample applications for the image argument and we can run it as so:
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
So that's the hello app, tag 1.0. We run this, and with that, we have deployed a containerized application to our GKE cluster. We can check cluster health in the console or by running kubectl or gcloud commands, and congratulations, you've got something running. However, this is just the start of the fun. We need to go a bit deeper and understand what we just did.
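A few of those health-check commands, for the curious. The cluster name and zone are placeholders matching the earlier create command:

```shell
# See the deployment and the pods it created:
kubectl get deployments
kubectl get pods

# Check the nodes backing the cluster:
kubectl get nodes

# Inspect the cluster's configuration and status from the GCP side:
gcloud container clusters describe demo-cluster --zone=us-central1-a
```

We'll dig into what deployments and pods actually are in the next lesson.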
In our next lesson, we'll learn more about controllers, services, and pods. It'll be a blast. See you there.
About the Author
Jonathan Bethune is a senior technical consultant working with several companies including TopTal, BCG, and Instaclustr. He is an experienced DevOps specialist, data engineer, and software developer. Jonathan has spent years mastering the art of system automation with a variety of cloud providers and tools. Before he became an engineer, Jonathan was a musician and teacher in New York City. Jonathan is based in Tokyo, where he continues to work in technology and write for various publications in his free time.