*** PLEASE NOTE: This content is outdated, and the course should be retired ***
Google Cloud Platform has become one of the premier cloud providers on the market. It offers the same rich catalog of services and massive global hardware scale as AWS, as well as a number of Google-specific features and integrations. Mastering the GCP toolset can seem daunting given its complexity. This course is designed to help people familiar with GCP strengthen their understanding of GCP’s compute services, specifically App Engine and Kubernetes Engine.
The Managing Google Kubernetes Engine and App Engine course is designed to help students become confident in configuring, deploying, and maintaining these services. The course will also be helpful to people seeking to obtain Google Cloud certification. The combination of lessons and video demonstrations is optimized for concision and practicality to ensure students come away with an immediately useful skillset.
Learning Objectives
- Learn how to manage and configure GCP Kubernetes Engine resources
- Learn how to manage and configure GCP App Engine resources
Intended Audience
- Individuals looking to build applications using Google Cloud Platform
- People interested in obtaining GCP certification
Prerequisites
- Some familiarity with Google Kubernetes Engine, App Engine, and Compute Engine
- Experience working with command-line tools
Welcome to our video demonstration on GKE in GCP, that's Google Kubernetes Engine in the Google Cloud Platform. In this demo, we're going to do a few fun things. First, we'll create a basic application package using Docker and upload it to a registry. Second, we'll create a GKE cluster, deploy our app into it, and see it running. And finally, we'll expose the app to the internet before scaling it up and practicing deploying a new version of the app, an upgrade essentially. We'll do most of this from the GCP Cloud Shell, with the occasional trip to the Web Console, so feel free to follow along at home in your own account.
Now, this tutorial comes mostly from GKE's own documentation, which we'll link below; it'll help if you have questions. If you do plan to follow along, be sure the Kubernetes Engine API is enabled, which you can do by opening Kubernetes Engine in the console. And if you're following along from your local laptop rather than Cloud Shell, you'll also want kubectl installed, which you can get by running the command:
gcloud components install kubectl
And it should just set it up for you. So make sure that's ready to go, and let's get started. Okay, to start we need a Docker image with our application code. We could write our own Dockerfile from scratch, use our own app code, and test everything, but we're going to be lazy and save some time by taking one of GCP's sample projects instead. So we can clone that using git, like so, and once we have it cloned, we'll cd into the directory; we want their hello-app. All told it's four commands: we clone the repo, we cd into the directory, we export a project ID environment variable, and then we do the Docker build.
So we run export PROJECT_ID=[PROJECT_ID] with our own project ID, which we can get right from the console, and then we do the Docker build of that. The only thing you might have to look up is the project ID; you can grab it from the console if you don't know it, and there are other ways to get it too, we'll have a link for that. Once that's done, the image is built and you'll have it locally in your Cloud Shell environment.
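Put together, the four commands look roughly like this; the repo and directory names come from the hello-app tutorial linked below, so check the tutorial if the samples repo has moved since:
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
cd kubernetes-engine-samples/hello-app
export PROJECT_ID=[PROJECT_ID]   # substitute your own project ID from the console
docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .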
Now what we have to do after that, you can see it building, just give it another sec, but once it's finished building, see, it's finished right here, is push the image to a Docker registry, because the cluster can't pull the image from our shell environment; we need it in the Google Container Registry. And we can do that pretty quickly, with just two commands. First we run gcloud auth configure-docker, which sets up the right credentials so Docker can push the image to our container registry. Then we run the Docker push command, which takes the image we just built and pushes it to our Google Container Registry with the project ID we set and the tag we set; we have hello-app tagged with v1, our version one. So just give this a sec, it needs to execute. And there you go. Well, it was already there in my case, but the first time you run it you'll see the layers go up.
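For reference, those two commands are just these, reusing the image name and tag from our build step:
gcloud auth configure-docker   # registers gcloud as a Docker credential helper
docker push gcr.io/${PROJECT_ID}/hello-app:v1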
Okay, so now that we have the image in the container registry, we need somewhere to deploy it, right? We need a cluster to deploy it to. So how can we do that? How can we get a GKE cluster running? There are a few ways to do this, but the simplest, if we don't already have a cluster running, is to use the gcloud container clusters create command. Now before we run that, we're going to set the project ID in our environment.
Okay. And after that we have to set our zone, where we're actually going to create the cluster; we'll use us-east1-b. These two configuration options, the project ID and the zone, just tell gcloud which project to associate the cluster with and where geographically to create it.
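One way to set both, assuming the standard gcloud config commands and reusing the PROJECT_ID variable from earlier:
gcloud config set project ${PROJECT_ID}
gcloud config set compute/zone us-east1-b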
So once we have that configuration set, we can run the container clusters create command.
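Roughly, that looks like this; hello-cluster is the cluster name we'll see in the listings in a moment:
gcloud container clusters create hello-cluster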
Now, we see a couple of warnings here, just an issue going on around this time, November 2019, but after a few minutes we'll see the cluster come up. It takes a while because it has to actually create the VM instances that will host the Kubernetes infrastructure and set up some networking. But once it completes, you should be able to see the instances with a gcloud list command. So eventually the cluster comes up, it finishes, and it's in a healthy state.
We can see the instances if we run a compute instances list command; you can see these two here are the two instances created for our GKE hello-cluster. And we can also see the cluster itself by running a container clusters list, where it shows up with a RUNNING status and two nodes. So that should be good.
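For reference, those two listing commands are:
gcloud compute instances list
gcloud container clusters list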
So now that we have a cluster running and an image uploaded, how do we go about deploying that image into our cluster? It must be pretty involved, right? We'd have to set up deployment configuration, download the image, run it, keep everything in sync; it probably requires a lot of work. Well, no, actually, it's pretty simple. We can do the deployment with a single command: kubectl create deployment, passing a couple of arguments. We call it hello-web, that's the name of our deployment, and we pass in the image we want to use with --image; that's our gcr.io address, the container registry for GCP, plus our project ID and the image name and tag we used.
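In full, the command looks like this, pointing at the image we pushed earlier:
kubectl create deployment hello-web --image=gcr.io/${PROJECT_ID}/hello-app:v1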
So this one command literally does everything; we're leaning on the deployment controller functionality to do most of the legwork. We're telling Kubernetes to create a pod using that image, and we can actually see it if we do kubectl get pods: there's our pod, the image running in the deployment. The command also creates the deployment controller, which is what keeps everything running; it's what moves us from the empty cluster to the desired state, and it will keep things that way until we make a change.
So now let's have some fun and make a service: let's expose our app to the public internet. We can do that with a single command too; we just run kubectl expose deployment hello-web. This is similar in that we're working with the deployment controller again. Note that we use expose on the deployment, not on the pod; the deployment is responsible for the overall app configuration, and as a general Kubernetes best practice, we want to work at the highest level of abstraction possible.
So we want to work with controllers instead of pods or nodes. In this command, we tell Kubernetes to create a service, a LoadBalancer-type service, set to accept traffic on port 80 and route it to port 8080 on the container.
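The full command, with the type and ports we just described, looks like this:
kubectl expose deployment hello-web --type=LoadBalancer --port 80 --target-port 8080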
We can see the result by running kubectl get service. There's our service; there are actually two, the default Kubernetes service and then the new LoadBalancer one right here, and we can see it has an external IP address. Let's see if we can hit that. And we can: there's our application, hello world, version one, host, yada, yada, yada. So pretty cool. We've got a running service exposed to the internet.
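To check it from the shell, something like this works; expose names the service after the deployment by default, and you substitute the external IP that get service reports:
kubectl get service hello-web
curl http://[EXTERNAL_IP]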
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects
Lectures
- Introduction
- Section One Introduction
- Kubernetes Concepts
- Cluster and Container Configuration
- Working with Pods, Services, and Controllers
- Kubernetes Demo
- Section Two Introduction
- Creating a Basic App Engine App
- Configuring Application Traffic
- Autoscaling App Engine Resources
- App Engine Demo
- Conclusion
Jonathan Bethune is a senior technical consultant working with several companies, including Toptal, BCG, and Instaclustr. He is an experienced DevOps specialist, data engineer, and software developer. Jonathan has spent years mastering the art of system automation with a variety of cloud providers and tools. Before he became an engineer, Jonathan was a musician and teacher in New York City. Jonathan is based in Tokyo, where he continues to work in technology and write for various publications in his free time.