
Application and Service Publishing

Overview

Difficulty: Intermediate
Duration: 42m
Students: 344

Description


GKE is Google’s container management and orchestration service that allows you to run your containers at scale. GKE provides a managed Kubernetes infrastructure that gives you the power of Kubernetes while allowing you to focus on deploying and running your container clusters.

Intended audience:

  • This course is for developers or operations engineers looking to deploy containerized applications and/or services on Google’s Cloud Platform.
  • Viewers should have a basic understanding of containers. Some familiarity with the Google Cloud Platform will also be helpful but is not required.

Learning Objectives:

  • Describe the containerization functionality available on Google Cloud Platform.
  • Create and manage a container cluster in GKE.
  • Deploy your application to a GKE container cluster.

This Course Includes:

  • 45 minutes of high-definition video
  • Hands-on demo

What You'll Learn:

  • Course Intro: What to expect from this course
  • GKE Platform: In this lesson, we’ll start digging into Docker, Kubernetes, and the CLI tooling.
  • GKE Infrastructure: Now that we understand a bit more about the platform, we’ll get into Docker images and Kubernetes (K8s) orchestration.
  • Cluster Creation: In this lesson, we’ll walk through the steps necessary to create a GKE cluster.
  • Application and Service Publishing: In this live demo we’ll create a Kubernetes pod.
  • Cluster Management: In this lesson, we’ll discuss how to update a cluster and manage access rights.
  • Summary: A wrap-up and summary of what we’ve learned in this course.

Transcript

Hello, in this section we're gonna talk about publishing. And now that we have our baseline infrastructure set up, we're ready to go and actually deploy our service to our cluster using Kubernetes and we're gonna go through, kinda in detail, all of the Kubernetes commands necessary to deploy this service.

And we're primarily gonna be working at the pod level within Kubernetes. So think back to our previous coverage of Kubernetes orchestration: pods are down at the lower level, housing our actual containers. A Kubernetes pod is a group of containers that are tied together for the purposes of both administration and networking, and it can contain a single container or multiple containers. For today, we're gonna simply use one container built with the Node.js image that we stored in our private container registry, and it's going to serve out content on port 8080 for us.
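To make the pod idea concrete, here's a minimal sketch of a single-container pod manifest; the name, labels, and image path are placeholders rather than the exact values from this demo:

    apiVersion: v1
    kind: Pod
    metadata:
      name: gke-pod
      labels:
        app: gke-pod
    spec:
      containers:
        - name: gke-pod
          # Placeholder image path; substitute your own project and tag.
          image: gcr.io/YOUR_PROJECT/gke:v2
          ports:
            - containerPort: 8080  # the port our Node.js app serves on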

And the first step toward that end goal of serving out content is we need to create our pod. What we're gonna do is execute the Kubernetes run command with our image and the port that we want to expose as inputs, and this command will actually create a Kubernetes pod, but initially the content is only going to be available internally to GCP. What we're gonna do next, after we create our pod, is expose traffic externally so we can get to that pod from the public internet. We're back over in our terminal window now that we have all of our pieces ready: we have our image uploaded to Container Registry, and we have our cluster created on GCP Container Engine.

So we're gonna use Kubernetes at this point to actually create our pod that puts those two pieces together. The tool we're gonna use is gonna be the kubectl CLI, and we're gonna use the run command. So that's gonna be kubectl run. The first parameter we're gonna pass in is just gonna be a name for the deployment. So we'll just call this gke-pod just to keep things simple.

Next we need to tell it what image we're going to use. So this is going to be the image that we tagged just a little while ago. So that starts with gcr.io/cloudacademy-sporter/gke:v2. Now in addition to the image, we also wanna tell it that we are going to expose port 8080. There we go. Now we should be able to execute this command, and we get a deployment created response. So now we know that that deployment was created successfully.
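Assembled, the command we just built looks like this (the project ID and tag follow the demo; substitute your own):

    kubectl run gke-pod \
      --image=gcr.io/cloudacademy-sporter/gke:v2 \
      --port=8080

One caveat: at the time of this course, kubectl run created a deployment, which is what we see here. In newer versions of kubectl it creates a bare pod instead, and you'd use kubectl create deployment to get the same behavior.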

Now it's important to note that that doesn't necessarily mean that our pod was created successfully. The pod is gonna go through a series of states as it pulls the image and starts the container, and you may get some errors at this point. So for instance, if we gave it a bad image, it's not necessarily gonna give us an error right away. It'll still tell us the deployment was created, but then that deployment could actually fail, and the way that we'll find out yay or nay whether we actually had a successful deployment is to actually look at our pods.

So now if we do a kubectl get pods, this is gonna tell us, here are the pods that we either have created or that are in process. When we just created our pod, we did that like we've done most everything in this course so far: via the command line, by supplying parameters to that command line.
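That check looks something like this; the pod name and timings below are illustrative, not actual output from the demo:

    kubectl get pods

    NAME                      READY   STATUS    RESTARTS   AGE
    gke-pod-5f7d8b9c6-abcde   1/1     Running   0          1m

A STATUS of Running means the image pulled and the container started; statuses like ErrImagePull or ImagePullBackOff are where a bad image name would surface.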

So now I wanna look at YAML. YAML is a straightforward superset of JSON that you use to define the configuration details for Kubernetes pods and deployments, and we're much better off using configuration files versus trying to pass a ton of options to the kubectl CLI commands. Using YAML for Kubernetes definitions gives you a number of advantages, including convenience, where you no longer have to pass all of your parameters on the command line, but also maintenance and flexibility, where all of your configuration is versionable.
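As a sketch, a YAML definition equivalent to our earlier kubectl run command might look like this, assuming the same placeholder name and image path:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: gke-pod
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: gke-pod
      template:
        metadata:
          labels:
            app: gke-pod
        spec:
          containers:
            - name: gke-pod
              image: gcr.io/cloudacademy-sporter/gke:v2
              ports:
                - containerPort: 8080

You'd apply it with kubectl apply -f deployment.yaml, and the file itself becomes the versionable artifact.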

You can put it in source control, and you have the flexibility to easily look at everything at once as opposed to trying to reconstruct a command line. So now what we're gonna do is move on to exposing external traffic to the pod that we've created. Kubernetes services are a logical abstraction above a group of pods that organize them and define how to interact with them.

A good example to think about is a lookup microservice that is replicated three times. So that means there are three pods in the mix, and then imagine that we have a front end that accesses our lookup functionality. That front end is not going to care which replica of our lookup it uses. It just wants the functionality, so the front end will interact with the service, which will in turn handle the interaction with the application pod or pods, and this is all achieved by creating a service on top of our pods.
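As a hypothetical sketch of that lookup example, the service selects its three replica pods by label, and the front end only ever talks to the service:

    apiVersion: v1
    kind: Service
    metadata:
      name: lookup
    spec:
      selector:
        app: lookup        # matches the label shared by all three replicas
      ports:
        - port: 80         # the port the front end calls
          targetPort: 8080 # the port each pod listens on

The names and ports here are invented for illustration; the point is that the selector, not any individual pod, defines what the service routes to.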

So now we're gonna move back over to our terminal window. We're gonna use the kubectl expose command to expose our already created pod to external traffic. We're back in our terminal window, and the last time we were here, we actually created our Kubernetes pod. So now let's just do a really quick command to look at that pod and make sure that it's running and available. Which it is.

So what we're gonna do next is use the kubectl expose command to expose this pod so external traffic is able to get to it. Right now it's exposed only internally to GCP traffic, so we wanna be able to get to it now from the public internet. We're gonna do that using the kubectl expose command, and what we're exposing is a pod, and we need to give it the pod's name as the first parameter. So we're gonna copy and paste this over. We also need to give this service a port and a name. First we're gonna give it the port. So we're gonna expose port 8080. Then we need to give it the name.

So we talked earlier about services being what handles the load balancing, with them existing at a layer above our pods. That's what we're creating right now: a service that is gonna be of the type load balancer. So we give it a name, and we can just call this gkefrontend, and then we also need to, whoops, I forgot a dash before name. That's dash, dash, name. Okay, so gkefrontend, and then we also need to give it a type, and this is gonna be LoadBalancer.
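Assembled, the expose command looks like this; the pod name here is a placeholder for whatever kubectl get pods showed you:

    kubectl expose pod gke-pod-5f7d8b9c6-abcde \
      --port=8080 \
      --name=gkefrontend \
      --type=LoadBalancer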

Okay, so let's check this really fast before we execute it. We're exposing a pod, and we give it the pod name. We also gave it the port. We're gonna give this service a name, and the type of service that we're creating is a load balancer. So once we execute that, we are good. We now can look at our services and make sure that this is running. So that is going to be kubectl get services, and that's gonna give us a list. So now we see our gkefrontend service is available, and as that service spins up, sometimes this can take several minutes.

Once that service has finally spun up, we will then see an external IP where we now see pending. So it's just been a couple of minutes, and now we're ready to check in on our service to make sure that it's been created and that that public IP has been provisioned. To do that we're just gonna list out our services. We're gonna use kubectl to do that, and we're just gonna get our services, and that's gonna list out all of our services and the properties associated with those. So now we see that our gkefrontend service, which used to just have an internal IP, now has an external IP. So we're gonna double check this.
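The listing looks something like this once the IP has been provisioned; all of the values below are illustrative:

    kubectl get services

    NAME          TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE
    gkefrontend   LoadBalancer   10.3.245.12   35.184.101.220   8080:31234/TCP   5m
    kubernetes    ClusterIP      10.3.240.1    <none>           443/TCP          24m

While GCP is still provisioning the load balancer, the EXTERNAL-IP column reads <pending>.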

We're gonna copy our external IP. We're gonna go over to our browser and see if we get a successful response when we hit that IP on port 8080. Which we do. We get "Hello Kubernetes" back, which shows that we do have a running pod now exposed to external traffic. Now that we've created our cluster, in the next section we're gonna go through some of the actions that we might take to manage that cluster.

About the Author

Students: 378
Courses: 2

Steve is a consulting technology leader for Slalom Atlanta, a Microsoft Regional Director, and a Google Certified Cloud Architect. His focus for the past 5+ years has been IT modernization and cloud adoption with implementations across Microsoft Azure, Google Cloud Platform, AWS, and numerous hybrid/private cloud platforms. Outside of work, Steve is an avid outdoorsman spending as much time as possible outside hiking, hunting, and fishing with his family of five.