Deploying Workloads
Difficulty
Intermediate
Duration
34m
Students
363
Ratings
5/5
Description

Kubernetes has become one of the most common container orchestration platforms. Google Kubernetes Engine makes managing a Kubernetes cluster much easier by handling most system management tasks for you. It also offers more advanced cluster management features. In this course, we’ll explore how to set up a cluster using GKE as well as how to deploy and run container workloads.

Learning Objectives

  • What Google Kubernetes Engine is and what it can be used for
  • How to create, edit and delete a GKE cluster
  • How to deploy a containerized application to GKE
  • How to connect to a GKE cluster using kubectl

Intended Audience

  • Developers of containerized applications
  • Engineers responsible for deploying to Kubernetes
  • Anyone preparing for a Google Cloud certification

Prerequisites

  • Basic understanding of Docker and Kubernetes
  • Some experience building and deploying containers
Transcript

In this lesson, I am going to show you how to deploy a workload to a GKE cluster. I have already created my cluster, so let me show you: I just have to navigate to the Kubernetes Engine page. You can see that I called my cluster “demo-cluster-1”, and I can also connect to it via kubectl. You can see that I have not deployed anything to it yet.
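That connection step can be sketched like this; the zone and project shown here are assumptions for illustration, so substitute your own values (and note this only runs against a real GKE cluster):

```shell
# Fetch credentials for the cluster and merge them into ~/.kube/config
# (zone and project are placeholder assumptions -- use your own)
gcloud container clusters get-credentials demo-cluster-1 \
    --zone us-central1-a \
    --project my-project

# Verify the connection; at this point no workloads have been deployed yet
kubectl get pods
```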

So there are two main ways to deploy a workload. First, you can use the Google web console if you wish. To do that, click on “Workloads” in the side menu here. Notice I don’t have any workloads deployed yet, so I get the default screen. Then just click on “Deploy” and you are given a very simple way to deploy a container. So pick your container image here. By default, it offers an NGINX web server container you can use for testing. But you can also pick a custom image stored in a Cloud Source Repository, GitHub, or Bitbucket.

Then you set any environment variables you need for the container. Sometimes you have to pass in port numbers or a password, whatever the container needs to run. Then you set an application name to help identify it; this one is going to be called “nginx-1”. Here you can view the workload configuration that was automatically generated. Notice that it is going to create three replicas, or pods. And then finally I just have to pick the cluster and click on “Deploy”.
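The auto-generated configuration is a standard Kubernetes Deployment manifest. It would look roughly like this; the exact labels and image tag the console emits are assumptions here, but the shape (three replicas of an NGINX container) matches what the demo shows:

```yaml
# A sketch of the kind of Deployment manifest the console generates
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
  labels:
    app: nginx-1
spec:
  replicas: 3            # the three pods mentioned above
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx-1
        image: nginx:latest
```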

So this does work, but you won’t typically use this technique since it gives you very few options.  Normally you would want to create your own configuration file.  But this does provide a quick and dirty way to do a deployment.  It looks like the deployment succeeded.  And we see it created three pods on demo-cluster-1 called “nginx-1”.

Next, let me show you the much more typical method for doing a deployment. For this, we will be using the kubectl command-line tool. I can deploy the same container by using this command. This deployment will create a single pod, and it will be called “nginx-2”. Now if I list out all the pods, I can see the three instances of nginx-1 and the single instance of nginx-2. Also, if I go back to the web console, I can refresh the screen and see both deployments.
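The kubectl commands for that step might look like the following. This is a sketch, not the exact command from the recording; it assumes the stock NGINX image and the names used in the demo, and it requires an active cluster connection:

```shell
# Create a single-pod deployment called nginx-2 from the stock NGINX image
kubectl create deployment nginx-2 --image=nginx

# List all pods: the three nginx-1 replicas plus the single nginx-2 pod
kubectl get pods
```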

So whether you deploy via the command line or the web console, it doesn’t really matter.  You can view the current status of everything either way.  And you can delete deployments using either method as well.  I can remove a deployment using the web console like this.  And I can remove a deployment using kubectl like this.
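The kubectl removal step might look like this (again assuming the deployment names from the demo and a live cluster connection):

```shell
# Delete the nginx-2 deployment; its pod is removed along with it
kubectl delete deployment nginx-2
```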

So there you go.  That’s a very simple example of how to deploy containers to your cluster.  I recommend reading up on all the appropriate kubectl commands if you are not already familiar with them.  Normal deployments will require a few extra steps and a bit more configuration work.  But I will leave that as something you can pick up and experiment with on your own.

About the Author
Students
32108
Courses
36
Learning Paths
15

Daniel began his career as a Software Engineer, focusing mostly on web and mobile development. After twenty years of dealing with insufficient training and fragmented documentation, he decided to use his extensive experience to help the next generation of engineers.

Daniel has spent his most recent years designing and running technical classes for both Amazon and Microsoft. Today at Cloud Academy, he is working on building out an extensive Google Cloud training library.

When he isn’t working or tinkering in his home lab, Daniel enjoys BBQing, target shooting, and watching classic movies.