
Working with Pods, Services, and Controllers

This course is part of the Google Associate Cloud Engineer Exam Preparation learning path.
Overview
Difficulty: Advanced
Duration: 1h 9m
Students: 184
Rating: 3.7/5

Description

Google Cloud Platform has become one of the premier cloud providers on the market. It offers the same rich catalog of services and massive global hardware scale as AWS, as well as a number of Google-specific features and integrations. Mastering the GCP toolset can seem daunting given its complexity. This course is designed to help people familiar with GCP strengthen their understanding of GCP’s compute services, specifically App Engine and Kubernetes Engine.

The Managing Google Kubernetes Engine and App Engine course is designed to help students become confident in configuring, deploying, and maintaining these services. The course will also be helpful to people seeking Google Cloud certification. The combination of lessons and video demonstrations is optimized for concision and practicality to ensure students come away with an immediately useful skillset.

Learning Objectives

  • Learn how to manage and configure GCP Kubernetes Engine resources
  • Learn how to manage and configure GCP App Engine resources

Intended Audience

  • Individuals looking to build applications using Google Cloud Platform
  • People interested in obtaining GCP certification

Prerequisites

  • Some familiarity with Google Kubernetes Engine, App Engine, and Compute Engine
  • Experience working with command-line tools

Transcript

Working with pods, services, and controllers. In the last lesson, we reviewed how to spin up a cluster and execute a simple deployment. By the end of it, you had a simple app running; however, it probably all seemed a bit magical if you're not already familiar with GKE and Kubernetes. In this lesson, we're going to break things down further to ensure you know what is really going on under the hood.

So let's start by talking about the pod we launched. Remember, this is our workload, usually defined by a single container taken from an image repository. We used a sample image from a public GCP repository and deployed it by running a create deployment command with kubectl, the Kubernetes command-line tool. The deployment is the controller for a given pod or set of pods.
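For reference, the command looked something like this. It's a sketch based on Google's public hello-app sample, so treat the exact image path and tag as assumptions that may differ from what you used in the previous lesson:

    # Create a Deployment named hello-server from Google's public sample image
    # (image path per the GKE quickstart; the tag may differ in your environment)
    $ kubectl create deployment hello-server \
        --image=gcr.io/google-samples/hello-app:1.0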

So why did we use a deployment controller at all? Couldn't we just create pods without that extra construct to deal with? Technically yes, but it's bad practice. In general, with Kubernetes, we always want to use controllers when creating pods. This is, as the name implies, meant to give us more control. Having a controller responsible for the state of pods reduces our maintenance and monitoring workload considerably: it ensures that pods are healthy, that we have the right number of them, and that the right networking and configuration are in place. In Kubernetes, we want to work at the highest level of abstraction possible, so we prefer to work with controllers instead of individual nodes or pods.
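To make the contrast concrete, here is a rough sketch of both approaches. With recent versions of kubectl, kubectl run produces exactly the kind of bare, unmanaged pod we want to avoid; the pod name lone-pod is just for illustration:

    # Bad practice: a bare pod with no controller behind it
    # (if it dies or is deleted, nothing brings it back)
    $ kubectl run lone-pod --image=gcr.io/google-samples/hello-app:1.0

    # Better: a Deployment, which manages the pod through a ReplicaSet
    # and recreates it automatically to maintain the desired state
    $ kubectl create deployment hello-server \
        --image=gcr.io/google-samples/hello-app:1.0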

So, for now, we can get a bit more information about the pods that are running by running this command: kubectl get pods. This will spit out some basic information about the pods running in a cluster. We should see 1/1 under READY, meaning the cluster expects one pod and sees one pod, and we also see each pod's status, age, and number of restarts. We can add more pods to this cluster in a few different ways. One is the web console: navigate to the Kubernetes Engine UI, click on the Workloads button, and from there click Deploy to launch more pods. There is a default nginx image we can deploy just to test this out, and by default this will create three pods from the same container image. Again, it will do this by using the deployment controller.
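The output looks roughly like this; the generated name suffix, age, and other values are illustrative:

    $ kubectl get pods
    NAME                            READY   STATUS    RESTARTS   AGE
    hello-server-5bfd595c65-xq2dk   1/1     Running   0          5m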

We can get information about our deployments by running this command: kubectl get deployments. This will show us our available deployment controllers. Here we aren't seeing nodes or pods, but rather deployments, which sit one abstraction level higher. So we will see the hello-server deployment, which you might recall was the name we gave the deployment in the command. Also remember that a deployment is a type of controller, and crucially, it works with other controller types to both execute updates and ensure that our apps are in a desired healthy state. In this example, our pods are also managed by the ReplicaSet controller.
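Assuming you also launched the optional nginx workload from the console, the output would look something like this; the names and ages are illustrative, and the exact columns vary a little between kubectl versions:

    $ kubectl get deployments
    NAME           READY   UP-TO-DATE   AVAILABLE   AGE
    hello-server   1/1     1            1           10m
    nginx-1        3/3     3            3           3m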

So we can actually see this by running kubectl get replicasets. We should see both hello-server and, if you launched it, the optional nginx-1 application. They should both pop up, since both deployments manage their pods through ReplicaSets. Now, if we were to run a similar get command for DaemonSets or StatefulSets, such as kubectl get daemonsets, it would return nothing, because the pods we have launched so far do not use those controllers.
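In the terminal, that exchange looks something like this. The hash suffixes are generated by Kubernetes and will differ on your cluster:

    $ kubectl get replicasets
    NAME                      DESIRED   CURRENT   READY   AGE
    hello-server-5bfd595c65   1         1         1       11m
    nginx-1-7cb8d65b4         3         3         3       4m

    # Nothing uses the DaemonSet controller yet, so this comes back empty
    $ kubectl get daemonsets
    No resources found in default namespace.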

ReplicaSets work great for stateless apps that are easily distributed across a set of nodes. Recall that DaemonSets are designed for scenarios where we want to ensure that all nodes, or a specified set of nodes, run a copy of the pod. When we care about that hardware-level mapping of pods to nodes, that's when we use a DaemonSet. And when we care about state, pod deployment order, and persistent storage, then we will want to use a StatefulSet.

So now let's talk a little bit about services. If we run kubectl get services, the only thing that should come up initially is a service named kubernetes. This is the default service for the cluster and is unrelated to the pods we have created. So how do we go about exposing our pods, running as ReplicaSets, to the public internet? To do that we need to create services, which means generating some additional config. We can write that config as a YAML file and run kubectl apply to execute it against the cluster. We could also run a gcloud or kubectl command with some arguments. Or, easiest of all, we can go into our GKE console and click on Workloads. From there we just click on the workload we care about, for example hello-server, and then click the blue Expose button to generate a service. Once we do this, it will ask for a port mapping before assigning an external IP. By default, GKE will create a LoadBalancer-type service, but recall that there are other options here, such as a ClusterIP or a NodePort service.
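Here is a minimal sketch of the YAML-plus-apply route alongside the equivalent one-liner. The service name hello-service is hypothetical, and it assumes the container listens on port 8080, as Google's hello-app sample does:

    # Pipe a minimal LoadBalancer Service manifest into kubectl apply
    $ cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-service        # hypothetical name for this example
    spec:
      type: LoadBalancer         # GKE provisions an external load balancer
      selector:
        app: hello-server        # matches the label the Deployment set
      ports:
      - port: 80                 # port exposed on the external IP
        targetPort: 8080         # port the container listens on
    EOF

    # The equivalent one-liner, no YAML file required:
    $ kubectl expose deployment hello-server \
        --type=LoadBalancer --port=80 --target-port=8080

    # After a minute or two, the EXTERNAL-IP column fills in here:
    $ kubectl get services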

So pat yourself on the back. You now know enough Kubernetes and GKE to really kick some butt. You should have a sufficient grasp of pods, services, and controllers to not only spin up a cluster, but also to deploy resilient services using appropriate controllers. To really lock in our understanding, though, we need to see all of this in action. That's what the next section, the video demonstration, is designed to do. See you there.


Lectures

  • Introduction
  • Section One Introduction
  • Kubernetes Concepts
  • Cluster and Container Configuration
  • Working with Pods, Services, and Controllers
  • Kubernetes Demo
  • Section Two Introduction
  • Creating a Basic App Engine App
  • Configuring Application Traffic
  • Autoscaling App Engine Resources
  • App Engine Demo
  • Conclusion

About the Author

Students: 4,822
Courses: 7

Jonathan Bethune is a senior technical consultant working with several companies including TopTal, BCG, and Instaclustr. He is an experienced DevOps specialist, data engineer, and software developer. Jonathan has spent years mastering the art of system automation with a variety of cloud providers and tools. Before he became an engineer, Jonathan was a musician and teacher in New York City. He is based in Tokyo, where he continues to work in technology and write for various publications in his free time.