
Kubernetes Concepts

This course is part of these learning paths:

  • Google Associate Cloud Engineer Exam Preparation
  • Google Professional Cloud Architect Exam Preparation

Duration: 1h 9m

Google Cloud Platform has become one of the premier cloud providers on the market. It offers the same rich catalog of services and massive global hardware scale as AWS, as well as a number of Google-specific features and integrations. Mastering the GCP toolset can seem daunting given its complexity. This course is designed to help people familiar with GCP strengthen their understanding of GCP’s compute services, specifically App Engine and Kubernetes Engine.

The Managing Google Kubernetes Engine and App Engine course is designed to help students become confident at configuring, deploying, and maintaining these services. The course will also be helpful to people seeking to obtain Google Cloud certification. The combination of lessons and video demonstrations is optimized for concision and practicality to ensure students come away with an immediately useful skillset.

Learning Objectives

  • Learn how to manage and configure GCP Kubernetes Engine resources
  • Learn how to manage and configure GCP App Engine resources

Intended Audience

  • Individuals looking to build applications using Google Cloud Platform
  • People interested in obtaining GCP certification

Prerequisites

  • Some familiarity with Google Kubernetes Engine, App Engine, and Compute Engine
  • Experience working with command-line tools

Review of Kubernetes Concepts. If you're coming from the more introductory "Planning and Configuring" GCP course, then you'll recall its basic overview of GKE. The goal there was just to show how GKE works, with enough information to get you started. In this course, we want to get you ready to be responsible for maintaining a GKE cluster in a real production environment. That will require a deeper dive into how GKE works, and before we do that, we want to ensure you have a deep understanding of Kubernetes concepts. If you're already a Kubernetes expert and you want to dive straight into GKE, feel free to skip this lesson.

So let's start by talking about what Kubernetes is at a high level. Stated most succinctly, Kubernetes is a system for orchestrating containerized applications. So if you are packaging your software application using something like Docker, and wish to deploy and manage those Docker containers, Kubernetes gives you the tools to abstract away most hardware and networking considerations to make managing your system much easier.

Consider the basic resources needed to deploy a Dockerized application. The Docker container needs a server to run on, so that means compute resources. It needs a certain amount of CPU and memory. It also needs network access, port configuration, firewalls, load balancing, DNS configs. The app may need to talk to other backend services, or it may be part of a larger system of microservices involving several other Docker apps. Managing all of this complexity manually is very difficult. We'd need to create all sorts of scripts and documentation for each server, each network setup, each application's hardware needs, config files, and so on. Kubernetes, like other orchestration frameworks such as Docker Swarm or Marathon on Mesos, makes all of this work much easier.

It starts with the concept of a cluster. A Kubernetes cluster is a complete set of hardware resources for an application environment. In general, it will be confined to a single data center and will comprise a number of servers and network interfaces. Storage is also a possible resource here, as Kubernetes can create both ephemeral and persistent volumes. The servers, whether physical machines, VMs running in EC2 or Google Compute Engine, or somewhere else, are referred to as nodes. Servers are nodes. A Kubernetes cluster may, for example, run on three nodes, three virtual servers, that will host your container-based applications. Having multiple nodes grants redundancy in case of hardware failure, and it also makes scaling up or down easier. Each node runs a kubelet, a lightweight Kubernetes agent, that allows it to communicate with the control plane and stay in sync regarding cluster configuration and health.

A Kubernetes cluster with just nodes, however, is not really doing anything useful. In order to run a container, the cluster must have access to an image repository and it has to create pods. Now, a pod is the basic workload unit in Kubernetes. Often, a pod is an instance of a single container; however, it can also comprise multiple containers. A pod will also have a unique IP address within the network, as well as storage resources based on its config. This is the smallest unit of what we might think of as a microservice in a software-as-a-service architecture. So for example, you might have a simple stateless Python app run as a pod in your Kubernetes cluster. We'll dig deeper into container configuration and cluster setup in the next section.
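To make the idea concrete, here is a minimal pod manifest. This is a hypothetical sketch, not part of the course materials: the names `python-app` and the image tag are made up, and the image is assumed to exist in a registry your cluster can reach.

```yaml
# pod.yaml — a minimal single-container pod (hypothetical example)
apiVersion: v1
kind: Pod
metadata:
  name: python-app
  labels:
    app: python-app       # label used later to select this pod
spec:
  containers:
    - name: python-app
      image: python-app:1.0   # assumed image name; replace with your own
      ports:
        - containerPort: 8080 # port the app listens on inside the container
      resources:
        requests:
          cpu: "250m"         # a quarter of a CPU core
          memory: "128Mi"
```

You would create it with `kubectl apply -f pod.yaml`, though in practice pods are almost always created indirectly through a Controller, as we'll see next.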

So now, with a basic understanding of clusters, nodes, and pods, there are only two other core Kubernetes concepts that you need in order to get started working with GKE. These two are Controllers and Services. We'll link to the Kubernetes documentation if you want to go deeper on all the other terminology. That's a bit out of scope for this course; we're just doing a basic conceptual overview.

Let's start with Controllers. You can think of a Kubernetes Controller as a control loop. This is a basic CS concept: a tool for maintaining a certain desired state. In Kubernetes, this refers to an API object that manages a pod or set of pods by preserving a pre-set configuration. There are actually a few different types of Controllers. One of the most basic is the ReplicaSet, which simply guarantees that a certain number of copies of a given pod are kept running. So for example, let's say we have our stateless Python app running as three pods across the nodes in our cluster. If they are running as a ReplicaSet, then Kubernetes will make sure that three pods are always running. If there is a failure, a crash, an error of some kind that causes a pod to die, the ReplicaSet controller will try to bring up a replacement.
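A ReplicaSet for the three-pod example above might be declared like this. Again, this is a hypothetical sketch; the `python-app` names and image are assumptions carried over from the earlier example:

```yaml
# replicaset.yaml — keep three copies of the pod running (hypothetical example)
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: python-app-rs
spec:
  replicas: 3               # desired state: three pods at all times
  selector:
    matchLabels:
      app: python-app       # pods matching this label are managed by the set
  template:                 # pod template used to create replacements
    metadata:
      labels:
        app: python-app
    spec:
      containers:
        - name: python-app
          image: python-app:1.0   # assumed image name
```

If you delete one of the three pods, the controller notices the drift from the desired state and creates a replacement.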

Now, there are other Controller types for different purposes. There is the DaemonSet controller, which is meant to guarantee a specific distribution of pods across nodes. This controller is useful for hardware monitoring or logging applications that need a one-to-one mapping to servers for whatever reason. There's also the StatefulSet controller, designed for stateful applications. This controller provides guarantees about pod ordering, uniqueness, and stable storage. Finally, there is the Deployment controller. This is designed to work with other Controllers, such as ReplicaSets. The Deployment controller, as the name implies, is designed for declarative updates to a set of pods. It handles transitioning a set of pods from its current state to a defined desired state.
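In practice you rarely create ReplicaSets directly; a Deployment declares the desired state and manages a ReplicaSet for you. A hypothetical sketch, using the same assumed `python-app` image as before:

```yaml
# deployment.yaml — declarative updates for a set of pods (hypothetical example)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: python-app
  template:
    metadata:
      labels:
        app: python-app
    spec:
      containers:
        - name: python-app
          image: python-app:1.0   # assumed image; change the tag to roll out a new version
```

Editing the image tag and re-applying the manifest causes the Deployment to perform a rolling update: it creates a new ReplicaSet and gradually shifts pods over to it.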

So there's more we could add about Controllers but this is enough to get the basic idea. Again, don't hesitate to dig through the Kubernetes documentation for more details about each specific Controller if you're interested. But for now, let's move on and talk about Services.

Services are very important. They are Kubernetes' way of exposing pods to other networks, including the public internet. So if we want to make our application reachable from a browser, we're going to need a Service to set up the IP address and DNS name. The basic configuration needed is a network protocol such as TCP, as well as ports and some metadata such as a Service name. As with Controllers, there are a few different types of Services. There are ClusterIP services, which only expose an internal IP address, suitable for apps that don't need to be accessed from the public internet. Then you have NodePort services, which expose the node's IP address on a specific port. This might make sense depending on your firewall setup; you may not want your Kubernetes nodes, the actual VMs, to expose their IP addresses, even if it's only on a specific port. And then you also have LoadBalancer and ExternalName services, both of which work more closely with your cloud provider. The former, the LoadBalancer, works with your provider's load balancer resources to expose a set of pods, while the latter, the ExternalName type, returns a CNAME record based on a DNS name of your choosing.
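Here is what a LoadBalancer Service for our example pods could look like. A hypothetical sketch, assuming the `app: python-app` label and container port from the earlier examples:

```yaml
# service.yaml — expose the python-app pods behind a cloud load balancer (hypothetical example)
apiVersion: v1
kind: Service
metadata:
  name: python-app-svc
spec:
  type: LoadBalancer    # swap to ClusterIP for internal-only access
  selector:
    app: python-app     # traffic is routed to pods carrying this label
  ports:
    - protocol: TCP
      port: 80          # port the Service exposes
      targetPort: 8080  # port the container listens on
```

On GKE, applying this manifest provisions a Google Cloud load balancer and assigns the Service an external IP, which you can see with `kubectl get service python-app-svc`.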

So Controllers let us turn sets of running containers into resilient, updateable applications with predictable behavior. And Services let us control access to those applications by publishing them in various ways. Now, if you understand these five terms, cluster, node, pod, controller, service, then you know enough Kubernetes to get your hands dirty. In the next short lesson, we're going to do just that. We'll review setting up a cluster and preparing a container for deployment using GKE. It's going to be a blast. We'll see you there.



Course Outline

  • Introduction
  • Section One Introduction
  • Kubernetes Concepts
  • Cluster and Container Configuration
  • Working with Pods, Services, and Controllers
  • Kubernetes Demo
  • Section Two Introduction
  • Creating a Basic App Engine App
  • Configuring Application Traffic
  • Autoscaling App Engine Resources
  • App Engine Demo
  • Conclusion

About the Author
Jonathan Bethune

Jonathan Bethune is a senior technical consultant working with several companies including TopTal, BCG, and Instaclustr. He is an experienced DevOps specialist, data engineer, and software developer. Jonathan has spent years mastering the art of system automation with a variety of different cloud providers and tools. Before he became an engineer, Jonathan was a musician and teacher in New York City. Jonathan is based in Tokyo, where he continues to work in technology and write for various publications in his free time.