
Kubernetes Engine

The course is part of these learning paths:

  • Google Cloud Platform for System Administrators
  • Google Cloud Platform for Solution Architects
  • Google Cloud Platform Fundamentals
  • Introduction to the TOP Public Cloud Platforms
  • Google Cloud Platform for Developers

Overview
Difficulty: Intermediate
Duration: 2h 13m
Students: 5600
Rating: 4.8/5

Description

Google Cloud Platform: Fundamentals

If you’re going to work with modern software systems, then you can’t escape learning about cloud technologies. And that’s a rather broad umbrella. Across the three major cloud platform providers, there are a lot of different service options, and there’s a lot of value in them all.

However, the area where I think Google Cloud Platform excels is in providing elastic, fully managed services. Google Cloud Platform, to me, is the optimal cloud platform for developers. It provides so many services for building out highly available, highly scalable web applications and mobile back-ends.

Google Cloud Platform has quickly become my personal favorite cloud platform. Now, opinions are subjective, but I’ll share why I like it so much.

I’ve worked as a developer for years, and for much of that time, I was responsible for getting my code into production environments and keeping it running. I worked on a lot of smaller teams where there were no operations engineers.

So, here’s what I like about Google Cloud Platform: it allows me to think about the code and the features I need to develop, without worrying about the operations side, because many of the service offerings are fully managed.

Services such as App Engine allow me to write my code, test it locally, run it through the CI/CD pipeline, and then deploy it. And once it’s deployed, for the most part, unless I’ve introduced some software bug, I don’t have to think about it. Google’s engineers keep it up and running and highly available. And having Google as your ops team is really cool!

Another thing I really like is the ease of use of services such as BigQuery and the machine learning APIs. If you’ve ever worked with large datasets, you know that some queries take forever to run. BigQuery can query massive datasets in just seconds, which allows me to get the data I need quickly so I can move on to other things.

And with the machine learning APIs, I can use a REST interface to do things like language translation or speech-to-text with ease. That lets me integrate these capabilities into my applications, which gives end users a better experience.

So for me personally, I love that I can focus on building out applications and spend my time adding value for end users.

If you’re looking to learn the fundamentals about a platform that’s not only developer-friendly but cost-friendly, then this is the right course for you!

Learning Objectives

By the end of this course, you'll know:

  • The purpose and value of each product and service
  • How to choose an appropriate deployment environment
  • How to deploy an application to App Engine, Kubernetes Engine, and Compute Engine
  • The different storage options
  • The value of Cloud Firestore
  • How to get started with BigQuery

Prerequisites

This is an intermediate-level course because it assumes:

  • You have at least a basic understanding of the cloud
  • You’re at least familiar with building and deploying code

Intended Audience

  • Anyone who would like to learn how to use Google Cloud Platform


Transcript

Hello and welcome to Introduction to Kubernetes Engine, where we'll be exploring Google Cloud's fully managed Kubernetes Cluster Service. By the end of this lesson, you'll be able to explain the purpose of a Kubernetes Cluster, as well as the concepts of workloads, services, and volumes. And finally, you'll be able to explain the purpose of Kubernetes Engine. This lesson is going to be mostly conceptual, with a brief demo at the end.

Kubernetes Engine is a great service; however, it can be difficult for beginners to learn. And here's why I think that. If I tell someone who's new to these technologies that the purpose of Kubernetes Engine is to simplify the running of Kubernetes Clusters, they'll probably look at me funny. They'll ask something like, what's a Kubernetes? And that is a reasonable question to ask.

In order to understand Kubernetes Engine, you have to know a bit about Kubernetes. And that requires an understanding of containers, container runtimes, and probably container registries. So there are a lot of prerequisites for learning about GKE. If you're not familiar with these topics, don't worry, we'll do a quick crash course to get you primed. If you are familiar with all of these things, then maybe you just wanna skim through the transcript and see if any new info jumps out.

Okay, if you're still here, then let's get started. Deploying applications and keeping them running is not easy. Different applications have different requirements. They're developed with different technologies, deployed to different operating systems, and require different server resources. Container technologies such as Docker can help with some of these challenges by allowing developers to package applications and their dependencies. These packages, as it were, are called container images. Images can be used to run the packaged code by creating container instances based on the image. And in order to actually create an instance, you need a container runtime. To make sharing containers easier, you can upload them to a container registry, of which there are many, including Google Cloud's Container Registry Service.

Containers aren't like virtual machines; they're just isolated processes. They share the kernel of the host OS, which means they start up very quickly. Having applications inside of containers makes it easy to standardize deployments, because now the only thing you have to think about deploying is a container. This is where Kubernetes enters the scene. Kubernetes, sometimes called K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. Kubernetes abstracts away the underlying host OS. Engineers wanting to deploy applications can tell Kubernetes which container images to run, how many instances to run, how much CPU and memory to use, and all kinds of additional details. And it's up to Kubernetes to figure out how to make that happen.
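
To make that concrete, here's a minimal sketch of a Kubernetes Deployment manifest. The name, image, and resource values are all illustrative assumptions, not anything taken from this course:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-app                # hypothetical application name
    spec:
      replicas: 3                    # how many instances to run
      selector:
        matchLabels:
          app: hello-app
      template:
        metadata:
          labels:
            app: hello-app
        spec:
          containers:
          - name: hello-app
            image: gcr.io/example-project/hello-app:1.0   # which container image to run
            resources:
              requests:
                cpu: 250m            # how much CPU to reserve
                memory: 64Mi         # how much memory to reserve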

Kubernetes is not just one thing; it's a collection of components which altogether form a Kubernetes Cluster. A cluster can be divided into, let's say, two logical parts: the control plane and the nodes.

The control plane is the brains of the operation. It consists of an API, which engineers interact with through a command-line tool called kubectl. The API processes requests and stores the cluster state in an open-source key-value database called etcd. The controller manager handles cluster-level functionality such as certain garbage collection tasks, and when engineers want to actually run a container, it's the scheduler's job to select which nodes to use.

Which brings us to nodes. Nodes are servers that Kubernetes can use to run containers. Each node runs an agent, the kubelet, which handles node-related tasks such as starting up containers. In order for a node to run a container, it needs a container runtime, and in order to distribute traffic to containers, there's a process called kube-proxy. Some applications might consist of multiple containers running together.

So, Kubernetes abstracts containers into a concept called pods. A pod is just one or more containers which need to be deployed together on the same node. So basically, a pod is the smallest unit of deployment, and the applications running inside those containers are referred to as workloads. Pods are going to run your workloads. And the useful thing about Kubernetes is that it knows how to work with containers.
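
For illustration, a minimal pod manifest might look something like this. Both containers are hypothetical; the point is that they're always scheduled together on the same node:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-logger          # hypothetical pod name
    spec:
      containers:
      - name: web                    # the main application container
        image: nginx:1.25
      - name: log-agent              # a sidecar that ships the web container's logs
        image: example/log-agent:1.0 # illustrative image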

So, whether you're running a batch processing job or some sort of application, it really doesn't matter; whatever workload you wanna run inside of a container, Kubernetes can work with it. If you were to manually deploy an individual pod, it would be similar to starting a process on your computer, in that when the process stops, it's done. It's not going to restart; there's nothing monitoring it to make sure it stays running. And if you start a pod manually, it's the same basic idea.

Using pods for ad-hoc commands that you wanna run is useful. However, if you have some sort of real workload, you wanna make sure those processes stay up and running, which is why Kubernetes provides mechanisms, called controllers, that can manage pods. There are different types of controllers, and they're all able to create and manage pods based on their own use case.

For example, a deployment can create multiple replicas of a container, and if one of the replicas stops, it can start up a new one. Once you deploy a workload, either by creating a pod yourself or through a controller, each pod is given an IP address which, in theory, you could use to interact with that pod. However, pods don't live forever; they're short-lived. So the way you interact with pods or replicas over the network is that you create a service, which can serve as a single entry point for the pods.

Now there are multiple types of service, depending on how you wanna interact with the pods. By default, there's the ClusterIP type, which provides an IP address for the pods that's reachable by anything inside of the cluster. So you're allowing network connectivity to this set of pods for anything on your network. To summarize, services are basically a single entry point to multiple pods.
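
As a rough sketch, a ClusterIP service that fronts the hypothetical hello-app pods from the earlier deployment sketch might look like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-service
    spec:
      type: ClusterIP      # the default; reachable only from inside the cluster
      selector:
        app: hello-app     # traffic goes to any pod carrying this label
      ports:
      - port: 80           # port the service listens on
        targetPort: 8080   # port the container listens on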

So now let's pivot to talk about storage. If you start a container instance, write some data to disk, terminate that instance, and then start a new one, that data is gone. That's because containers use ephemeral storage by default. Data written inside a container only lasts as long as the life cycle of that container instance. So that means we need some other mechanism in order to persist data. And for that, we can use Persistent Volumes.

Now there are different types of volumes for different use cases, including Secrets and ConfigMaps as well as Persistent Volumes. But just know that volumes provide access to storage for pods. All right, so that's a high-level primer.
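
Here's a rough sketch of a pod claiming persistent storage; the claim size, image, and mount path are illustrative assumptions:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-claim
    spec:
      accessModes:
      - ReadWriteOnce        # mountable read-write by a single node
      resources:
        requests:
          storage: 10Gi      # illustrative size
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: db-pod           # hypothetical pod name
    spec:
      containers:
      - name: db
        image: postgres:16   # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-claim   # this data outlives any individual pod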

Now, if Kubernetes provides all of that out of the box, what does Kubernetes Engine provide? Remember, earlier I said the purpose of Kubernetes Engine is to simplify the running of Kubernetes Clusters. So how does it do that? Kubernetes Engine allows us to create clusters that Google will manage. That means Google keeps our clusters running, updated, and upgraded. To do this, Google manages the entire control plane, as well as node maintenance.

Now we can select things like the machine type, the operating system, et cetera, and we can group nodes into categories called pools, which allow us to ensure workloads run on specific nodes. So basically, GKE allows engineers to use Kubernetes without spending all of the time required to keep a Kubernetes Cluster running.
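
For example, GKE labels every node with the name of its pool, so a workload can be pinned to a pool with a nodeSelector. This is a sketch; the pool name and image are hypothetical:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: batch-worker
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: batch-worker
      template:
        metadata:
          labels:
            app: batch-worker
        spec:
          nodeSelector:
            cloud.google.com/gke-nodepool: high-mem-pool   # GKE's built-in node pool label
          containers:
          - name: worker
            image: example/batch-worker:1.0                # illustrative image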

Now, when you start to interact with Kubernetes, you'd typically do that through the Kubernetes API, and the typical means for that is the kubectl command-line application.

Okay, with all of this in mind, let's check out a demo in the console. Here I am in the GKE dashboard, and I've already enabled the API. On the left-hand side navigation, you can see some of the concepts that we've already covered. A cluster is our starting point, so let's create a cluster. This form here is attempting to help us create a cluster based on a template, and these are just some preset values for this form. If you don't know which option to use, my recommendation is to go with the "your first cluster" option.

Notice here, the standard selection includes three nodes, and that's because production clusters are going to end up having at least three nodes to maintain availability. The first cluster template here uses an individual instance, a machine small enough that it's not really going to break the bank. I'll leave all of these set to their defaults; however, take note that you can change the location so that the cluster runs in multiple zones for higher availability.

This master version list here will change over time as well, so don't worry if you see some different options when you go into the console. By clicking create, I'm going to get a single-node cluster, which can take a while to build, which is why I created a cluster previously. So we'll let this one build while we switch over to that one. Drilling into the cluster shows details about it. We can manage a cluster's storage, add nodes, and deploy workloads. Recall that a workload is a containerized application.

Container images live in container registries, and Kubernetes Engine makes it easy to use the Google Cloud Container Registry, the default Docker registry, or it can even build a container based on a Dockerfile. For our demo, we're going to use the official NGINX image from Docker, and the reason this is useful is that we can actually see it running in the browser, so it makes it easy to demonstrate.

Alright, we don't need additional container images, so let's continue. This configuration is going to create pods using a deployment controller, with three replicas of NGINX pods to ensure that they're always running.

All of this is functionality of Kubernetes, not GKE, so we're not gonna dive into too much depth on it. Just know that this is going to deploy our NGINX workload as three managed pods. And in case you're wondering how I know that it's three pods, it's defined in this YAML configuration here. The Kubernetes API uses YAML files to determine what to do, so this is specifying that we should have three replicas of this container. It's also saying that it can scale down to as few as one instance, and up to as many as five.
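
The console generates that YAML for you, but a reconstruction of roughly what it describes might look like the following; the names and the CPU target are assumptions, not the exact file from the demo:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-1              # a guess at the console's default name
    spec:
      replicas: 3                # the three managed pods
      selector:
        matchLabels:
          app: nginx-1
      template:
        metadata:
          labels:
            app: nginx-1
        spec:
          containers:
          - name: nginx
            image: nginx:latest  # the official NGINX image from Docker
    ---
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: nginx-1-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: nginx-1
      minReplicas: 1             # can scale down to one instance
      maxReplicas: 5             # and up to as many as five
      targetCPUUtilizationPercentage: 80   # illustrative scaling threshold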

Clicking on deploy is going to download the container image and get our replicas running. And here it is. There are three managed pods, all running and ready to serve traffic. Now, if you recall from earlier, it's services that allow us to have a single entry point to interact with our pods. So let's click on the expose button up here to create a service. Services require us to set a protocol, a port, an optional target port (used by the container if it differs from the service's port), the service type, and the service name. I'm going to use a load balancer type on port 80.
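
A manifest equivalent to what this form creates might look roughly like this; the service name is a guess at the console default:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-1-service
    spec:
      type: LoadBalancer   # provisions an external Google Cloud load balancer
      selector:
        app: nginx-1       # the NGINX pods from the deployment above
      ports:
      - port: 80           # the port the service exposes
        targetPort: 80     # NGINX listens on 80, so no remapping is needed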

Now, this is going to expose these pods to the world with a public external IP address. This uses Google Cloud Platform's load-balancing infrastructure behind the scenes. Okay, creating this is going to take a moment; however, once it's done, we'll have an external IP address.

And, drumroll please. Here it is. This service is used as a load balancer for the NGINX app deployment, which is the deployment we created with three replicas. Now clicking on the external endpoint should open this up in another tab, and here we have our NGINX landing page. So browsing to this IP address automatically distributes the traffic across our replicas. 

Let's review the final few features here. The applications section uses the marketplace to allow us to deploy containerized functionality to our cluster. Configurations displays cluster configuration data, including Secrets and ConfigMaps. Storage displays the Persistent Volumes that we have in use in our cluster. And that's just about it.

Let's stop here and see how we did against our learning objectives. We know that a Kubernetes Cluster is the culmination of all of the components required to use Kubernetes, including the control plane and nodes. A workload is a containerized application. A service is used to provide network access to pods. Volumes are storage mechanisms for pods, and they come in different types, including Persistent Volumes, Secrets, et cetera. And finally, the purpose of Kubernetes Engine is to simplify the running of Kubernetes Clusters.

Okay, that's gonna wrap up this lesson. I know this was a dense lesson; there is a lot to learn with Kubernetes. I hope this at least filled in some of the blanks and gave you a sense of which functionality comes from Kubernetes and which comes from Kubernetes Engine. Thank you so much for watching, and I'll see you in the next lesson.

About the Author

Students: 37142
Courses: 16
Learning paths: 15

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.