Kubernetes has become one of the most common container orchestration platforms. Google Kubernetes Engine makes managing a Kubernetes cluster much easier by handling most system management tasks for you. It also offers more advanced cluster management features. In this course, we’ll explore how to set up a cluster using GKE as well as how to deploy and run container workloads.
Learning Objectives
- What Google Kubernetes Engine is and what it can be used for
- How to create, edit, and delete a GKE cluster
- How to deploy a containerized application to GKE
- How to connect to a GKE cluster using kubectl
Intended Audience
- Developers of containerized applications
- Engineers responsible for deploying to Kubernetes
- Anyone preparing for a Google Cloud certification
Prerequisites
- Basic understanding of Docker and Kubernetes
- Some experience building and deploying containers
Google Kubernetes Engine (or GKE for short) is a fully managed Kubernetes service. That means there is no need to provision a bunch of VMs and manually install Kubernetes yourself. Everything is already set up and ready to go. GKE also greatly simplifies your Kubernetes operations: it takes responsibility for most system management tasks, and it provides you with advanced cluster management features, including automatic node scaling, repair, and upgrades.
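Just to make that concrete, here is a minimal sketch of what creating and deleting a GKE cluster looks like with the gcloud CLI. The cluster name, zone, and node count below are made-up examples, not values from this course:

```bash
# Create a small GKE cluster (name, zone, and node count are hypothetical)
gcloud container clusters create demo-cluster \
    --zone us-central1-a \
    --num-nodes 3

# Delete the cluster (and all of its nodes) when you no longer need it
gcloud container clusters delete demo-cluster --zone us-central1-a
```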
So hopefully you already understand the basics of what Kubernetes is and how it works. Just in case you don’t, in this section I am going to review basic Kubernetes principles. Now, I will be glossing over a lot of details, so if you want something more in-depth, you might want to look at a dedicated Kubernetes course. However, this lesson should provide enough information to get you started.
Kubernetes is an open-source container orchestration system. That means it is designed for running many different containerized applications at once. So if you just want to run one or two containers, then Kubernetes is probably overkill. However, it is extremely useful for building and maintaining complicated microservice architectures that can require hundreds or thousands of containers. So if you are developing something like that, then Kubernetes can make deploying and managing your apps much easier.
To provide better scaling, Kubernetes uses a distributed system. That means your containers are deployed onto special virtual machines called “nodes”. And these nodes are grouped together into clusters. So instead of trying to run everything together on a single machine, clusters spread out your containers across different nodes. This architecture provides many benefits. If a single node crashes or becomes unresponsive, then you still have other nodes that can take over. This also means if you need to make a change or perform an update, you can roll that out one node at a time to avoid service disruption or downtime. And of course, you can easily add or remove nodes as needed.
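For example, once a GKE cluster exists, you can point kubectl at it, see the nodes it contains, and grow or shrink it. A rough sketch, reusing the hypothetical cluster name and zone from above:

```bash
# Fetch credentials so kubectl can talk to the cluster
gcloud container clusters get-credentials demo-cluster --zone us-central1-a

# List the nodes (VMs) that make up the cluster
kubectl get nodes

# Add or remove nodes by resizing the cluster
gcloud container clusters resize demo-cluster --zone us-central1-a --num-nodes 5
```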
You can think of a cluster as being somewhat similar to a managed instance group. To run a single instance of a single application, you can use a single VM. But if you need to run multiple applications or multiple instances of the same application, then a managed instance group would be better. A Kubernetes cluster works in much the same way. A cluster is a collection of many different nodes. In GKE, each node is a VM that is optimized for running containers.
So it is up to you to choose how many clusters you need. It is pretty common to create a cluster for each environment. For example, you may want to have a “production” cluster for running your live systems, and a “staging” cluster to use for testing. Some people also create different clusters for different projects or teams. The choice is up to you. Generally, you will group containers together in the same cluster when they rely upon each other to work.
Now a Kubernetes cluster consists of many different components. Understanding everything going on in there is pretty complicated. However, at a high level, we can split a cluster into two main pieces: the control plane and the worker nodes.
As I mentioned before, the nodes are used to run containerized applications. Every cluster has at least one worker node, but usually many more. To help you manage all those nodes, you also have the control plane. The control plane is what orchestrates all the work in a cluster. Essentially, you tell the control plane what to do, and it figures out how to split up the work among the nodes. The control plane handles all internal communication, and it monitors the health of the nodes so that it can add or remove nodes as needed, as well as upgrade or repair them when there is a problem.
The control plane contains a set of APIs that Kubernetes users and nodes interact with. It is the single source of truth regarding the cluster’s state. When you want to make any change or check the status of the cluster, you talk to the control plane, not the nodes.
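For instance, when you deploy an application, you hand the request to the control plane, and it schedules the work onto nodes for you. Here is a rough sketch using kubectl; the deployment name and container image are made-up examples:

```bash
# Ask the control plane to run a containerized app on the cluster
kubectl create deployment hello-app --image=gcr.io/my-project/hello-app:v1

# Scale out; the control plane picks which nodes run the extra replicas
kubectl scale deployment hello-app --replicas=3

# Query the control plane for the cluster's current state
kubectl get deployments
kubectl get pods
```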
So to summarize: Kubernetes is a system used to orchestrate running many different containers at once. You use it to create clusters, which contain a control plane and some worker nodes. The control plane provides the interface to the cluster, so you can make changes and check the current status. The nodes are used for running the containers and doing the actual work.
Now as I previously mentioned, there is a lot more to Kubernetes. But this is as deep as we need to go for this course.
Daniel began his career as a Software Engineer, focusing mostly on web and mobile development. After twenty years of dealing with insufficient training and fragmented documentation, he decided to use his extensive experience to help the next generation of engineers.
Daniel has spent his most recent years designing and running technical classes for both Amazon and Microsoft. Today at Cloud Academy, he is working on building out an extensive Google Cloud training library.
When he isn’t working or tinkering in his home lab, Daniel enjoys BBQing, target shooting, and watching classic movies.