Kubernetes has become one of the most common container orchestration platforms. It has regular releases, a wide range of features, and is highly extensible. Managing a Kubernetes cluster requires a lot of domain knowledge, which is why services such as GKE exist. Certain aspects of a Kubernetes cluster vary based on the underlying implementation.
In this course, we’ll explore some of the ways that GKE implements a Kubernetes cluster. Having a basic understanding of how things are implemented will set the stage for further learning.
- Learn how Google implements a Kubernetes cluster
- Learn how GKE implements networking
- Learn how GKE implements logging and monitoring
- Learn how to scale both nodes and pods
- Engineers looking to understand basic GKE functionality
To get the most out of this course, you should have a general knowledge of GCP, Kubernetes, Docker, and high availability.
Hello and welcome. My name is Ben Lambert, and I'll be your instructor for this course. This course is called Introduction to GKE, and as you might expect, we'll be covering some of the core functionality provided by GKE. This course is going to be largely conceptual, with demonstrations where it makes sense. The goal for this course is to introduce you to some of the features and functionality that are provided by GKE.
Kubernetes has become the default container orchestration platform for many engineers. It offers a lot of features, it's actively maintained, and most cloud vendors provide a managed implementation. Google's managed implementation is called Google Kubernetes Engine or simply GKE.
During this course, we'll start out with a high-level review of the GKE architecture, which will provide a baseline for the rest of the course. After that, we'll talk about what's involved in the cluster creation process so that we understand some of the different settings required to create a cluster. The point of a cluster is to deploy workloads, which is why the following lesson will cover workloads as well as a brief review of node pools. The lesson after that will cover pod networking, which has its own GKE-specific implementation. Then we're going to talk about how GKE implements volumes and persistent volumes. We'll follow that up with a high-level review of some of the different security aspects, then move on to cover some of the different aspects of logging and monitoring, and then on to a high-level review of scalability. After all that, we'll cap the course off with a brief demo in the console to showcase some of the different functionality. When learning about a managed service such as GKE, there's a lot of knowledge overlap with the base implementation, which means we'll be covering general Kubernetes functionality as well as GKE-specific functionality.
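To make the cluster creation settings mentioned above a little more concrete, here is a hedged sketch of creating a cluster with the `gcloud` CLI. The cluster name, project ID, zone, and sizing values are illustrative placeholders, not recommendations from the course:

```shell
# Sketch: create a GKE cluster with a few commonly set options.
# All values below (name, project, zone, node count, machine type)
# are placeholder assumptions -- adjust them for your environment.
gcloud container clusters create demo-cluster \
  --project my-project-id \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type e2-medium \
  --release-channel regular

# Fetch credentials so kubectl can talk to the new cluster.
gcloud container clusters get-credentials demo-cluster \
  --project my-project-id \
  --zone us-central1-a
```

Each of these flags maps to a setting you'll see in the console's cluster creation form, which the demo at the end of the course walks through.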
Now, the prime focus is GKE, though to really teach someone about GKE, it's difficult to fully separate it from Kubernetes itself. So before taking this course, there are some prerequisites, which include an understanding of Kubernetes. Understanding Kubernetes implies that you should be familiar with Docker, with Google Cloud's base services such as Compute Engine, and with concepts surrounding high availability. Engineers in different job roles may interact with GKE in different ways. Being an introductory course, we're not going to approach this from the lens of a specific role, which means this course is intended for any engineer who wants a better understanding of GKE. So if you're interested in learning more about GKE, then I will see you in the next lesson!
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.