Kubernetes has become one of the most common container orchestration platforms. Google Kubernetes Engine makes managing a Kubernetes cluster much easier by handling most system management tasks for you. It also offers more advanced cluster management features. In this course, we’ll explore how to set up a cluster using GKE as well as how to deploy and run container workloads.
Learning Objectives
- What Google Kubernetes Engine is and what it can be used for
- How to create, edit, and delete a GKE cluster
- How to deploy a containerized application to GKE
- How to connect to a GKE cluster using kubectl
Intended Audience
- Developers of containerized applications
- Engineers responsible for deploying to Kubernetes
- Anyone preparing for a Google Cloud certification
Prerequisites
- Basic understanding of Docker and Kubernetes
- Some experience building and deploying containers
So the primary function of Kubernetes is to run containerized applications. Now I say “containerized applications” instead of “containers” for a reason. An application might be represented with a single container. However, this is not always the case. Some applications might be composed of multiple containers. So in the Kubernetes world, we don’t typically talk about “containers”. Instead, we usually talk about pods and workloads.
In Kubernetes, the closest thing you have to a container is called a “pod”. A pod is basically one or more containers bundled together, along with the specification for how to run them. Now most pods only have a single container. So if it helps, you can think of the terms “pod” and “container” as mostly interchangeable. However, you should be aware that it is possible for a pod to contain several containers. The containers in a pod share storage and network resources. This is useful when you have several containers that need to run together and cannot operate independently.
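To make that concrete, here is a minimal sketch of a multi-container pod manifest. The names and the sidecar's purpose are illustrative assumptions, not from the course; the key point is that both containers run together and share the pod's network and volumes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger        # hypothetical pod name
spec:
  volumes:
  - name: shared-logs          # volume shared by both containers
    emptyDir: {}
  containers:
  - name: web                  # main application container
    image: nginx:1.25
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-agent            # sidecar that reads the same volume
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
```

Because both containers sit in the same pod, the sidecar can read the web server's log files directly and could also reach it over localhost.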
In Kubernetes, you generally do not directly create pods. Instead, you define workloads and let those create the pods for you. A workload represents an application that runs on Kubernetes. Workloads are objects that define the deployment rules for pods. So you can define things like: which pods should be deployed, how many copies should be created, and how they should run. Each workload can have different hardware requirements. Some might require a higher CPU or more memory, and others could require something special like access to a GPU.
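The ideas above map directly onto a Deployment, the most common workload type. This is a sketch rather than anything specific to the course: the name and label are assumptions, while the image is Google's public hello-app sample. Note how the manifest answers the questions from the paragraph above: which pods to deploy (the template), how many copies (replicas), and what hardware they need (resource requests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web              # hypothetical workload name
spec:
  replicas: 3                  # how many copies of the pod to run
  selector:
    matchLabels:
      app: hello-web
  template:                    # the pod that this workload creates
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        resources:
          requests:            # hardware requirements for scheduling
            cpu: "250m"
            memory: "128Mi"
```

A workload with special requirements, such as GPU access, would request that in the same resources section, and Kubernetes would only schedule its pods onto nodes that can satisfy the request.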
So this might be a little confusing at first. You want to run containers, but Kubernetes wants you to work with pods and workloads instead. This abstraction was necessary so that Kubernetes could be as flexible as possible. In the beginning, you will probably be creating workloads that involve a single pod and a single container. However, the abstraction also lets you do things like run copies of a pod across multiple nodes, so that you have multiple copies of your container. And this provides greater redundancy and scalability.
Just remember, when working with a Kubernetes cluster, you are going to be deploying workloads. A workload represents an application you wish to run and can be composed of one or more pods. A pod contains one or more containers that share storage and networking.
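As a rough sketch of how this plays out on GKE, the end-to-end flow looks something like the following. The cluster name, zone, and manifest filename are illustrative assumptions, and these commands require the gcloud CLI, kubectl, and an authenticated Google Cloud project:

```
# Create a small GKE cluster (name, zone, and size are illustrative)
gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 2

# Fetch credentials so kubectl can talk to the new cluster
gcloud container clusters get-credentials demo-cluster --zone us-central1-a

# Deploy a workload from a manifest file and check the pods it created
kubectl apply -f deployment.yaml
kubectl get pods
```

Everything after get-credentials is plain Kubernetes: once kubectl is pointed at the cluster, deploying workloads to GKE works the same as on any other Kubernetes cluster.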
Again, this is just scratching the surface. But you should now have enough Kubernetes knowledge to get started working with GKE.
Daniel began his career as a Software Engineer, focusing mostly on web and mobile development. After twenty years of dealing with insufficient training and fragmented documentation, he decided to use his extensive experience to help the next generation of engineers.
Daniel has spent his most recent years designing and running technical classes for both Amazon and Microsoft. Today at Cloud Academy, he is working on building out an extensive Google Cloud training library.
When he isn’t working or tinkering in his home lab, Daniel enjoys BBQing, target shooting, and watching classic movies.