
Kubernetes Architecture



The course is part of these learning paths

Building, Deploying, and Running Containers in Production
Introduction to Kubernetes
Duration: 2h 12m


Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.

This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.

The source files used in this course are available in the course's GitHub repository.

Learning Objectives 

  • Describe Kubernetes and what it is used for
  • Deploy single and multiple container applications on Kubernetes
  • Use Kubernetes services to structure N-tier applications 
  • Manage application deployments with rollouts in Kubernetes
  • Ensure container preconditions are met and keep containers healthy
  • Learn how to manage configuration, sensitive, and persistent data in Kubernetes
  • Discuss popular tools and topics surrounding Kubernetes in the ecosystem

Intended Audience

This course is intended for:

  • Anyone deploying containerized applications
  • Site Reliability Engineers (SREs)
  • DevOps Engineers
  • Operations Engineers
  • Full Stack Developers


You should be familiar with:

  • Working with Docker and using it comfortably at the command line


August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics



This lesson will cover Kubernetes architecture. What we cover here will be enough to understand and reason about topics we'll learn later in the course. It is intended to build a strong foundation rather than to be an exhaustive review. 

Kubernetes itself is a distributed system. It introduces its own dialect to the orchestration space, and internalizing the vernacular is an important part of success with Kubernetes. We will define terms as they arise, but note that there is also a Kubernetes glossary available in the Introduction to Kubernetes learning path as a single point of reference for all the terms you need to know. A more comprehensive glossary maintained by Kubernetes is also linked from there. 

A basic grasp of the architecture is also needed to understand how features work under the hood. The Kubernetes cluster is the highest level of abstraction to start with. The term cluster refers to all the machines collectively and can be thought of as an entire running system. The machines in the cluster are referred to as nodes. A node may be a virtual machine or a physical machine. Nodes are characterized as worker nodes and master nodes. Each worker node includes software to run containers managed by the Kubernetes control plane. The control plane itself runs on master nodes. 

The control plane is a set of APIs and software that Kubernetes users interact with. These APIs and software are collectively referred to as master components. The control plane schedules containers onto nodes. The term scheduling does not refer to time in this context. Think of it from a kernel perspective. The kernel schedules processes onto the CPU according to multiple factors. Certain processes need more or less compute, or have different quality-of-service rules, and ultimately the scheduler does its best to ensure that every process runs. Scheduling in Kubernetes likewise refers to the decision process of placing containers onto nodes in accordance with their declared compute requirements. 
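Those declared compute requirements are what the scheduler reads when placing a Pod. As a minimal sketch (the names demo-app, web, and the image tag are hypothetical examples, not from the course), resource requests and limits can be declared like this:

```yaml
# Hypothetical Pod manifest: the scheduler will only place this Pod on a
# node that has at least 128Mi of memory and 250 millicores unreserved.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app          # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.17     # example image
    resources:
      requests:           # minimums; used for scheduling decisions
        memory: "128Mi"
        cpu: "250m"       # 250 millicores = 0.25 of a CPU core
      limits:             # hard caps enforced at runtime
        memory: "256Mi"
        cpu: "500m"
```

Requests influence where a Pod is placed; limits constrain what it may consume once running.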

In Kubernetes containers are grouped into Pods. Pods may include one or more containers. All containers in a Pod run on the same node. The Pod is the smallest building block in Kubernetes. More complex and useful abstractions come on top of Pods. Services define networking rules for exposing Pods to other Pods or exposing Pods to the public Internet. Kubernetes uses deployments to manage deploying configuration changes to running Pods and also horizontal scaling. These are the fundamental terms you need to understand before we can move forward. We'll elaborate on these terms and introduce more terms as we progress through the course. I can't overstate the importance of these terms. I suggest you replay this section as many times as you need until all this information sinks in. 
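To make these abstractions concrete, here is a hedged sketch of a Deployment managing three replicas of a Pod and a Service exposing them. All names, labels, and the image are hypothetical illustrations, not specifics from the course:

```yaml
# Hypothetical Deployment: maintains three identical Pod replicas and
# manages rollouts when the Pod template below changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web            # must match the Pod template labels
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17 # example image
        ports:
        - containerPort: 80
---
# Hypothetical Service: a stable network endpoint that routes traffic to
# any Pod whose labels match the selector.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

The Deployment owns the Pods and handles scaling and rollouts; the Service decouples clients from individual Pods, which may come and go.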

Let's recap what we've learned so far. Kubernetes is a container orchestration tool. A group of nodes forms a Kubernetes cluster. Kubernetes runs containers in groups called Pods. Kubernetes Services expose Pods to the cluster and to the public Internet. Kubernetes Deployments control rollout and rollback of Pods. 

In the next lesson we'll see how to interact with running Kubernetes clusters.

About the Author

Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.