
Kubernetes Architecture

Overview
Difficulty
Beginner
Duration
2h 30m
Students
15290
Ratings
4.4/5
Description

Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.

This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.

Learning Objectives 

  • Describe Kubernetes and what it is used for
  • Deploy single and multiple container applications on Kubernetes
  • Use Kubernetes services to structure N-tier applications 
  • Manage application deployments with rollouts in Kubernetes
  • Ensure container preconditions are met and keep containers healthy
  • Learn how to manage configuration, sensitive, and persistent data in Kubernetes
  • Discuss popular tools and topics surrounding Kubernetes in the ecosystem

Intended Audience

This course is intended for:

  • Anyone deploying containerized applications
  • Site Reliability Engineers (SREs)
  • DevOps Engineers
  • Operations Engineers
  • Full Stack Developers

Prerequisites

You should be familiar with:

  • Working with Docker and using it comfortably at the command line

Source Code

The source files used in this course are available here:

Updates

August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics

May 7th, 2021 - Complete update of this course using the latest Kubernetes version and topics


Transcript

This lesson covers Kubernetes architecture. What we cover here will be enough to understand and reason about the topics we'll learn later in this course; it is intended to build a strong foundation rather than to be an exhaustive review. Kubernetes is itself a distributed system, and it introduces its own dialect to the orchestration space. Internalizing that vocabulary is an important part of succeeding with Kubernetes. We will define terms as they arise, but know that there is also a Kubernetes glossary available in the Introduction to Kubernetes learning path, giving you a single point of reference for the terms you need to know; a more comprehensive glossary maintained by the Kubernetes project is linked from there as well.

You also need a basic understanding of the architecture to see how features work under the hood. The Kubernetes cluster is the highest level of abstraction, so we'll start there. A cluster is composed of machines called nodes; the term cluster refers to all of those machines collectively and can be thought of as the entire running system.

The machines in the cluster are referred to as nodes. A node may be a virtual machine or a physical machine. Nodes are categorized as worker nodes or master nodes. Each worker node includes the software needed to run containers managed by the Kubernetes control plane, and the control plane runs on the master nodes. The control plane is the set of APIs and software that Kubernetes users interact with; these APIs and software are collectively referred to as master components.

The control plane schedules containers onto nodes. The term scheduling does not refer to time in this context. Think of it from a kernel's perspective: the kernel schedules processes onto CPUs according to multiple factors, since certain processes need more or less compute or may have different quality-of-service rules. Ultimately, the scheduler does its best to ensure that every container runs. Scheduling here refers to the decision process of placing containers onto nodes in accordance with their declared compute requirements.
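As a concrete sketch of what "declared compute requirements" look like (this manifest is not from the course itself, and the names are hypothetical), a Pod can state resource requests that the scheduler uses when choosing a node:

```yaml
# Hypothetical Pod manifest illustrating declared compute requirements.
# The scheduler places this Pod onto a node with enough free CPU and memory
# to satisfy the requests below.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod      # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:
        cpu: "250m"      # a quarter of one CPU core
        memory: "128Mi"
```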

In Kubernetes, containers are grouped into Pods. A Pod may include one or more containers, and all containers in a Pod run on the same node. The Pod is the smallest building block in Kubernetes; more complex and useful abstractions sit on top of Pods. Services define networking rules for exposing Pods to other Pods or to the internet. Kubernetes also uses Deployments to manage deployment configuration, changes to running Pods, and horizontal scaling. These are fundamental terms you need to understand before we can move forward.
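To make these abstractions concrete, here is a minimal sketch (again hypothetical, not from the course) of a Deployment that manages three replicas of a Pod template, paired with a Service that exposes those Pods inside the cluster:

```yaml
# Hypothetical Deployment: manages the Pod template and its replica count,
# and handles rollouts when the template changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: nginx:1.21
        ports:
        - containerPort: 80
---
# Hypothetical Service: defines the networking rule that exposes the
# Deployment's Pods to other Pods in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example       # routes traffic to Pods carrying this label
  ports:
  - port: 80
    targetPort: 80
```

The Service finds its Pods by label selector rather than by name, which is why the Deployment's Pod template and the Service's selector share the `app: example` label.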

We'll elaborate on these terms and introduce more as we progress through the course, but I cannot overstate their importance. I suggest you replay this section as many times as you need until all of this information sinks in. Let's recap what we've learned so far. Kubernetes is an orchestration tool. A group of nodes forms a Kubernetes cluster. Kubernetes runs containers in groups called Pods. Kubernetes Services expose Pods to the cluster as well as to the public internet. And Kubernetes Deployments control rollout and rollback of Pods.

In the next lesson, we're going to be seeing how to interact with Kubernetes clusters.

About the Author
Avatar
Jonathan Lewey
DevOps Content Creator
Students
18401
Courses
8
Learning Paths
3

Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in the networking and operations side of the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan holds a number of specialties, including Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, and AWS Solutions Architect certifications, and is certified in Project Management.