Introduction to Kubernetes

Course Overview

The course is part of these learning paths

Certified Kubernetes Administrator (CKA) Exam Preparation
Introduction to Kubernetes

Contents

Course Introduction and Overview
Production and Course Conclusion
Overview

Difficulty: Advanced
Duration: 1h 28m
Students: 2494

Description

Introduction 
This course provides an introduction to using Kubernetes to deploy and manage containers. 

Learning Objectives 
Be able to recognize and explain the Kubernetes service 
Be able to explain and implement a Kubernetes container
Be able to orchestrate and manage Kubernetes containers 

Prerequisites
This course requires a basic understanding of cloud computing. We recommend completing the Google Cloud Fundamentals course before completing this course. 

Transcript

Updates: At 4:34 Adam mentions an advantage of Docker Swarm is its direct integration into the Docker toolchain. Since then, Docker has also provided direct integration with Kubernetes giving you the choice to deploy using Docker Swarm or Kubernetes.

Hello, and welcome back to the Introduction to Kubernetes course for Cloud Academy. I'm Adam Hawkins, and I'm your instructor for this lesson.

This lesson is a Kubernetes overview. We're covering Kubernetes features, comparing Kubernetes to other similar tools, and discussing architecture, terminology (very importantly), and how to interact with Kubernetes. The objective is to build a strong, foundational understanding and prepare you for the following hands-on lessons. Let's jump into the good stuff: what you can do with Kubernetes.

Kubernetes is an open-source container orchestration tool designed to automate deploying, scaling, and operating containerized applications. Kubernetes was born from Google's 15 years of experience running production workloads. It is designed to grow from tens to thousands, or even millions, of containers. Kubernetes is also container-runtime agnostic, which means you can actually use Kubernetes to run rkt and Docker containers today.

Kubernetes is a distributed system. Multiple machines are configured to form a cluster. More on this term later. Machines may be a mix of physical or cloud infrastructure, each with its own unique hardware configuration. Kubernetes places containers on the appropriate machine according to their declared compute requirements. You can mix critical and best-effort workloads to increase resource utilization. Kubernetes is also smart enough to move containers between machines as machines are added and removed.

Kubernetes also provides excellent end-user abstractions. Kubernetes uses declarative configuration for everything. Engineers can quickly deploy containers, wire up networking, scale, and expose applications to the world. This is the course's primary focus. We'll cover all of these features throughout the lessons. Operations staff are not left out in the cold either. Kubernetes can automatically move containers from dead machines to running machines. There are built-in features for doing maintenance on a particular machine. Multiple clusters may join up to form a federation. This feature is primarily for redundancy: if one cluster dies, containers will automatically move to another cluster. We'll touch on these features more later on in the course.

Let's enumerate some specific features before comparing Kubernetes to other tools. Building clusters from a mix of physical and virtual infrastructure. Automated deployment rollout and rollback. Seamless horizontal scaling. Secret management. Service discovery and load balancing. Simple log collection. Stateful application support. Persistent volume management. CPU and memory quotas. Batch job processing. Role based access control. And, also, high availability features.

Let's compare Kubernetes with other tools, now that we know what Kubernetes can do. Container orchestration is the hottest topic in the industry. Previously, the industry was focused on pushing container adoption. Naturally, the next step is to put containers into production. There are many tools in this area. Sometimes, comparing one technology to another is the best way to understand it and also learn about others in the process. We'll compare DCOS, Amazon's ECS, and Docker swarm mode. Each has its own niche and unique strength. This section should help you understand Kubernetes' approach and decide if it fits your particular use case.

DCOS, or Datacenter Operating System, is similar to Kubernetes in many ways. DCOS pools compute resources into a uniform task pool. The big difference here is that DCOS targets many different types of workloads, including, but not limited to, containerized applications. This makes DCOS attractive to organizations that are not using containers for all of their applications. DCOS also includes a kind of package manager to easily deploy systems like Kafka or Spark. You can even run Kubernetes on DCOS, given its flexibility for different types of workloads.

ECS, or the Elastic Container Service, is AWS' entry into the container orchestration space. ECS allows you to create pools of EC2 instances and uses API calls to orchestrate containers across them. It's only available inside AWS and is generally less feature-complete compared to other open-source tools. It may be useful for those of you who are deep into the AWS ecosystem.

Docker Swarm Mode is the official orchestration tool from Docker, Inc. Docker swarm mode builds a cluster from multiple docker hosts and distributes containers across them. It shares a similar feature set with Kubernetes or DCOS with one notable exception. Docker swarm mode is the only tool to work natively with the Docker command. This means associated tools like docker-compose can target swarm mode clusters without any changes.

Here are my recommendations when considering any of these tools. Use Kubernetes if you're only working with containerized applications, whether or not they are Docker containers. Use DCOS if you have a mix of containerized and non-containerized applications. Use ECS if you enjoy AWS products and first-party integrations. Use Docker Swarm if you want a first-party solution with direct integration into the wider Docker toolchain. These recommendations are just a starting point. I recommend you conduct your own research to understand each tool, its trade-offs, and its fitness for your use case. This is all the time we'll spend discussing these other tools. Time to move forward to Kubernetes specifics.

Kubernetes, itself, is a distributed system. It introduces its own vernacular to the orchestration space. Internalizing the vernacular is critical to success with Kubernetes. You must also understand the architecture to understand how features work under the hood. We'll take a top-down approach to these concepts. The Kubernetes cluster is the highest-level concept, so that's the place to start.

Kubernetes clusters are composed of nodes. The term cluster refers to the aggregate of all of the nodes. Also, the term cluster refers to the entire running system. A node, previously called a minion, is a worker machine in Kubernetes. A node may be a VM or physical machine. Each node includes software to run containers managed by the Kubernetes control plane. The control plane is a set of APIs and software that Kubernetes users interact with. The control plane runs on master nodes.

The control plane schedules containers onto nodes. The term scheduling does not refer to time in this context. Think of it from a kernel perspective. The kernel schedules processes onto the CPU according to multiple factors. Certain processes need more or less compute, or have different quality-of-service rules. Ultimately, the scheduler does its best to ensure that every process runs. Scheduling, in this case, refers to the decision process of placing containers onto nodes in accordance with their declared compute requirements. Containers are grouped into pods. Pods may include one or more containers. All containers in a pod run on the same node. The pod is the lowest building block in Kubernetes. More complex and useful abstractions are built on top of pods. Services define networking rules for exposing pods to other pods, or exposing pods to the public internet. Kubernetes uses deployments to manage deploying configuration changes to running pods, and also horizontal scaling. These are the fundamental terms you need to understand before we can move forward. We'll elaborate on these terms and introduce more terms as we progress throughout the course. I cannot overstate the importance of these terms. I suggest you replay this section as many times as you need, until all this information sinks in.
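To make the pod concept concrete, here is a minimal pod manifest sketch. The name, label, and image below are illustrative placeholders, not something from the course:

```yaml
# A minimal pod running a single nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # illustrative name
  labels:
    app: hello           # illustrative label; used later for selection
spec:
  containers:
  - name: web
    image: nginx:1.25    # any container image works here
    ports:
    - containerPort: 80
```

In practice you rarely create bare pods like this; deployments (covered below) create and manage pods for you.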

We've introduced many concepts and terms. Let's reiterate them and introduce more in a glossary fashion.

Cluster: a group of nodes configured to run a functioning Kubernetes system. Nodes can be a mix of physical or virtual machines running on public or private clouds, or even in on-premise data centers. This term refers to the aggregate of all nodes and not individual nodes.

Pod: a group of one or more containers running on a single node.

Service: a networking abstraction that defines rules on how to access pods determined by a selector. Do not confuse this concept with things like back-end service or application. Kubernetes services are about networking.

Selector: a set of rules to match resources based on metadata.

Label: key/value pairs attached to objects, such as pods. Labels specify identifying attributes meaningful to users, but they do not imply semantics to Kubernetes. For example, an environment label may be set to production.
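Labels and selectors are how services find the pods they route traffic to. A minimal sketch, assuming pods carrying an illustrative app: hello label (the names and ports are placeholders):

```yaml
# A service forwarding traffic to any pod labeled app: hello.
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello           # matches pods carrying this label
  ports:
  - port: 80             # port exposed by the service
    targetPort: 80       # port on the selected pods
```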

Annotations: arbitrary, non-identifying metadata for retrieval by API clients such as tools and libraries. Beta API functionality may be activated by setting specific annotations.

Deployment: a declarative template for creating and scaling pods.

Replica Set: coordinates pod creation, deletion, and updates during deployments.

Volume: a stateful block store for use with ephemeral pods. Volumes may have multiple back-ends, such as local disk, GCE persistent disks, or other third-party systems. Do not confuse this with a database. A volume is simply a place to write persisted data.

Secret: sensitive information, such as passwords, OAuth tokens, and ssh keys.

StatefulSet: a pod with guarantees on deployment and scaling order.

Request: the desired amount of CPU or memory for a container in a pod.

Resource: any individual Kubernetes item, such as a deployment, pod, service, or secret. Going forward, you'll hear me repeat the term resource to refer to any type of Kubernetes item. It's very important to internalize this term as we move through the course.

Name: a unique identifier for a particular resource.

Namespace: a group of unique names. You may also hear this referred to as a virtual cluster.

Now we have a cool one: k8s. This is an abbreviation for Kubernetes (the 8 stands for the eight letters between the K and the s). It's commonly used in documentation and when naming things. You may also hear me slip it in occasionally.

Whoa, let's take a breather. That is a lot of terms. We'll need them all to complete the course. Some are more common and thus more important than the others. Please prioritize the following terms: cluster, pod, deployment, service, and resource. These terms are just words for us right now. Let's turn them into something concrete.
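One way to turn several of these terms into something concrete is a deployment manifest, which ties together pods, labels, selectors, and replicas. This is a hedged sketch with illustrative names and an illustrative image, not an example from the course:

```yaml
# A deployment that keeps three replicas of a pod template running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3            # desired number of pod copies
  selector:
    matchLabels:
      app: hello         # the deployment manages pods with this label
  template:              # the pod template stamped out for each replica
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

Under the hood, the deployment creates a replica set, which in turn creates and maintains the pods.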

There are two primary ways to interact with Kubernetes. There's a web dashboard and the kubectl command. This course uses the kubectl command exclusively, so we'll focus more on that. You can check out the web dashboard after this lesson on your own time.

Your success with Kubernetes directly correlates with your kubectl skill. You can accomplish all your day-to-day work using kubectl. It's vital to learn this command because it manages all different types of Kubernetes resources, and provides debugging and introspection features. Luckily, kubectl follows an easy-to-understand design pattern. When you learn to manage one resource, you learn to manage them all.

Let's introduce some common subcommands. kubectl create: create a new Kubernetes resource using data specified in a file. This command can create any type of Kubernetes resource. Data may be specified in YAML or JSON. We'll use YAML throughout the course because it supports comments. I recommend you do the same. kubectl delete: delete a particular resource. kubectl get: return a list of all resources of a specified type. Example: kubectl get pods. kubectl describe: print detailed information about a particular resource or a list of resources. Here's an example: kubectl describe pod server. kubectl logs: print container logs for a particular pod, or for a specific container inside a multi-container pod. We'll go deeper into these commands as the course progresses. This is just enough to kickstart you for the next lesson.
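The subcommands above fit together into a typical workflow. This sketch assumes a running cluster and a configured kubectl; the file and resource names are placeholders:

```shell
# Create resources from a YAML manifest file.
kubectl create -f pod.yaml

# List all pods in the current namespace.
kubectl get pods

# Print detailed information about one pod.
kubectl describe pod server

# Print container logs from that pod.
kubectl logs server

# Delete the resource when you're done.
kubectl delete pod server
```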

This concludes our overview. This was a lot of information to take in, but I promise you it will be worth it all in the end. Let's recap what we learned so far. Kubernetes is a container orchestration tool. A group of nodes form a Kubernetes cluster. Kubernetes runs containers in groups called pods. Kubernetes services expose pods to the cluster and to the public internet. Kubernetes deployments control rollout and rollback of pods. The kubectl command is the primary way to interact with Kubernetes.

I suggest you read the official documentation for more detailed information. The user guides give you fantastic, high-level storytelling. Also, the official architecture guide is much more detailed than what we could cover in this lesson. These are fantastic supplemental course materials. I specifically recommend the Kubernetes Glossary and the Kubernetes Architecture Guide. You can find links on this slide.

Thanks for being patient while we covered the theory. I know, if you're like me, you're probably quite keen to just get down and dirty building something. Well, that's exactly what we're gonna do in the next lesson. We'll deploy our first application to Kubernetes. See you then.

About the Author

Students: 4750
Courses: 4
Learning paths: 1

Adam is a backend/service engineer turned deployment and infrastructure engineer. His passion is building rock-solid services and equally powerful deployment pipelines. He has been working with Docker for years and leads the SRE team at Saltside. Outside of work he's a traveller, beach bum, and trance addict.
