Course Introduction
Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
Course Conclusion
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
Learning Objectives
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Learn how to manage configuration, sensitive, and persistent data in Kubernetes
- Discuss popular tools and topics surrounding Kubernetes in the ecosystem
Intended Audience
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
Prerequisites
You should be familiar with:
- Working with Docker and comfortable using it at the command line
Source Code
The source files used in this course are available here:
Updates
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
May 7th, 2021 - Complete update of this course using the latest Kubernetes version and topics
This lesson will provide a high-level overview of Kubernetes. We will cover what you can do with Kubernetes, including some of the core features that have driven Kubernetes' success. We will also discuss the competitive landscape around Kubernetes. Kubernetes, often abbreviated as K8s, is an open-source container orchestration tool designed to automate the deployment, scaling, and operation of containerized applications.
Kubernetes was born out of Google's experience running workloads in production on their internal Borg cluster manager for well over a decade. It is designed to grow from tens of containers to thousands or even millions. Organizations adopting Kubernetes have increased their velocity, gaining the ability to release faster and recover faster thanks to Kubernetes' self-healing mechanisms. Kubernetes is a distributed system: multiple machines are configured to form a cluster. Machines may be a mix of physical and virtual, they may exist on-prem or in cloud infrastructure, and each may have its own unique hardware configuration.
Kubernetes places containers on machines using scheduling algorithms that consider available compute resources, requested resources, priority, and a variety of other customizable constraints. Kubernetes is also smart enough to move containers to different machines as machines are added or removed. Kubernetes is container runtime agnostic as well, which means you can use Kubernetes with different container runtimes.
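To make that concrete, here is a minimal, hypothetical Pod manifest showing the kind of resource requests and limits the scheduler considers when placing a container on a machine; the names and values are illustrative only.

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-app              # hypothetical name
    spec:
      containers:
      - name: web
        image: nginx:1.21         # any container image
        resources:
          requests:               # the scheduler reserves these when choosing a node
            cpu: "250m"
            memory: "128Mi"
          limits:                 # hard ceilings enforced at runtime
            cpu: "500m"
            memory: "256Mi"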
Kubernetes most commonly uses Docker containers but can also be used with other runtimes, such as rkt (Rocket). This kind of adaptability is a result of Kubernetes' modular design. It has also led to Kubernetes' widespread adoption and made Kubernetes one of the most active open-source projects around. Kubernetes also provides excellent end-user abstractions by using declarative configuration for everything. Engineers can quickly deploy containers, wire up networking, scale, and expose applications to the real world. We'll cover all of these features throughout the lesson.
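As a sketch of that declarative style, the following hypothetical manifests describe a Deployment scaled to three replicas and a Service that exposes it inside the cluster; Kubernetes continuously works to make the running state match what is declared here.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                   # hypothetical name
    spec:
      replicas: 3                 # desired scale; Kubernetes keeps three copies running
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.21
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web                  # routes traffic to the Pods above
      ports:
      - port: 80
        targetPort: 80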
Operations staff are not left in the dark either. Kubernetes can automatically move containers from failed machines to running machines. There are also built-in features for doing maintenance on a particular machine. Multiple clusters can also join up with each other to form a federation, a feature primarily intended for redundancy: if one cluster dies, containers automatically move to another cluster.
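For example, that maintenance workflow is typically driven with kubectl. A minimal sketch, assuming a node named node-1:

    kubectl cordon node-1                      # stop scheduling new containers onto the node
    kubectl drain node-1 --ignore-daemonsets   # evict running Pods so they reschedule elsewhere
    # ...perform maintenance on the machine...
    kubectl uncordon node-1                    # return the node to service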
The following features also contribute to making Kubernetes a top choice for orchestrating containerized applications: automated deployment rollouts and rollbacks, seamless horizontal scaling, secret management, service discovery and load balancing, support for both Linux and Windows containers, simple log collection, stateful application support, persistent volume management, CPU and memory quotas, batch job processing, and role-based access control. With the popularity of containers, there has been a surge in tools to support enterprises adopting containers in production; Kubernetes is just one example.
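A few of those features map directly onto everyday kubectl commands. As a hedged illustration, assuming a Deployment named web already exists:

    kubectl set image deployment/web web=nginx:1.22   # trigger an automated rolling update
    kubectl rollout status deployment/web             # watch the rollout progress
    kubectl rollout undo deployment/web               # roll back if the new version misbehaves
    kubectl scale deployment/web --replicas=5         # seamless horizontal scaling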
So let's compare Kubernetes with some other tools, because now that we know what Kubernetes can do, it's useful to compare one technology to another. We'll compare DCOS, Amazon ECS, and Docker Swarm Mode; each has its own niche and unique strengths. This section will help you understand Kubernetes' approach and decide if it fits your particular use cases.
DCOS, or the Distributed Cloud Operating System, is similar to Kubernetes in many ways. DCOS pools compute resources into a uniform task pool, but the big difference is that DCOS targets many different types of workloads, including, but not limited to, containerized applications. This makes DCOS attractive to organizations that are not using containers for all of their applications. DCOS also includes a package manager to easily deploy other systems like Kafka or Spark. You can even run Kubernetes on DCOS, given its flexibility for different types of workloads.
Amazon ECS, or Elastic Container Service, is AWS's native container orchestration service. ECS allows you to create pools of compute resources and uses API calls to orchestrate containers across them. Compute resources are EC2 instances that you can manage yourself, or you can let AWS manage them with AWS Fargate. ECS is only available inside of AWS and generally has fewer features compared to other open-source tools, but it may be useful for those of you who are deep in the AWS ecosystem.
Lastly, Docker Swarm Mode is the official Docker solution for orchestrating containers across a cluster of machines. Docker Swarm Mode builds a cluster from multiple Docker hosts and distributes containers across them. It shares a similar feature set with Kubernetes and DCOS. Docker Swarm Mode works natively with the docker command, which means that associated tools like Docker Compose can target Swarm Mode clusters without any changes.
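For comparison, a minimal Swarm Mode sketch using only standard docker commands (the service name is hypothetical) looks like this:

    docker swarm init                                      # turn this Docker host into a swarm manager
    docker service create --name web --replicas 3 nginx    # spread three replicas across the swarm
    docker service ls                                      # list services and their replica counts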
Docker Enterprise Edition leverages Swarm Mode to manage an enterprise-grade cluster, and Docker also provides full support for Kubernetes if you want to start out with Swarm and later swap over to Kubernetes. So if you're not already set on using Kubernetes, I would recommend that you conduct your own research to understand each tool and its trade-offs. Cloud Academy has content for each option to help you make the right decision.
In the next lesson, we'll go through some of our options for deploying to Kubernetes. So I'll see you there.
Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in networking and operations in the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan holds a number of specialties, including Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, AWS Solutions Architect, and a certification in project management.