The course is part of these learning paths
Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes, including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
The source files used in this course are available in the course's GitHub repository.
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Manage configuration data, secrets, and persistent data in Kubernetes
- Discuss popular tools and topics surrounding Kubernetes in the ecosystem
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
You should be familiar with:
- Working with Docker, and comfortable using it at the command line
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
About the Author
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.
Amazon ECS: https://aws.amazon.com/ecs
This lesson will provide a high-level overview of Kubernetes. We will cover what you can do with Kubernetes including some of the core features that have driven Kubernetes' success. We will also discuss the competitive landscape around Kubernetes.
Kubernetes, often abbreviated as k8s, is an open-source orchestration tool designed to automate deploying, scaling, and operating containerized applications. Kubernetes was born out of Google's experience running production workloads on its internal Borg cluster manager for over a decade. It is designed to scale from tens of containers to thousands or even millions. Organizations adopting Kubernetes increase their velocity by releasing faster and recovering faster thanks to Kubernetes' self-healing mechanisms; productivity increases and costs decrease.
Kubernetes is a distributed system. Multiple machines are configured to form a cluster. Machines may be a mix of physical and virtual, may exist on-premises or in cloud infrastructure, and may each have a unique hardware configuration. Kubernetes places containers on machines using scheduling algorithms that consider available compute resources, requested resources, priority, and a variety of customizable constraints. Kubernetes is also smart enough to move containers between machines as machines are added and removed. Kubernetes is container runtime agnostic, which means you can use Kubernetes with different container runtimes. Kubernetes most commonly uses Docker containers but can also be used with rkt containers, for example. This kind of adaptability is a result of Kubernetes' modular design. It has also led to Kubernetes' widespread adoption and made Kubernetes one of the most active open-source projects around.

Kubernetes also provides excellent end-user abstractions. Kubernetes uses declarative configuration for everything: engineers declare the desired state, and Kubernetes works to achieve it. Engineers can quickly deploy containers, wire up networking, scale applications, and expose them to the world. We'll cover all of these features throughout the lessons. Operations staff are not left out in the cold either. Kubernetes can automatically move containers from failed machines to running machines. There are also built-in features for performing maintenance on a particular machine. Multiple clusters may join up to form a federation. This feature is primarily for redundancy: if one cluster dies, containers automatically move to another cluster.
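To give a flavor of that declarative model, a manifest like the following describes a desired state, and Kubernetes continually works to realize it. This is a hedged illustration (the `hello-web` name and image are hypothetical examples, not from the course materials), and the resource requests are the kind of input the scheduler considers when placing containers:

```yaml
# Hypothetical example: a Deployment declaring three replicas of an nginx container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3                # desired state: keep three copies running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.17
        resources:
          requests:          # the scheduler weighs these requests against
            cpu: 100m        # available compute on each machine
            memory: 128Mi
```

If a machine running one of these replicas fails, Kubernetes notices the actual state no longer matches the declared state and starts a replacement elsewhere.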
The following Kubernetes features also contribute to making Kubernetes a top choice for orchestrating containerized applications:
- Automated deployment rollout and rollback
- Seamless horizontal scaling
- Secret management
- Service discovery and load balancing
- Support for both Linux and Windows containers
- Simple log collection
- Stateful application support
- Persistent volume management
- CPU and memory quotas
- Batch job processing
- Role-based access control
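As a sketch of two of these features, automated rollouts and container health checks are both declared in a workload's manifest. The fragment below uses hypothetical names and values (it is not from the course materials) to show how a rolling update strategy and a liveness probe could look:

```yaml
# Hypothetical fragment of a Deployment spec.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one pod down during a rollout
      maxSurge: 1            # at most one extra pod during a rollout
  template:
    spec:
      containers:
      - name: web
        image: nginx:1.17
        livenessProbe:       # Kubernetes restarts the container if this check fails
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
```

With a configuration along these lines, updating the image triggers a gradual rollout, and a failed rollout can be reversed to the previous revision.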
With the popularity of containers, there has been a surge in tools to support enterprises adopting containers in production. Kubernetes is just one example. Now that we know what Kubernetes can do, let's compare it with other tools. Sometimes comparing one technology to another is the best way to understand it, and you learn about the others in the process. We'll compare DC/OS, Amazon ECS, and Docker swarm mode. Each has its own niche and unique strengths. This section should help you understand Kubernetes' approach and decide if it fits your particular use case.
DC/OS, or Distributed Cloud Operating System, is similar to Kubernetes in many ways. DC/OS pools compute resources into a uniform task pool. The big difference is that DC/OS targets many different types of workloads, including, but not limited to, containerized applications. This makes DC/OS attractive to organizations that are not using containers for all of their applications. DC/OS also includes a package manager to easily deploy systems like Kafka and Spark. You can even run Kubernetes on DC/OS given its flexibility for different types of workloads.
Amazon ECS, or Elastic Container Service, is AWS's first entry into the container orchestration space. ECS allows you to create pools of compute resources and uses API calls to orchestrate containers across them. Compute resources are EC2 instances that you can manage yourself or let AWS manage for you using AWS Fargate. ECS is only available inside AWS, so it may be most useful for those who are deep into the AWS ecosystem.
Docker swarm mode is the official Docker solution for orchestrating containers across a cluster of machines. Docker swarm mode builds a cluster from multiple Docker hosts and distributes containers across them. It shares a similar feature set with Kubernetes. Docker swarm mode works natively with the Docker command line, which means associated tools like Docker Compose can target swarm mode clusters without any changes. Docker's enterprise edition leverages swarm mode to manage an enterprise-grade cluster, and Docker also provides full support for Kubernetes if you start with swarm mode and later decide to switch.
If you aren't already set on using Kubernetes, I recommend you conduct your own research to understand each tool, its trade-offs, and its fitness for your use case. Cloud Academy has content for each option to help you make the right decision.
The next lesson will go through some of your options for deploying Kubernetes. I'll meet you there.