
Deploying Kubernetes



The course is part of these learning paths

Building, Deploying, and Running Containers in Production
Introduction to Kubernetes
Duration: 2h 12m


Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes, including what it is and how to use it.

This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.

The source files used in this course are available in the course's GitHub repository.

Learning Objectives 

  • Describe Kubernetes and what it is used for
  • Deploy single and multiple container applications on Kubernetes
  • Use Kubernetes services to structure N-tier applications 
  • Manage application deployments with rollouts in Kubernetes
  • Ensure container preconditions are met and keep containers healthy
  • Manage configuration data, sensitive data, and persistent data in Kubernetes
  • Discuss popular tools and topics surrounding Kubernetes in the ecosystem

Intended Audience

This course is intended for:

  • Anyone deploying containerized applications
  • Site Reliability Engineers (SREs)
  • DevOps Engineers
  • Operations Engineers
  • Full Stack Developers


You should be familiar with:

  • Working with Docker and using it at the command line


Updates

August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics



Related Links

Docker Desktop for Mac and Windows: https://www.docker.com/products/docker-desktop

minikube: https://github.com/kubernetes/minikube

kubeadm: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Kubernetes in Docker (kind): https://github.com/kubernetes-sigs/kind

Amazon EKS: https://aws.amazon.com/eks/

AKS (Azure): https://azure.microsoft.com/en-ca/services/kubernetes-service/

GKE (Google Cloud): https://cloud.google.com/kubernetes-engine/

kubespray: https://github.com/kubernetes-sigs/kubespray

kops: https://github.com/kubernetes/kops

OpenShift: https://www.openshift.com/

Pivotal Container Service: https://pivotal.io/platform/pivotal-container-service

Rancher: https://rancher.com/

GKE On-Prem: https://cloud.google.com/gke-on-prem/

Azure Stack: https://azure.microsoft.com/en-ca/overview/azure-stack/


Once you've decided on Kubernetes, you have a variety of deployment methods to choose from. This course focuses on core Kubernetes concepts, but because it is only natural to ask how to get started, this short lesson discusses some of your options for deploying Kubernetes.

For development and test scenarios, you can run Kubernetes on a single machine. Docker Desktop for Mac and Windows includes support for running Kubernetes on a local machine in a single-node configuration. Just make sure Kubernetes is enabled in the settings. This is the easiest way to get started if you are already using Docker. Another option is to use minikube, which supports Linux in addition to Mac and Windows. Lastly, Linux systems can use kubeadm to set up a single-node cluster. Kubeadm is a building block for creating Kubernetes clusters, but it can effectively create a single-node cluster on its own. Beware that kubeadm installs Kubernetes on the host system itself rather than in a virtual machine, unlike the prior methods.
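As a rough sketch, the minikube route described above comes down to a couple of commands. This assumes minikube and kubectl are already installed (see Related Links); the script checks for minikube first so it degrades gracefully on machines without it:

```shell
# Sketch: starting a single-node dev cluster with minikube, assuming
# minikube and kubectl are installed (see Related Links above).
if command -v minikube >/dev/null 2>&1; then
  minikube start        # creates and starts the local single-node cluster
  kubectl get nodes     # should list one node in the Ready state
  STATUS="started"
else
  # Fall through cleanly on machines without minikube installed.
  STATUS="minikube-not-installed"
fi
echo "local cluster: $STATUS"
```

On Docker Desktop, the equivalent is a checkbox in the settings rather than a command; kubeadm's `kubeadm init` path is similar in spirit but modifies the host directly.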

Single-node Kubernetes clusters are also useful within continuous integration pipelines. In this use case, you want ephemeral clusters that start quickly and are in a pristine state for testing applications in Kubernetes each time you check in your code. Kubernetes in Docker, abbreviated kind, is made specifically for this use case.
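A minimal sketch of how kind might fit into a CI job follows. The config file name, cluster name, and job steps are illustrative assumptions, not from the course; the create/delete steps require Docker and kind, so they are shown as comments:

```shell
# Sketch: ephemeral CI cluster with kind (Kubernetes in Docker).
# Write a minimal single-node cluster config (illustrative file name).
cat > kind-ci-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
EOF

# A CI job would then create, test against, and delete the cluster:
#   kind create cluster --name ci-test --config kind-ci-config.yaml
#   kubectl get nodes
#   ...run the test suite against the cluster...
#   kind delete cluster --name ci-test
echo "kind config written: kind-ci-config.yaml"
```

Because the cluster runs entirely inside Docker containers, creation and deletion are fast enough to do on every check-in.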

For your production workloads, you want clusters with multiple nodes to take advantage of horizontal scaling and to tolerate node failures. 

To decide which solution works best for you, you need to ask several key questions, including "How much control do you want over the cluster, versus how much effort are you willing to invest in maintaining it?" Fully-managed solutions free you from routine maintenance but often lag the latest Kubernetes releases by a couple of versions. New versions of Kubernetes are released every three months. Examples of fully-managed Kubernetes-as-a-service solutions include Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). If you prefer to have full control over your cluster, you should check out kubespray, kops, and kubeadm.

Do you already have an investment in, and expertise with, a particular cloud provider? Cloud providers' managed Kubernetes services integrate tightly with the other services in their clouds, for example in how identity and access management is performed. There will be less friction in staying close to what you already know.

Do you need enterprise support? Several vendors offer enterprise support and additional features on top of Kubernetes. These include OpenShift by Red Hat, Pivotal Container Service (PKS), and Rancher.

Are you concerned about vendor lock-in? If you are, you should focus on open source solutions like kubespray and Rancher that can deploy Kubernetes clusters to a wide variety of platforms. 

Some other questions are not quite as important. "Do you want the cluster on-prem, in the cloud, or both?" Because Kubernetes provides users with an abstraction of a cluster of resources, the underlying nodes can run on different platforms. Kubernetes itself is at the core of open-source hybrid clouds, which span on-prem and cloud infrastructure. Even cloud vendors' Kubernetes solutions allow using on-prem compute. For example, GKE On-Prem lets you run GKE on-premises, EKS allows you to add on-prem nodes to the cluster, and Azure Stack allows you to run AKS on-prem.

Do you want to run Linux containers, Windows containers, or both? To support Linux containers, you need to ensure that you have Linux nodes in your cluster. To support Windows containers, you need to ensure that you have Windows nodes in your cluster. Both Linux and Windows nodes can exist in the same cluster to support both types of containers. 
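To make the Linux/Windows scheduling point concrete, here is a sketch of pinning a Pod to an operating system using the standard `kubernetes.io/os` node label. The Pod name, image, and file name are illustrative assumptions, not part of the course:

```shell
# Sketch: a Pod manifest pinned to Linux nodes via nodeSelector.
cat > os-pinned-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: linux-only-pod
spec:
  nodeSelector:
    kubernetes.io/os: linux   # set to "windows" to target Windows nodes
  containers:
    - name: app
      image: nginx:alpine
EOF

# Apply to a cluster with: kubectl apply -f os-pinned-pod.yaml
echo "manifest written: os-pinned-pod.yaml"
```

In a mixed cluster, the scheduler uses this label to keep Linux containers off Windows nodes and vice versa.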

All that being said, in the context of this course, CloudAcademy has you covered for following along with the course using a real multi-node Kubernetes cluster. The Introduction to Kubernetes Playground lab provides the same cluster that I am going to use in this course. If you want to follow along without setting up your own cluster, go ahead and start the lab now. Feel free to use any other cluster if you want to run the cluster longer than the lab's time limit. 

The next lesson will cover the basics of Kubernetes Architecture. Continue on when you're ready.

About the Author

Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.