I recently completed part 1 of my webinar series on Kubernetes. The session introduced Kubernetes at a high level with hands-on demos aiming to answer the question ‘What is Kubernetes?’ You can watch the session and read the complete Q&A on the Cloud Academy community forum. After polling our audience, we found that most of the webinar attendees had never used Kubernetes before, or had only been exposed to it through demos. This post is meant to complement the session with more introductory-level information about Kubernetes.

Kubernetes is an open source container orchestration tool designed to automate deploying, scaling, and operating containerized applications. Kubernetes was born from Google’s 15 years of experience running production workloads. It is designed to scale from tens of containers to thousands, or even millions. Kubernetes is also container runtime agnostic: you can actually use it to manage Docker or rkt (formerly Rocket) containers today.

Kubernetes’ features provide everything you need to deploy containerized applications. Here are the highlights:

  • Container Deployments & Rollout Control. Describe your containers and how many you want with a “Deployment.” Kubernetes will keep those containers running and handle deploying changes (such as updating the image or changing environment variables) with a “rollout.” You can pause, resume, and roll back changes as you like.
  • Resource Bin Packing. You can declare minimum and maximum compute resources (CPU & Memory) for your containers. Kubernetes will slot your containers in wherever they fit. This increases your compute efficiency and ultimately lowers costs. (See the sketch just after this list.)
  • Built-in Service Discovery & Autoscaling. Kubernetes can automatically expose your containers to the internet or to other containers in the cluster. It automatically load-balances traffic across matching containers. Kubernetes supports service discovery via environment variables and DNS, out of the box. You can also configure CPU-based autoscaling for containers to improve resource utilization.
  • Heterogeneous Clusters. Kubernetes runs anywhere. You can build your Kubernetes cluster from a mix of virtual machines (VMs) running in the cloud, on-prem servers, or bare metal machines in your datacenter. Simply choose the composition according to your requirements.
  • Persistent Storage. Kubernetes includes support for attaching persistent storage to otherwise stateless application containers. There is support for Amazon Web Services EBS volumes, Google Cloud Platform persistent disks, and many, many more.
  • High Availability Features. Kubernetes is planet scale. This requires special attention to high availability features such as multi-master or cluster federation. Cluster federation allows linking clusters together so that if one cluster goes down containers can automatically move to another cluster.
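To make resource bin packing concrete, here is a minimal sketch of how requests and limits are declared on a container. All names and values here are hypothetical, and “pods” are explained later in this post:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.13            # any container image works here
    resources:
      requests:                  # the minimum; the scheduler uses these to place the container
        cpu: 250m                # 250 millicores, i.e., a quarter of a CPU core
        memory: 64Mi
      limits:                    # the maximum, enforced at runtime
        cpu: 500m
        memory: 128Mi
```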

These key features make Kubernetes well suited for running different application architectures, from monolithic web applications to highly distributed microservice applications and even batch-driven applications.

Kubernetes vs. Other Tools

Container orchestration is the hottest topic in the industry. Initially, the industry focused on pushing container adoption. The next step is putting containers into production at scale. There are many tools in this space; we will explore a few of them by comparing their features to Kubernetes.

The key players here are Apache Mesos / DCOS, Amazon’s ECS, and Docker’s Swarm Mode. Each has its own niche and unique strengths.

DCOS (or DataCenter OS) is similar to Kubernetes in many ways. DCOS pools compute resources into a uniform task pool. The big difference is that DCOS targets many different types of workloads, including, but not limited to, containerized applications. This makes DCOS attractive for organizations that are not using containers for all of their applications. DCOS also includes a kind of package manager to easily deploy systems like Kafka or Spark. You can even run Kubernetes on DCOS given its flexibility for different types of workloads.

ECS is AWS’s entry in container orchestration. ECS allows you to create pools of EC2 instances and uses API calls to orchestrate containers across them. It’s only available inside AWS and is less feature-complete compared to open source solutions. It may be useful for those deep into the AWS ecosystem.

Docker’s Swarm Mode is the official orchestration tool from Docker Inc. Swarm Mode builds a cluster from multiple Docker hosts. It offers similar features compared to Kubernetes or DCOS, with one notable exception: Swarm Mode is the only tool that works natively with the docker command. This means that associated tools like docker-compose can target Swarm Mode clusters without any changes.

Here are my general recommendations:

  • Use Kubernetes if you’re working exclusively with containerized applications, whether or not they are Docker containers.
  • If you have a mix of containerized and non-containerized applications, use DCOS.
  • Use ECS if you enjoy AWS products and first party integrations.
  • If you want a first party solution or direct integration with the Docker toolchain, use Docker Swarm.

Now you have some context and understanding of what Kubernetes can do for you. The demo in the webinar covered the key features. Today, I’ll cover some of the details that we didn’t have time for in the webinar session. We will start by introducing some Kubernetes vocabulary and architecture.

What is Kubernetes? Terminology & Architecture

Kubernetes is a distributed system. It introduces its own vernacular to the orchestration space. Therefore, understanding the vernacular and architecture is crucial.

Kubernetes “clusters” are composed of “nodes.” The term “cluster” refers to the nodes in the aggregate: the entire running system. A “node” is a worker machine in Kubernetes (previously known as a “minion”). A node may be a VM or a physical machine. Each node has software configured to run containers managed by Kubernetes’ control plane. The control plane is the set of APIs and software (such as kubectl) that Kubernetes users interact with. The control plane services run on master nodes. Clusters may have multiple masters for high availability scenarios.

The control plane schedules containers onto “nodes.” In this context, the term “scheduling” does not refer to time. Think of it from a kernel perspective: The kernel “schedules” processes onto the CPU according to many factors. Certain processes need more or less compute, or have different quality-of-service rules. The “scheduler” does its best to ensure that every process gets CPU time. In this case, “scheduling” means deciding where to run containers according to factors like per-node hardware constraints and the requested CPU/Memory.

Containers are grouped into “pods.” Pods may include one or more containers. All containers in a pod run on the same node. The “pod” is the lowest-level building block in Kubernetes. More complex (and more useful) abstractions are built on top of “pods.”
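As a minimal sketch (all names here are hypothetical), a pod manifest groups one or more containers under a single spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod              # hypothetical name
  labels:
    app: example                 # labels let services and deployments select this pod
spec:
  containers:                    # these containers always run together on the same node
  - name: web
    image: nginx:1.13
    ports:
    - containerPort: 80
  - name: sidecar                # a second container sharing the pod’s network and lifecycle
    image: busybox:1.28
    command: ["sh", "-c", "tail -f /dev/null"]
```

In practice you rarely create bare pods like this; the “deployments” described next manage them for you.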

“Services” define networking rules for exposing pods to other pods or to the public internet. Kubernetes uses “deployments” to manage rolling out configuration changes to running pods and to scale horizontally. A “deployment” is a template for creating pods. “Deployments” are scaled horizontally by creating more “replica” pods from the template. Changes to the “deployment” template trigger a rollout. Kubernetes uses rolling deploys to apply changes to all running pods in a deployment.
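To show how these pieces fit together, here is a hedged sketch: a deployment whose template stamps out three replica pods, plus a service that exposes them. The names and image are hypothetical, and apps/v1 is the API version used by recent Kubernetes releases:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 3                    # horizontal scaling: three replica pods
  selector:
    matchLabels:
      app: example
  template:                      # the pod template; editing it triggers a rollout
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: web
        image: nginx:1.13
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example                 # routes traffic to pods carrying this label
  ports:
  - port: 80                     # the service’s port
    targetPort: 80               # the container port traffic is forwarded to
```

Changing the image in the template (say, from nginx:1.13 to nginx:1.14) would trigger a rolling deploy across all three replicas.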

Kubernetes provides two ways to interact with the control plane. The kubectl command is the primary way to do anything with Kubernetes. There is also a web UI with basic functionality.
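A few representative commands, using the hypothetical “example” deployment from the sketch above:

```sh
kubectl get pods                                # list pods in the current namespace
kubectl scale deployment/example --replicas=5   # scale horizontally
kubectl rollout pause deployment/example        # pause an in-progress rollout
kubectl rollout resume deployment/example       # ...and resume it
kubectl rollout undo deployment/example         # roll back to the previous revision
kubectl logs example-pod-abc123                 # print logs from a pod (hypothetical pod name)
```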

Most of these terms were introduced in some way during the webinar. I suggest reading the Kubernetes glossary for more information.

What is Kubernetes? Demo Walkthrough

In the webinar demo, we showed how to deploy a sample application. The sample application is a boiled-down microservice. It includes just enough to demonstrate features that real applications require.

There wasn’t enough time during the session to include everything I planned, so here is an outline of what did make it into the demo:

  • Interacting with Kubernetes with kubectl
  • Creating “namespaces”
  • Creating “deployments”
  • Connecting “pods” with “services”
  • Service discovery via environment variables
  • Horizontally scaling with replicas
  • Triggering a new rollout
  • Pausing and resuming a rollout
  • Accessing container logs
  • Configuring probes

I suggest keeping this post handy while watching the webinar for greater insight into the demo.

The “server” container is a simple Node.js application. It accepts a POST request to increment a counter and a GET request to retrieve the counter. The counter is stored in redis. The “poller” container continually makes the GET request to the “server” and prints the counter’s value. The “counter” container runs a loop that makes POST requests to the server, incrementing the counter by random values.

I used Google Container Engine for the demo. You can follow along with Minikube if you like. All you need is a running Kubernetes cluster and access to the kubectl command.
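If you are following along locally, the Minikube route looks like this (assuming Minikube and kubectl are already installed):

```sh
minikube start      # boots a single-node Kubernetes cluster in a local VM
kubectl get nodes   # verify kubectl can reach the new cluster
```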

First, I created a Kubernetes “namespace” to hold all the different Kubernetes resources for the demo. While it is not strictly required in this case, I opted for a dedicated namespace because it demonstrates how to create one, and using namespaces is a general best practice.
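Creating a namespace only takes a small manifest. The name “demo” here is my hypothetical stand-in for whatever name you prefer:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
```

After kubectl create -f namespace.yml, subsequent kubectl commands can target the namespace with the -n demo flag.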

I created a “deployment” for redis with one replica. There should only be one redis container running: running multiple replicas, and thus multiple databases, would create multiple sources of truth. This is a stateful data tier, and it does not scale horizontally. Then, I created a data tier “service.” The service matches the redis pod via labels and exposes it on an internal IP and port.
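Here is a hedged sketch of what that data tier might look like. The names, labels, and redis image tag are my assumptions rather than the demo’s exact manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-tier
  namespace: demo
spec:
  replicas: 1                  # exactly one redis: the data tier is stateful
  selector:
    matchLabels:
      tier: data
  template:
    metadata:
      labels:
        tier: data
    spec:
      containers:
      - name: redis
        image: redis:3.2
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: data-tier
  namespace: demo
spec:
  selector:
    tier: data                 # matches the redis pod created above
  ports:
  - port: 6379                 # exposed on an internal cluster IP
```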

The same process repeats for the app tier. A Kubernetes deployment describes the server container. The redis location is specified via an environment variable: Kubernetes sets environment variables for each service on all containers in the same namespace. The server uses REDIS_URL to specify the host, port, and other connection information. Kubernetes supports environment variable interpolation with $() syntax, and the demo shows how to compose application-specific environment variables from the ones Kubernetes provides. An app tier service is created as well.
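Here is a sketch of the app tier deployment. Given a service named data-tier, Kubernetes injects DATA_TIER_SERVICE_HOST and DATA_TIER_SERVICE_PORT into containers in the same namespace, and $() interpolation composes them into REDIS_URL. The image name and port are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-tier
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        tier: app
    spec:
      containers:
      - name: server
        image: example/server:1.0    # hypothetical image for the Node.js server
        ports:
        - containerPort: 8080        # hypothetical port
        env:
        - name: REDIS_URL            # composed from the data-tier service’s injected variables
          value: redis://$(DATA_TIER_SERVICE_HOST):$(DATA_TIER_SERVICE_PORT)
```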

Next comes the support tier. The support tier includes the counter and the poller. Another deployment is created for this tier. Both containers find the server via the API_URL environment variable, which is composed of the app tier service’s host and port using the same $() interpolation shown above.

At this point, we have a running application. We can access logs via the kubectl logs command, and we can scale the application up and down. The demo configures both types of Kubernetes probes (aka “health checks”). The liveness probe tests that the server accepts HTTP requests. The readiness probe tests that the server has a connection to redis and is thus “ready” to serve API requests.
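A sketch of how those probes might be declared on the server container, extending the app tier spec from above. The paths are hypothetical; the demo’s real endpoints may differ:

```yaml
containers:
- name: server
  image: example/server:1.0        # hypothetical image
  ports:
  - containerPort: 8080
  livenessProbe:                   # Kubernetes restarts the container if this fails repeatedly
    httpGet:
      path: /                      # hypothetical: any endpoint that returns 200 while the process is up
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:                  # failing pods are removed from service endpoints until this passes
    httpGet:
      path: /ready                 # hypothetical: should verify the redis connection
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
```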

What is Kubernetes? Part 2 – Stay tuned

Part 1 of the series focused on answering the question ‘What is Kubernetes?’ and introducing core concepts in a hands-on demo. My plan for part 2 is to focus on production preparedness and other tools required in a production environment. Please contact me if you’d like to see something different in part 2 (more demos of these key features, or anything else). Part 2 is planned for sometime in February. Stay tuned for the announcement. I hope to see you there!