
Multi-Container Pods

Contents

Course Introduction
  Introduction (Preview, 4m 6s)
Deploying Containerized Applications to Kubernetes
  Pods (11m 34s)
  Services (5m 10s)
  Probes (8m 26s)
  Volumes (11m 42s)
The Kubernetes Ecosystem
Course Conclusion

The course is part of these learning paths:

  • Certified Kubernetes Administrator (CKA) Exam Preparation
  • Introduction to Kubernetes
Overview

Difficulty: Beginner
Duration: 1h 58m
Students: 3780
Rating: 4.4/5

Description

Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes, including what it is and how to use it.

This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.

The source files used in this course are available in the course's GitHub repository.

Learning Objectives 

  • Describe Kubernetes and what it is used for
  • Deploy single and multiple container applications on Kubernetes
  • Use Kubernetes services to structure N-tier applications 
  • Manage application deployments with rollouts in Kubernetes
  • Ensure container preconditions are met and keep containers healthy
  • Learn how to manage configuration, sensitive, and persistent data in Kubernetes
  • Discuss popular tools and topics surrounding Kubernetes in the ecosystem

Intended Audience

This course is intended for:

  • Anyone deploying containerized applications
  • Site Reliability Engineers (SREs)
  • DevOps Engineers
  • Operations Engineers
  • Full Stack Developers

Prerequisites

You should be familiar with:

  • Working with Docker and comfortable using it at the command line

Updates

August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics

 

Transcript

This lesson continues to expand upon what we’ve already learned about pods. Specifically, we’ll explore the details of working with multi-container pods. This is where we really start to hit the good stuff. As we learn about multi-container pods, we will also cover more about namespaces and pod logs.

 

We're using a sample application for this lesson. It's a simple application that increments and prints a counter. It is split into four containers across three tiers. The application tier includes the server container, a simple Node.js application. It accepts a POST request to increment a counter and a GET request to retrieve the current value of the counter. The counter is stored in Redis, which comprises the data tier. The support tier includes a poller and a counter. The poller container continually makes a GET request back to the server and prints the value. The counter continually makes a POST request to the server with random values. All the containers use environment variables for configuration. Also, the Docker images are public, so we can reuse them for this exercise.

Let's walk through modeling the application as a multi-container pod.

 

We'll start by creating a namespace for this lesson. Remember that a namespace separates different Kubernetes resources. Namespaces may be used to isolate users, environments, or applications. You can also use Kubernetes role-based access control to manage users’ access rights to resources in a given namespace. Using namespaces is a best practice. Let's start using namespaces now, and we'll continue throughout the remainder of the course. They're created just like any other Kubernetes resource.

 

Here is our namespace manifest. Namespaces don’t require a spec; the main part is the metadata, where the name is set to microservice, and it's a good idea to label it as well. Everything in this namespace will relate to the counter microservices app.
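As a sketch, the manifest could look something like this (the label key and value here are assumptions, not necessarily the course's exact file):

apiVersion: v1
kind: Namespace
metadata:
  name: microservice
  labels:
    app: counter  # assumed label; any label describing the app works

Let’s create the namespace with kubectl: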

kubectl create -f 3.1

Future kubectl commands need to use the --namespace or -n option to specify the namespace; otherwise, the default namespace is used. You could also use the kubectl create namespace command, but for this course we’ll stick with always using manifests.
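For reference, the imperative equivalent of the manifest above would be:

kubectl create namespace microservice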

Now on to the pod. I've named the pod app.

 

Off the top I want to mention that you can specify a namespace in the metadata. But that makes the manifest slightly less portable because the namespace can’t be overridden at the command line. 
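For illustration, hard-coding the namespace would look like this in the pod's metadata:

metadata:
  name: app
  namespace: microservice  # fixed here, so it can't be set differently at the command line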

 

Moving down to the redis container. We'll use the latest official Redis image. The latest version is chosen to illustrate a point. When you use the latest tag, Kubernetes will always pull the image whenever the pod is started. This can introduce bugs if a pod restarts and pulls a new latest version without you realizing it. To prevent always pulling the image, and to use an existing version if one exists, you can set the imagePullPolicy field to IfNotPresent. It’s useful to know this, but in most situations you are better off specifying a specific tag rather than latest. When specific tags are used, the default image pull behavior is IfNotPresent. The standard Redis port of 6379 is declared as the container port.
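As a sketch based on the transcript (not necessarily the exact course manifest), the redis container entry could look like this:

containers:
  - name: redis
    image: redis:latest
    imagePullPolicy: IfNotPresent  # reuse a local image if one exists instead of always pulling
    ports:
      - containerPort: 6379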

 

Now on to the server container. The server container is straightforward. The image is the public image from the sample application. The tag is used to indicate the microservice within the microservices repository. The server runs on port 8080, so it's exposed. The server also requires a Redis URL environment variable to connect to the data tier. We can set this in the env sequence. How does the server container know where to find redis? Because containers in a pod share the same network stack, and as a result all share the same IP address, they can reach other containers in the pod on localhost at their declared container ports. The correct host and port for this example are localhost and 6379. The imagePullPolicy is omitted because Kubernetes uses IfNotPresent when an explicit tag is given. We can use the same approach for the counter and poller containers. These containers require the API URL environment variable to reach the server in the application tier. The correct host and port combo for this example is localhost and 8080.
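To make the wiring concrete, here is a minimal sketch of the remaining container entries. The image names and the environment variable names (REDIS_URL, API_URL) are assumptions based on the transcript, not necessarily what the course's manifest uses:

  - name: server
    image: lrakai/microservices:server  # hypothetical image name and tag
    ports:
      - containerPort: 8080
    env:
      - name: REDIS_URL  # assumed variable name
        value: redis://localhost:6379  # redis is reachable on localhost within the pod
  - name: counter
    image: lrakai/microservices:counter  # hypothetical
    env:
      - name: API_URL  # assumed variable name
        value: http://localhost:8080
  - name: poller
    image: lrakai/microservices:poller  # hypothetical
    env:
      - name: API_URL
        value: http://localhost:8080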

Now let’s create the pod, this time adding the -n option to set the namespace the pod will be created in to our microservice namespace:

kubectl create -f 3.2.yaml -n microservice

Remember to include the same namespace option with all kubectl commands related to the pod; otherwise, you will be targeting the default namespace.

Get the pod by entering

kubectl get -n microservice pod app
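The output would look something along these lines (restarts and age are illustrative):

NAME   READY   STATUS    RESTARTS   AGE
app    4/4     Running   0          1m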

The -n namespace option can be included anywhere after kubectl; it doesn’t have to come after get. When you have tab completion enabled, it makes sense to put it earlier to get the completions for your target namespace. Observe that the output shows 4/4 under the READY column, since we have four containers in the pod. The STATUS column summarizes what is going on, but it's best to describe the pod to see what's going on in detail.

kubectl describe -n microservice pod app

You'll see the event log has more going on now that there are multiple containers. The same events are being triggered for each container, from pulling to starting, as was the case for a single-container pod. If something goes wrong, you should check the event log to see what's happening behind the scenes to debug the issue. Everything looks OK for us though.

 

Once the containers are running, we can look at the container logs to see what they are doing. Logs are simply anything that is written to standard output or standard error in the container. The containers need to write messages to standard output or error, otherwise nothing will appear in the logs. The containers in this example all follow that best practice, so we can see what they are doing. Kubernetes records the logs, and they can be viewed via the logs command. The kubectl logs command retrieves logs for a specific container in a given pod. It dumps all of the logs by default, or you can use the --tail option to limit the number of log lines presented. Let’s see the 10 most recent log lines for the counter container in the app pod:

kubectl logs -n microservice app counter --tail 10

Here we can see the counter is incrementing the count by random numbers between one and ten.

Let's check the value of the count by inspecting the logs for the poller container. This time we’ll use -f to stream the logs in real time:

kubectl logs -n microservice app poller -f

We can see the count is increasing every second as the counter continues to increment it. That confirms it. Our first multi-container application is up and running. Press ctrl+c to stop following the logs.

 

In this lesson, we created a multi-container pod that implements a three-tier application. We used the fact that containers in the same pod can communicate with one another using localhost.

We also saw how to get logs from containers running in Kubernetes by using the kubectl logs command. Remember that logs work by recording what the container writes to standard output and standard error. The logs allowed us to confirm the application is working as expected by continuously incrementing the count.

But there are some issues with the current implementation. Because pods are the smallest unit of work, Kubernetes can only scale out by increasing the number of pods, not the containers inside a pod. If we want to scale out the application tier with the current design, we’d have to scale out all the other containers proportionately. That also means there would be multiple redis containers running, each with its own copy of the counter. That's certainly not what we're going for. It is a much better approach to be able to scale each service independently. Breaking the application out into multiple pods and connecting them with services is a better implementation. We’ll walk through that design in the next lesson. But before moving on, it is worth noting that sometimes you do want each container in a pod to scale proportionately. It comes down to how tightly coupled the containers are and whether it makes sense to think of them as a single unit.

With that point out of the way, I’ll see you in the next lesson where we will leverage services to break our tightly coupled pod design into multiple independent pods.

About the Author


Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.
