Course Introduction
Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
Course Conclusion
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes, including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
Learning Objectives
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Manage configuration, sensitive, and persistent data in Kubernetes
- Discuss popular tools and topics surrounding Kubernetes in the ecosystem
Intended Audience
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
Prerequisites
You should be familiar with:
- Working with Docker and comfortable using it at the command line
Source Code
The source files used in this course are available here:
Updates
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
May 7th, 2021 - Complete update of this course using the latest Kubernetes version and topics
Sometimes you need to perform some tasks or check some prerequisites before a main application container starts. Some examples include waiting for a service to be created, downloading files, or dynamically deciding which port the application is going to use. The code that performs those tasks could be crammed into the main application, but it is better to keep a clean separation between the main application and supporting functionality, keeping the image footprint as small as possible. However, the tasks are closely linked to the main application and must run before the main application starts.
So Kubernetes provides us with init containers as a way to run these tasks that must complete before our main container starts. Pods may declare any number of init containers. They run in sequence, in the order they are declared. Each init container must run to completion before the following init container begins, and once all of the init containers have completed, the main containers in the pod can start.
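As a minimal sketch of how this looks in a manifest (the names, images, and commands below are hypothetical, not from the course demo), init containers are declared under the pod spec's initContainers field and execute in the order listed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
    # Runs first; must exit successfully before the next init container starts.
    - name: wait-for-service
      image: busybox:1.36
      command: ['sh', '-c', 'until nslookup my-service; do sleep 2; done']
    # Runs second, only after wait-for-service completes.
    - name: prepare-files
      image: busybox:1.36
      command: ['sh', '-c', 'echo ready > /tmp/marker']
  containers:
    # The main container starts only after every init container has completed.
    - name: app
      image: nginx:1.25
```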
Init containers can use different images from the containers in the pod, and this can provide some benefits. They can contain utilities that are not desirable to include in the actual application image for security reasons. They can also contain utilities or custom setup code that is not present in the application image. For example, there is no need to include utilities like sed, awk, or dig in an application image if they are only used for setup.
Init containers also provide an easy way to block or delay the start-up of an application until some preconditions are met. They are similar to readiness probes in this sense, but they only run at pod startup, and they can perform other useful work. All of these features together make init containers a vital part of the Kubernetes toolbox. There is one more important thing to understand about init containers: they run every time a pod is created.
This means they will run once for every replica in a deployment. And if a pod restarts, say, due to failed liveness probes, the init containers will run again as part of that restart. Thus, you have to assume that init containers run at least once. This usually means that init containers should be idempotent: running one more than once should have no additional effect.
Let's add an init container to our app tier that will wait for Redis before the application server starts. We'll see that init containers have the same fields as regular containers in a pod spec. The one exception is that init containers do not support readiness probes, because they must run to completion before the state of the pod can be considered ready. You will receive an error if you try to include a readiness probe in an init container.
Let's see what the manifest looks like in our case. We'll just be updating the app tier deployment, so we won't make a new namespace. I'm comparing the deployment from the previous lesson with our new version with init containers. You can see that the fields are the same as what we have seen with regular containers. I've used the same image as the main application for simplicity, since it has everything we need in it. The command field is used to override the image's default entry point command.
For this init container, we want to run a script that waits for a successful connection to Redis. The script is already included in the image and is executed with the npm run script await-redis. This command will block until a connection is established with the configured Redis URL, provided in an environment variable.
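As a rough sketch of the relevant part of that pod template (the image name, Redis address, and script name here are illustrative stand-ins, not the course's exact values):

```yaml
spec:
  template:
    spec:
      initContainers:
        - name: await-redis
          image: example/app-server:1.0          # same image as the main container
          command: ['npm', 'run', 'await-redis'] # overrides the image's default entry point
          env:
            - name: REDIS_URL                    # connection the script blocks on
              value: redis://data-tier:6379
      containers:
        - name: server
          image: example/app-server:1.0
```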
Now let's apply those changes to the existing deployment. After that, describe the deployment's pod and observe the event log, which now shows the entire lifecycle with init containers. The await-redis init container runs to completion before the server container is created. You can also view the logs of init containers using the usual logs command, specifying the name of the init container as the last argument after the pod name. This is especially important when debugging an init container that prevents the main container from ever being created.
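For reference, these steps follow the usual kubectl patterns (the manifest file name and pod name below are placeholders):

```sh
# Apply the updated manifest to the existing deployment
kubectl apply -f app-tier-deployment.yaml

# Describe the deployment's pod to see the init container events in the event log
kubectl describe pod <pod-name>

# View the logs of a specific init container: its name follows the pod name
kubectl logs <pod-name> await-redis
```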
This concludes our tour of init containers. They give you another mechanism for controlling the lifecycle of pods. You can use them to perform tasks before the main containers have an opportunity to start. This can be useful for checking preconditions, such as verifying that services the application depends on have been created, or for preparing files the application depends on. The files use case requires knowledge of another Kubernetes concept: volumes, which can be used to share files between containers. We'll discuss all you need to know about volumes in the next lesson. So continue on when you're ready.
Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in networking and operations in the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan holds a number of certifications, including Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, AWS Solutions Architect, and a certification in Project Management.