Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
The source files used in this course are available in the course's GitHub repository.
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Manage configuration data, sensitive data, and persistent data in Kubernetes
- Discuss popular tools and topics surrounding Kubernetes in the ecosystem
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
You should be familiar with:
- Working with Docker and using it comfortably at the command line
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
Sometimes you need to perform tasks or check prerequisites before a main application container starts. Some examples include waiting for a service to be created, downloading files the application depends on, or dynamically deciding which port the application should use. The code that performs those tasks could be crammed into the main application image, but it is better to keep a clean separation between the main application and supporting functionality so that images keep the smallest footprint possible. However, the tasks are closely linked to the main application and are required to run before the main application starts. Kubernetes provides init containers as a way to run these tasks that must complete before the main container starts.
Pods may declare any number of init containers. They run in sequence, in the order they are declared. Each init container must run to completion before the following init container begins. Once all of the init containers have completed, the main containers in the pod can start. Init containers use different images from the containers in a pod, which provides some benefits. They can contain utilities that are not desirable to include in the actual application image for security reasons. They can also contain utilities or custom code for setup that is not present in the application image. For example, there is no need to include utilities like sed, awk, or dig in an application image if they are only used for setup. Init containers also provide an easy way to block or delay the startup of an application until some preconditions are met. They are similar to readiness probes in this sense, but they only run at pod startup and can perform other useful work. All these features together make init containers a vital part of the Kubernetes toolbox.
There is one important thing to understand about init containers: they run every time a pod is created. This means they will run once for every replica in a deployment. If a pod restarts, say due to a failed liveness probe, the init containers run again as part of the restart. Thus you have to assume that init containers run at least once. This usually means init containers should be idempotent, meaning that running them more than once has no additional effect.
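To make the idempotency point concrete, here is a minimal sketch of an init script whose steps are safe to repeat. The paths and file names are hypothetical, not taken from the course repository:

```shell
#!/bin/sh
# Sketch of an idempotent init task: running it any number of times
# leaves the system in the same state as running it once.
set -e

# mkdir -p is a no-op if the directory already exists
mkdir -p /tmp/work-dir

# Create a marker file only if it is not already present,
# so a pod restart does not overwrite earlier setup work
if [ ! -f /tmp/work-dir/initialized ]; then
  date > /tmp/work-dir/initialized
fi

echo "init complete"
```

Guarding each side effect this way means a liveness-probe restart that re-runs the init container cannot corrupt state that was already prepared.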
Let's add an init container to our app tier that will wait for Redis before starting any application servers. We’ll see that initContainers have the same fields as regular containers in a pod spec. The one exception is initContainers do not support readiness probes because they must run to completion before the state of the pod can be considered ready. You will receive an error if you try to include a readiness probe in an initContainer. Let’s see what the manifest looks like in our case.
We’ll just be updating the app tier deployment, so we won’t make a new namespace. I’m comparing the deployment from the previous lesson with our new version that includes init containers. You can see that the fields are the same as what we have seen with regular containers. I’ve used the same image as the main application for simplicity, and it already has everything we need in it. The command field is used to override the image’s default entrypoint command. For this init container we want to run a script that waits for a successful connection with Redis. The script is already included in the image and is executed with the npm run-script await-redis command. This command will block until a connection is established with the configured Redis URL provided as an environment variable.
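The relevant part of the manifest can be sketched as follows. The image name, port, and Redis service address are assumptions for illustration, not the exact values from the course repository:

```yaml
# Sketch of the app tier deployment's pod template with an init container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-tier
  namespace: probes
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        tier: app
    spec:
      initContainers:
        - name: await-redis
          image: example/app   # same image as the main container, for simplicity
          command: ['npm', 'run-script', 'await-redis']
          env:
            - name: REDIS_URL   # hypothetical variable the script reads
              value: redis://data-tier:6379
      containers:
        - name: server
          image: example/app
          ports:
            - containerPort: 8080
```

Note that the initContainers entry accepts the same fields as a regular container, minus the readiness probe, and the server container is not created until await-redis exits successfully.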
Now let’s apply the changes to the existing deployment:
kubectl apply -f 8.1.yaml -n probes
After that, describe the deployment’s pod:
kubectl describe pod -n probes app-tier...
Observe that the event log now shows the entire lifecycle, including the init containers. The await-redis init container runs to completion before the server container is created. You can also view the logs of init containers using the usual logs command, specifying the name of the init container as the last argument after the pod name:
kubectl logs -n probes app-tier-... await-redis
This is especially important when debugging an init container failure, which prevents the main containers from ever being created.
This concludes our tour of init containers. They give you another mechanism for controlling the lifecycle of pods. You can use them to perform tasks before the main containers have an opportunity to start. This can be useful for checking preconditions, such as verifying that required services exist, or for preparing files the application depends on.
The files use case requires knowledge of another Kubernetes concept to pull off: volumes, which can be used to share files between containers. We’ll discuss volumes in detail in the next lesson. Continue on when you are ready.
About the Author
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.