
Probes and Init Containers

The course is part of these learning paths:

Certified Kubernetes Administrator (CKA) Exam Preparation
Introduction to Kubernetes

Overview

Difficulty: Advanced
Duration: 1h 28m
Students: 2494

Description

Introduction 
This course provides an introduction to using Kubernetes to deploy and manage containers. 

Learning Objectives 
Be able to recognize and explain the Kubernetes service 
Be able to explain and implement a Kubernetes container
Be able to orchestrate and manage Kubernetes containers 

Prerequisites
This course requires a basic understanding of cloud computing. We recommend completing the Google Cloud Fundamentals course before completing this course. 

Transcript

Updates: At 8:36 Adam uses an annotation to implement init containers. This was required in Kubernetes 1.5, but since Kubernetes 1.6 a proper init containers list can be included in a pod manifest under spec.initContainers, and this is now the preferred method. You can specify init containers with the same fields you use to specify containers in a pod spec, except that init containers must run to completion, so they do not support readiness probes.
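For reference, here is a minimal sketch of the modern syntax; the pod name, images, and command are illustrative, not taken from the course code:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod              # illustrative name
spec:
  initContainers:                # run to completion, in order, before the app containers
  - name: init-check
    image: busybox
    command: ['sh', '-c', 'echo preparing && sleep 2']
  containers:
  - name: app
    image: nginx
```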

Hello and welcome back to the Introduction to Kubernetes course from CloudAcademy. I'm Adam Hawkins and I'm your instructor for this lesson.

The previous lesson covered deployments and rollout management. This lesson covers liveness probes, readiness probes, and init containers. Our objective is to test our containers using these new features. Specifically, we will add liveness probes to our application, add readiness probes to the application, and also verify preconditions with init containers. We are reusing the sample code from the last lesson. I suggest you replay the previous lesson if you need to get up to speed. Alright, let's get going.

Kubernetes uses probes to test running containers. These are similar to health checks. You've probably heard that term from working with load balancers. Let's consider an HTTP load balancer. The load balancer makes an HTTP request to each server to check if it is ready to serve requests. Servers that don't respond are removed from the load balancer. Kubernetes probes are similar but vary in their execution. Kubernetes uses two types of probes: readiness and liveness. Probes are either HTTP requests, TCP socket checks, or custom commands executed inside the container. Containers that run for long periods of time can eventually transition to broken states that they cannot recover from except by being restarted. Kubernetes liveness probes detect and remedy this situation. Other times, containers are only temporarily unavailable and will recover on their own. Instead of being killed, they can be removed from incoming traffic until things are working again. Kubernetes provides readiness probes to detect and mitigate these situations. Readiness and liveness probes are configured in the same way, but there is a subtle difference between the two concepts, and you may have only worked with one of them before.
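As a hedged sketch of those three mechanisms (the paths, ports, and command here are illustrative):

```yaml
# Fragments of a container spec. A container can define one livenessProbe
# and one readinessProbe, each using any of the three mechanisms.
livenessProbe:
  httpGet:                 # HTTP request: passes on a 2xx/3xx response
    path: /healthz
    port: 8080
readinessProbe:
  exec:                    # custom command: passes when the exit status is 0
    command: ['redis-cli', 'ping']
# The third mechanism is a TCP socket check: it passes if the port
# accepts a connection, e.g.
#   tcpSocket:
#     port: 6379
```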

Let's recap liveness and readiness probes. Liveness probes are used to automatically restart broken containers. Readiness probes are used to automatically add and remove containers from service load balancers. I know this distinction can be a bit tricky. Please rewatch this section until you can state the difference. So, feeling confident? Let's add some probes to our application.

That may have been hard to understand in the abstract, so let's crystallize the concepts by relating these probes to our application. The data tier contains one Redis container. This container is alive if it accepts TCP connections. The Redis container is ready if it responds to Redis commands such as get or ping. Do you see the difference? A server may be alive but not necessarily ready to handle incoming requests. The API server is alive if it accepts HTTP requests, but the API server is only ready if it is online and has a connection to Redis to serve and manipulate the counter. The sample application has a path for each of these probes. The counter and poller containers are live and ready if they can make HTTP requests back to the API server. The probes are more important for the counter since it changes the application state. The poller is less important because it just prints the value. Anyway, let's apply this knowledge to the deployment templates. We will go in the same order we just discussed but will skip the counter and poller because the server demonstrates the same functionality.

Start by creating a new namespace for this lesson. Create a new lesson 203 namespace like we've done in the past.
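One way to create it from the command line (the namespace name follows the lesson's naming; yours may differ):

```sh
kubectl create namespace lesson203
```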

Now open the data-tier-deployment.yml file from the previous lesson. We will start by adding the liveness probe. The liveness probe uses the TCP socket mode in this example. The port value is the same as the container port. Also set initialDelaySeconds to give the Redis server adequate time to start. We can also configure failure thresholds, delays, and timeouts for all probes. The default values work well for this example. Again, refer to the Kubernetes documentation for complete information. Next, the readiness probe uses the exec mode. This runs a command inside the container, similar to docker exec if you've used that before. The redis-cli ping command tests if the server is up and ready to actually process Redis-specific commands. Commands are specified as string arrays. Also set initialDelaySeconds here. It is best practice to start the readiness probes after the liveness probes. That way the container has adequate time to start, and the liveness probes don't kill the container before the readiness probes have time to function.
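Put together, the probe sections described above might look like this fragment of data-tier-deployment.yml (the delay values are illustrative; tune them to your container's startup time):

```yaml
containers:
- name: redis
  image: redis:latest
  ports:
  - containerPort: 6379
  livenessProbe:
    tcpSocket:
      port: 6379             # same value as the container port
    initialDelaySeconds: 15  # give the Redis server time to start
  readinessProbe:
    exec:
      command:               # commands are specified as string arrays
      - redis-cli
      - ping
    initialDelaySeconds: 20  # per the lesson's advice, later than the liveness delay
```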

Now we need to create the service and deployment. You can reuse the same service file from the last lesson; no changes are required there. Then create the deployment using the new file.
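The commands look something like this (the file names follow the lesson's pattern and may differ in your copy):

```sh
kubectl create -f data-tier-service.yml -n lesson203     # unchanged from the last lesson
kubectl create -f data-tier-deployment.yml -n lesson203
```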

Now you can watch the data tier deployment to see the probes in action. The watch command is especially handy for this case. Note the available column; this value should show one. Watch the deployment for a while to make sure things stay running. Use a combination of describe and logs to debug any failures you see. Eventually your deployment should show one in the available column. Remember that it may take some time to pull the images if they've never been used before. You can use describe to see what went wrong if the ready column changes. You may think to describe the deployment to see all of the probes in the event log. Unfortunately, it seems that Kubernetes only shows the count of probes in the describe output. In the next step we will add some debug logging to the server so that you can see all the incoming probe requests.
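A sketch of the commands involved (pod names are placeholders):

```sh
watch -n 1 kubectl get deployments -n lesson203   # wait for AVAILABLE to show 1

# If something looks stuck, inspect the pod directly
kubectl get pods -n lesson203
kubectl describe pod <pod-name> -n lesson203
kubectl logs <pod-name> -n lesson203
```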

Open app-tier-deployment.yml from the previous lesson. The sample application defines a handler for each of the two probe requests. The liveness probe does not actually communicate with Redis; it just returns a 200 OK for all requests. The readiness probe pings the configured readiness URL. Both probes use HTTP. We will set the path and port for both. Also set initialDelaySeconds so the process has adequate time to start. Again, there are more available options; refer to the documentation for complete information. We will also set the debug environment variable to show all logs coming from the server's request framework. Note that this environment variable is specific to the sample application and not a general-purpose setting. Once you're done, save the file.
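A sketch of the relevant fragment of app-tier-deployment.yml; the probe paths, port, image, and DEBUG value are illustrative stand-ins for the sample application's actual values:

```yaml
containers:
- name: server
  image: sample-app-image      # placeholder for the course's microservice image
  ports:
  - containerPort: 8080
  env:
  - name: DEBUG                # sample-app specific: log all request-framework output
    value: 'express:*'
  livenessProbe:
    httpGet:
      path: /probe/liveness    # returns 200 OK without touching Redis
      port: 8080
    initialDelaySeconds: 5
  readinessProbe:
    httpGet:
      path: /probe/readiness   # checks the connection to Redis
      port: 8080
    initialDelaySeconds: 10
```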

Again, create the service using the file from the previous lesson and then create the new deployment. Watch the deployment like before to verify the containers are alive and ready. Remember that it may take some time to pull images and start the containers. Also factor in your initialDelaySeconds values. Watch the deployment until there are some ready containers. Again, use a combination of describe and logs to troubleshoot any failed containers.

Now let's stream some logs to see what's happening behind the scenes. First, get the pods to find a pod in the deployment. Then use kubectl to get the logs. We can see that Kubernetes is firing all the probes. Probes ensure the containers behave correctly; specifically, in our case, the app tier only enters service once new server processes have booted and are ready. Temporarily flaky containers will be removed from service until the readiness probe passes. Remember that probes monitor containers after they start. So what if we need to test or prepare things before they start? This is a job for init containers.
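For example (the pod name is a placeholder):

```sh
kubectl get pods -n lesson203                      # find a pod in the app-tier deployment
kubectl logs -f <app-tier-pod-name> -n lesson203   # -f streams the log as probes fire
```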

Pods may declare any number of init containers. They run in sequence before the other containers in the pod are started. A pod is only ready once all of its init containers complete successfully. Init containers may use different images from the containers in a pod. This provides some benefits. They can contain utilities that are not desirable to include in the actual application image for security reasons. They can also contain utilities or custom code for setup that is not present in the application image. For example, there is no need to build an image just to use a tool like sed, awk, python, or dig during setup. They also provide an easy way to block or delay the startup of an application until some preconditions are met. Init containers have full access to the pod's environment variables as well. All these features together make init containers a vital part of the Kubernetes toolbox.

There is one important thing to understand about init containers: they run every time a pod is created. This means they will run once for every replica in a deployment. Thus init containers must assume they will run at least once. I repeat: init containers always run in at-least-once mode. Let's continue by adding a new init container to our app tier that will wait for Redis before starting any application servers.

Reopen app-tier-deployment.yml with your editor. Init containers are added here as annotations. This is our first use of annotations in the course. Remember that annotations can be used by other parts of Kubernetes to add extra features. It is a bit unfortunate, but the annotation's structured data is stored as JSON. Note the single quotes in this example. This ensures the data is treated as a literal JSON string and not confused with structured YAML data. Init containers are configured in the same way as other containers. The npm run await-redis command is included in the example microservice application image. This command blocks, up to a timeout, until a connection is established with the configured redis URL environment variable.
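Rather than reproduce the deprecated annotation, here is the modern spec.initContainers equivalent of what the video configures; the command comes from the transcript, while the image name and Redis URL are illustrative:

```yaml
spec:
  initContainers:
  - name: await-redis
    image: sample-app-image            # placeholder: same image as the server container
    command: ['npm', 'run', 'await-redis']
    env:
    - name: REDIS_URL                  # illustrative; must point at the data tier service
      value: redis://data-tier:6379
```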

Now save the file and apply the changes to the existing deployment. After that, describe a pod. The event log now shows the entire lifecycle, including the init containers. Note that the new await-redis container is shown in the event log. You can also use the logs command to retrieve the logs for a given init container. This is especially important when debugging why one failed.
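The relevant commands (pod and container names are placeholders):

```sh
kubectl apply -f app-tier-deployment.yml -n lesson203
kubectl describe pod <app-tier-pod-name> -n lesson203         # event log shows the init steps
kubectl logs <app-tier-pod-name> -c await-redis -n lesson203  # logs for one init container
```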

This concludes our tour of probes and init containers. Together they give you full control over the pod lifecycle. Here is a summary of what we covered: probes monitor running containers; failed liveness probes restart dead containers; failed readiness probes take containers out of service; and init containers test preconditions for pods. I suggest you do some experiments to see how pods behave when probes fail, and also when init containers fail. Please refer to the official documentation for other things you can configure with probes and init containers.

The remaining lessons cover some other useful Kubernetes features before moving on to production preparedness. We've focused on stateless applications up to this point. The next lesson introduces persistent file storage. See you there.

About the Author

Students: 4750
Courses: 4
Learning paths: 1

Adam is a backend/service engineer turned deployment and infrastructure engineer. His passion is building rock-solid services and equally powerful deployment pipelines. He has been working with Docker for years and leads the SRE team at Saltside. Outside of work he's a traveller, beach bum, and trance addict.
