
Probes

Overview

Difficulty: Beginner
Duration: 1h 58m
Students: 3780
Rating: 4.4/5

Description

Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes, including what it is and how to use it.

This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.

The source files used in this course are available in the course's GitHub repository.

Learning Objectives 

  • Describe Kubernetes and what it is used for
  • Deploy single and multiple container applications on Kubernetes
  • Use Kubernetes services to structure N-tier applications 
  • Manage application deployments with rollouts in Kubernetes
  • Ensure container preconditions are met and keep containers healthy
  • Learn how to manage configuration, sensitive, and persistent data in Kubernetes
  • Discuss popular tools and topics surrounding Kubernetes in the ecosystem

Intended Audience

This course is intended for:

  • Anyone deploying containerized applications
  • Site Reliability Engineers (SREs)
  • DevOps Engineers
  • Operations Engineers
  • Full Stack Developers

Prerequisites

You should be familiar with:

  • Working with Docker and using it comfortably at the command line

Updates

August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics

 

Transcript

The previous lesson covered deployment rollouts. There, Kubernetes assumed that a pod was ready as soon as its container started. That isn't always true: if the container needs time to warm up, for example, Kubernetes should wait before sending any traffic to the new pod. It's also possible that a pod is fully operational at first but becomes non-responsive after some time, for example if it enters a deadlock state. In that case Kubernetes shouldn't send any more requests to the pod and would be better off restarting it. Kubernetes provides probes to remedy both of these situations. Probes are sometimes referred to as health checks.

 

The first type of probe is a readiness probe. Readiness probes are used to detect when a pod is ready to serve traffic. As I mentioned before, a pod often isn't ready the moment its containers have started. The containers may need time to warm caches or load configuration. Readiness probes can monitor the containers until they are ready to serve traffic. But readiness probes are also useful long after startup. For example, if the pod depends on an external service and that service goes down, it isn't worth sending traffic to the pod, since it can't complete requests until the external service is back online. Readiness probes control the ready condition of a pod: if a readiness probe succeeds, the ready condition is true; otherwise it is false. Services use the ready condition to determine whether pods should be sent traffic. In this way, probes integrate with services to ensure that traffic doesn't flow to pods that aren't ready for it. This is a familiar concept if you have used a cloud load balancer: backend instances that fail health checks aren't sent traffic, just as services won't send traffic to pods that aren't ready. Services are our load balancers in Kubernetes.

 

The second type of probe is called a liveness probe. Liveness probes are used to detect when a pod has entered a broken state and can no longer serve traffic. In that case, Kubernetes will restart the pod for you. That is the key difference between the two types of probes: readiness probes determine when a pod is temporarily unable to serve traffic and should be taken out of service, while liveness probes determine when a pod won't come back to life and should be restarted. You declare both probes in the same way; you just have to decide which course of action is appropriate when a probe fails: stop serving traffic or restart.
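Schematically, the two declarations are parallel; only the field name and the consequence of failure differ. Here is a minimal sketch, where the endpoints and port are placeholders rather than values from the course:

readinessProbe:      # on failure: pod is marked not ready, traffic stops
  httpGet:
    path: /ready     # placeholder endpoint
    port: 8080       # placeholder port
livenessProbe:       # on failure: the pod is restarted
  httpGet:
    path: /alive     # placeholder endpoint
    port: 8080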

 

Probes can be declared on containers in a pod. All of a pod's containers' probes must pass for the pod to pass. You can define any of the following as the action a probe performs to check the container:

  • A command that runs inside the container
  • An HTTP GET request
  • Opening a TCP socket

 

A command probe succeeds if the exit code of the command is 0; otherwise it fails.

An HTTP GET request probe succeeds if the response status code is between 200 and 399, inclusive.

A TCP socket probe succeeds if a connection can be established.

By default, probes check their containers every 10 seconds.
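As a minimal sketch, here is how each action type is declared under a probe; the command, path, and ports below are placeholders, not values from the course manifests:

# Command (exec) probe: succeeds when the command exits with code 0
readinessProbe:
  exec:
    command: ["cat", "/tmp/ready"]   # placeholder command

# HTTP GET probe: succeeds on a response status between 200 and 399
readinessProbe:
  httpGet:
    path: /health                    # placeholder path
    port: 8080                       # placeholder port

# TCP socket probe: succeeds when a connection can be established
readinessProbe:
  tcpSocket:
    port: 6379                       # placeholder port
  periodSeconds: 10                  # the default; probes fire every 10 seconds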

 

Our objective in the hands-on part of this lesson is to test our containers using probes. Specifically, we will add readiness and liveness probes to our application. We will use the application manifests from the deployments lesson as the base of our work in this lesson.

 

Before we start creating probes, let's first crystallize the concepts by relating the probes to our application. The data tier contains one Redis container. This container is alive if it accepts TCP connections. The Redis container is ready if it responds to Redis commands such as get or ping. There is a small but important difference between the two: a server may be alive but not necessarily ready to handle incoming requests. The API server is alive if it accepts HTTP requests, but the API server is only ready if it is online and has a connection to Redis to request and increment the counter. The sample application has a path for each of these probes. The counter and poller containers are live and ready if they can make HTTP requests back to the API server. Let's apply this knowledge to the deployment templates. We will go in the same order we just discussed, but skip the support tier because the API server demonstrates the same functionality.

 

We’ll start by creating a probes namespace to isolate the resources in this lesson.

kubectl create -f 7.1.yaml 
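For reference, 7.1.yaml is presumably just a Namespace manifest along these lines:

apiVersion: v1
kind: Namespace
metadata:
  name: probes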

 

Now take a look at this comparison. Adding a name for the port and adding the probes are the only changes to the data tier deployment. The liveness probe uses the TCP socket type of probe in this example. By using a named port, we can simply write the name rather than the port number. That protects us in the future if the port number ever changes and someone forgets to update the probe's port number. Also set the initial delay seconds to give the Redis server adequate time to start. We can also configure failure thresholds, delays, and timeouts for all probes. The default values work well for this example; you can reference the Kubernetes documentation for complete information. Next, the readiness probe uses the exec type of probe to specify a command. This runs the command inside the container, similar to docker exec if you've used that before. The redis-cli ping command tests whether the server is up and ready to actually process Redis-specific commands. Commands are specified as lists of strings. Also set the initial delay seconds here. Given that the consequence of failing a liveness probe is restarting the pod, it's generally advisable to give the liveness probe a higher initial delay than the readiness probe. I'll also point out that by default three sequential probes need to fail before a probe is marked as failed, so there is some buffer built in. Kubernetes won't immediately restart the pod the first time a probe fails, unless you configure it that way.

The particular delay values depend on your application and how long it reasonably requires to start up. Five seconds should be enough to start checking readiness. By default, only a single probe needs to pass before traffic is sent to the pod, so setting the readiness initial delay too high will prevent pods that are able to handle traffic from receiving any. The sketch below pulls these pieces together.
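Here is a rough sketch of the resulting data tier container spec; the image tag and the liveness delay value are assumptions, and the course files may differ:

containers:
  - name: redis
    image: redis:latest              # assumed tag; the course may pin a version
    ports:
      - containerPort: 6379
        name: redis                  # naming the port lets the probe refer to it
    livenessProbe:
      tcpSocket:
        port: redis                  # the name survives port number changes
      initialDelaySeconds: 15        # assumed value; give Redis time to start
    readinessProbe:
      exec:
        command:                     # commands are specified as lists of strings
          - redis-cli
          - ping
      initialDelaySeconds: 5         # five seconds is enough to start checking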

 

Let's create the new and improved data tier.

kubectl create -f 7.2.yaml -n probes

Now we can watch the get output for the deployment to observe the impact of the probes.

kubectl get deployments -n probes -w

The -w watch option is especially handy for this case. Note the READY column. It will show one of one replicas once the readiness check passes. With the watch option, new changes are appended to the bottom of the output, so the bottom line shows that the pod transitioned to ready after the number of seconds shown in its AGE column.
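The watch output follows the standard kubectl get deployments layout and will look something like this; the name and timings here are illustrative:

NAME        READY   UP-TO-DATE   AVAILABLE   AGE
data-tier   0/1     1            0           0s
data-tier   1/1     1            1           10s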

Watch the deployment for a while to make sure things stay running. If no new lines appear, there are no changes and everything has stayed up and running. If something did go awry, I'd recommend using a combination of the describe and logs commands to debug the issue. Unfortunately, failed probe events don't show in the events output, but you can use the pod restart count as an indicator of failed liveness probes. Logs are the most direct way to get at them, though. Next, we will add some debug logging to the server so that you can see all the incoming probe requests.

 

On to the app tier. Notice that the debug environment variable has been added, which will cause all of the server's requests to be logged. Note that this environment variable is specific to the sample application and not a general-purpose setting. Further down, both probes are declared, and this time they are HTTP GET probes. They send requests to endpoints built into the server specifically for checking its health. The liveness probe endpoint does not actually communicate with Redis; it's a dummy that returns a 200 OK response for all requests. The readiness probe endpoint checks that the data tier is available. Also set the initial delay seconds so the process has adequate time to start. A rough sketch follows.
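The probe-related parts of the app tier spec look roughly like this; the endpoint paths, port, environment variable value, and delay values are placeholders, not necessarily what the course manifest uses:

containers:
  - name: server
    image: example/server            # placeholder image
    env:
      - name: DEBUG                  # app-specific: logs every incoming request
        value: "true"                # placeholder value
    ports:
      - containerPort: 8080          # placeholder port
        name: server
    livenessProbe:
      httpGet:
        path: /probe/liveness        # assumed endpoint; a dummy returning 200 OK
        port: server
      initialDelaySeconds: 5         # assumed value
    readinessProbe:
      httpGet:
        path: /probe/readiness       # assumed endpoint; verifies the data tier
        port: server
      initialDelaySeconds: 2         # assumed value, lower than the liveness delay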

Let's create the app tier deployment.

kubectl create -f 7.3.yaml -n probes

Watch the deployment like before to verify containers are alive and ready. 

kubectl get -n probes deployments app-tier -w

It may take some time for the containers to start and for the readiness probe's initial delay seconds to elapse. But after a short delay, the replica is ready.

Now let's stream some logs to see what's happening behind the scenes. 

 

First, get the pods to find a pod in the deployment.

kubectl get -n probes pods 

Then use kubectl logs with the -f option to follow the log stream. I'll use cut to narrow in on what's important for us.

kubectl logs -n probes app-tier-... | cut -d' ' -f5,8-11

We can see that Kubernetes is firing both probes at 10-second intervals. With the help of these probes, Kubernetes can take pods out of service when they aren't ready and restart them when they enter a broken state.

 

To summarize what we saw in this lesson:

Containers in pods can declare readiness probes to allow Kubernetes to monitor when they are ready to serve traffic and when they should temporarily be taken out of service.

Containers in pods can declare liveness probes to allow Kubernetes to detect when they have entered a broken state and the pod should be restarted.

Both types of probes have the same format in manifest files and can use any of the command, HTTP GET, or TCP socket probe types.

 

Remember that probes kick in after containers are started. If you need to test or prepare things before the containers start, there is a way to do that as well. That is the role of init containers and it is the subject of our next lesson. I’ll meet you there.

About the Author


Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.
