Course Introduction
Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
Course Conclusion
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
Learning Objectives
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Manage configuration data, sensitive data, and persistent data in Kubernetes
- Discuss popular tools and topics surrounding Kubernetes in the ecosystem
Intended Audience
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
Prerequisites
You should be familiar with:
- Working with Docker and comfortable using it at the command line
Source Code
The source files used in this course are available here:
Updates
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
May 7th, 2021 - Complete update of this course using the latest Kubernetes version and topics
The previous lesson covered deployment rollouts. Kubernetes assumes that a Pod is ready as soon as its containers start, but that's not always true. For example, if a container needs time to warm up, Kubernetes should wait before sending any traffic to the new Pod. It's also possible that a Pod is fully operational at first but becomes non-responsive after some time. For example, if it enters a deadlock state, Kubernetes shouldn't send any more requests to that Pod and would be better off restarting it.
Kubernetes provides probes to remedy both of these scenarios, and probes are sometimes referred to as health checks. The first type of probe is the readiness probe. Readiness probes are used to determine when a Pod is ready to serve traffic. As I mentioned before, a Pod is often not ready just after its containers have started. It may need time to warm caches or load configuration.
Readiness probes can monitor the containers until they are ready to serve traffic, but readiness probes are also useful long after startup. For example, if the Pod depends on an external service and that service goes down, it's not worth sending traffic to that Pod since requests can't be completed until the external service is back.
Readiness probes control the ready condition of a Pod. If a readiness probe succeeds, the ready condition is true; otherwise, it is false. Services use the ready condition to determine whether a Pod should be sent traffic. In this way, probes integrate with Services to ensure that traffic doesn't flow to Pods that aren't ready. This is a familiar concept if you've used a cloud load balancer: backend instances that fail health checks are not served traffic, just as Services won't send traffic to Pods that aren't ready.
Services are our load balancers in Kubernetes. The second type of probe is the liveness probe. Liveness probes are used to detect when a Pod has entered a broken state and can no longer serve traffic. In that case, Kubernetes will restart the Pod for you. That is the key difference between the two types of probes: readiness probes determine when a Service should temporarily stop sending traffic to a Pod, and liveness probes decide when a Pod should be restarted because it won't come back to life on its own. You declare both probes in the same way; you just have to decide which course of action is appropriate if a probe fails: stop serving traffic, or restart.
Probes are declared on the containers in a Pod, and all of a Pod's container probes must pass for the Pod to pass. You can define any of the following actions for a probe to check a container: a command that runs inside the container, an HTTP GET request, or the opening of a TCP socket. A command probe succeeds if the command's exit code is zero; otherwise, it fails. An HTTP GET probe succeeds if the response status code is between 200 and 399. A TCP socket probe succeeds if a connection can be established. By default, probes check containers every 10 seconds.
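To make that concrete, here's a rough sketch of how these probe actions can be declared on a container. The container name, image, port, and path below are illustrative placeholders, not the course's actual manifests:

```yaml
# Illustrative container spec showing the probe action types.
# Names, image, port, and paths are placeholders.
containers:
- name: example
  image: example-image:latest
  livenessProbe:
    tcpSocket:          # succeeds if a TCP connection can be opened
      port: 8080
    periodSeconds: 10   # the default probing interval
  readinessProbe:
    httpGet:            # succeeds on a response code between 200 and 399
      path: /healthz
      port: 8080
  # A command (exec) probe runs inside the container and succeeds
  # when the exit code is zero, for example:
  # readinessProbe:
  #   exec:
  #     command: ["cat", "/tmp/ready"]
```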
Our objective in the hands-on part of this lesson is to test our containers using probes. Specifically, we will add readiness and liveness probes to our application, using the application manifests from the deployments lesson as the base of our work. But before we get started creating probes, let's first crystallize the concepts by relating these probes to our application. The data tier contains one redis container. This container is alive if it accepts TCP connections. The redis container is ready if it responds to redis commands such as GET or PING.
There is a small but important difference between the two: a server may be alive but not necessarily ready to handle incoming requests. The API server is alive if it accepts HTTP requests, but the API server is only ready if it is online and has a connection to redis so it can increment the counter. The sample application has a path for each of these probes. The counter and poller containers are live and ready if they can make HTTP requests back to the API server.
So let's apply this knowledge to the deployment templates. We will go in the same order we just discussed but skip the support tier, because the app tier demonstrates the same functionality. Let's start by creating the probes namespace to isolate the resources in this lesson. Now take a look at this comparison, which shows that the addition of a name for the port and the probes are the only changes to the data tier deployment. The liveness probe uses the TCP socket type of probe in this example, and by using a named port, we can simply write the name rather than the port number. This will protect us in the future if the port number ever changes and someone forgets to update the probe's port number. Also, by setting initialDelaySeconds, we give the redis server adequate time to start.
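In case you aren't following along in the lab, a minimal sketch of the data tier container after these changes might look like this; the image tag and delay value are illustrative:

```yaml
# Sketch of the data tier container with a named port and a TCP
# liveness probe. Values are illustrative; the lab's manifest may differ.
containers:
- name: redis
  image: redis:latest
  ports:
  - name: redis            # naming the port lets probes refer to it by name
    containerPort: 6379
  livenessProbe:
    tcpSocket:
      port: redis          # stays correct even if containerPort changes
    initialDelaySeconds: 15  # give the redis server time to start
```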
We can also configure failure thresholds, delays, and timeouts for all probes. The default values will work for this example, but you can reference the Kubernetes documentation for more information on the different settings. Next, the readiness probe uses the exec type of probe to specify a command. This runs a command inside the container, similar to docker exec if you've used that before. The redis-cli ping command tests whether the server is up and ready to actually process redis-specific commands. Commands are specified as a list of strings.
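Continuing the sketch, the exec readiness probe on the same redis container might look like this; the five-second delay matches what we discuss below:

```yaml
  # Sketch of the exec readiness probe on the redis container.
  readinessProbe:
    exec:
      command: ["redis-cli", "ping"]  # passes when the exit code is zero
    initialDelaySeconds: 5
```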
Given that the consequence of failing a liveness probe is a Pod restart, it's generally advisable to give the liveness probe a higher delay than the readiness probe. I'll also point out that by default, three sequential probes need to fail before a probe is marked as failed, so we have some buffer. Kubernetes won't immediately restart the Pod the first time the probe fails, but we can configure it that way if we need to.
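For reference, every probe accepts the same timing and threshold settings; here's a sketch showing them with their Kubernetes default values:

```yaml
  # Probe tuning fields, shown with their Kubernetes default values.
  readinessProbe:
    exec:
      command: ["redis-cli", "ping"]
    periodSeconds: 10      # how often to run the probe
    timeoutSeconds: 1      # how long to wait for a response
    failureThreshold: 3    # consecutive failures before the probe fails
    successThreshold: 1    # consecutive successes before it passes again
```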
The particular delay depends on our application and how long it reasonably requires to start up. Five seconds should be more than enough before we start checking readiness. And by default, only a single probe needs to pass before any traffic is sent to the Pod. Setting the readiness initial delay too high would prevent Pods that are able to handle traffic from receiving any. So let's create the new and improved data tier.
Now we can watch the get output for the deployment to observe the impact of the probes. Note the READY column: it will show one of one replicas when the readiness check passes. With the watch option, new changes are appended to the bottom of the output, so we can see from the bottom line that the Pod transitions to the ready state after the number of seconds shown in the AGE column. Watch the deployment for a while to make sure things are running smoothly. If no new lines appear, there are no changes and everything has stayed up and running. However, if something does go awry, I'd recommend using a combination of the describe and logs commands to debug the issue.
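If you aren't using the lab's terminal, the commands for these steps might look roughly like this; the probes namespace matches the one we created earlier, while the manifest file name is a placeholder:

```sh
# The manifest file name is a placeholder for the lab's actual file.
kubectl create namespace probes
kubectl create -n probes -f data-tier.yaml

# Watch the deployment; the READY column flips to 1/1 once the
# readiness probe passes.
kubectl get deployments -n probes --watch
```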
Unfortunately, failed probe events don't show in the events output, but you can use the Pod's restart counter as an indicator of failed liveness probes. Logs are the most direct way to get at them. We will add some debug logging to the server so that you can see all the incoming probe requests. On to the app tier. Notice that a debug environment variable has been added, which will cause all of the server's requests to be logged.
Note that this environment variable is specific to the sample application; it is not a general-purpose Kubernetes setting. Further down, the probes are declared, and this time they are HTTP GET probes. They send requests to endpoints built into the server specifically for checking its health. The liveness probe endpoint does not actually communicate with redis; it's a dummy that returns 200 OK as its response for every request. The readiness probe endpoint checks that the data tier is available. We're also setting initialDelaySeconds so the process has adequate time to start.
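A sketch of what the app tier container might look like with the debug variable and HTTP GET probes; the image name, endpoint paths, port, and DEBUG value are placeholders rather than the sample application's actual values:

```yaml
# Sketch of the app tier container with HTTP GET probes. The image,
# paths, port, and DEBUG value are illustrative placeholders.
containers:
- name: server
  image: example/api-server:latest
  ports:
  - name: server
    containerPort: 8080
  env:
  - name: DEBUG                 # app-specific flag; logs every request
    value: "true"
  livenessProbe:
    httpGet:
      path: /probe/liveness     # dummy endpoint; always returns 200 OK
      port: server
    initialDelaySeconds: 5
  readinessProbe:
    httpGet:
      path: /probe/readiness    # verifies the data tier connection
      port: server
    initialDelaySeconds: 3
```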
So let's create the app tier deployment, and we'll subsequently watch that deployment to verify the containers are alive and ready. It may take some time to start the containers and wait out the readiness probe's initial delay, but after a short wait, the replica will be ready. We can now stream the logs to see what's happening behind the scenes.
First, we get the Pods to find a Pod in the deployment, then use kubectl logs with the -f option to follow the log stream. I'll also use cut to trim down the output. We can see that Kubernetes is firing both probes at 10-second intervals. With the help of these probes, Kubernetes can take Pods out of service when they aren't ready and restart them when they enter a broken state.
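The commands for this step might look roughly like the following; the pod name and cut range are placeholders:

```sh
# Find a Pod in the app tier deployment.
kubectl get pods -n probes

# Follow its log stream; cut trims each line for readability.
# <app-tier-pod-name> is a placeholder for the name from the output above.
kubectl logs -n probes -f <app-tier-pod-name> | cut -c 1-100
```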
To summarize what we saw in this lesson: containers in Pods can declare readiness probes to allow Kubernetes to monitor when they're ready to serve traffic and when they should temporarily be taken out of service. Containers in Pods can also declare liveness probes to allow Kubernetes to detect when they have entered a broken state and the Pod should be restarted.
Both types of probes have the same format in manifest files and can make use of any of the command, HTTP GET, or TCP socket probe types. Remember that probes kick in after containers are started. If you need to test or prepare things before a container starts, there is a way to do that as well: that is the role of init containers, and it is the subject of our next lesson. I'll see you there.
Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in networking and operations in the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan has a number of specialties, including the Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, and AWS Solutions Architect certifications, and he is certified in Project Management.