Course Introduction
Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
Course Conclusion
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
Learning Objectives
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Manage configuration data, sensitive data, and persistent data in Kubernetes
- Discuss popular tools and topics in the Kubernetes ecosystem
Intended Audience
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
Prerequisites
You should be familiar with:
- Working with Docker and comfortable using it at the command line
Source Code
The source files used in this course are available here:
Updates
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
May 7th, 2021 - Complete update of this course using the latest Kubernetes version and topics
The previous lessons have created pods directly, but I've got to be honest with you, we've been cheating a bit so far. You're not really supposed to create pods directly. Instead, a pod is really just a building block. Pods should be created via a higher-level abstraction such as a deployment. This way, Kubernetes can add on useful features and higher-level concepts to make your life easier.
This lesson covers the basics of deployments, with the following lessons covering auto-scaling and rolling updates. So let's start by covering some theory, and then we'll see deployments in action. A deployment represents multiple replicas of a pod. Pods in a deployment are identical, and within a deployment's manifest you embed a pod template that has the same fields as the pod specs we have written before.
So you describe a state in the deployment, for example, five pod replicas of Redis version five, and Kubernetes takes the steps required to bring the actual state of the cluster to that desired state that you've specified. If for some reason one of the five replica pods is deleted, Kubernetes will automatically create a new one to replace it. You can also modify the desired state and Kubernetes will converge the actual state to that desired state. We'll see a bit of that in this lesson with more on updates in a later lesson. The Kubernetes master components include a deployment controller that takes care of managing the deployment.
Now, let's see how this works in practice. We'll use our microservices three-tier application to demonstrate deployments, and we'll replace the individual pods with deployments that manage the pods for us. Let's start by creating a new namespace called deployments for this lesson. We'll do that with kubectl create -f 5.1.yaml, just like we've done previously. Now, a deployment is a template for creating pods.
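As a reference point, the namespace manifest could be as minimal as the following sketch; the course's actual 5.1.yaml isn't shown here, so the exact contents are an assumption:

```yaml
# 5.1.yaml -- a minimal Namespace manifest (sketch; the course file may differ)
apiVersion: v1
kind: Namespace
metadata:
  name: deployments
```

Creating it with kubectl create -f 5.1.yaml sets up the namespace we'll use for everything else in this lesson.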
A template is used to create replicas, and a replica is a copy of a pod. Applications scale by creating more replicas. This will become clearer as you see the YAML files and as we demonstrate more features throughout this lesson. Now, I'm comparing the data tier manifest from the last lesson to our current manifest that uses deployments. I want to highlight that there are significant similarities, with just a few changes. The first change is that the API version is now apps/v1. Higher-level abstractions for managing applications are in their own API group and not part of the core API. The kind is set to Deployment, and our metadata from the last lesson is applied directly to the deployment.
Next comes the spec. The deployment spec contains deployment-specific settings and also a pod template, which has exactly the same pod spec as the last lesson in it. In the deployment-specific section, the replicas key sets how many pods to create for this particular deployment. Kubernetes will keep this number of pods running. We set the value to one because we can't run multiple Redis containers; we'll have one Redis pod.
Next, there's the selector mapping. Just like we saw with services, deployments use label selectors to group the pods that are in the deployment. The matchLabels mapping should overlap with the labels declared in the pod template below, and kubectl will complain if they don't overlap. The pod template metadata includes the labels on the pods.
Note that the metadata doesn't need a name in the template because Kubernetes generates unique names for each pod in the deployment. Similar changes are made to the app tier manifest and the support tier manifest, mainly adding a selector and a template for the deployment. We can complete the same process for the app and support tiers, also setting replicas to one in both cases.
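Putting those pieces together, a sketch of the data tier deployment manifest might look like the following; the names, labels, and image tag are assumptions based on the description above, not the exact course file:

```yaml
apiVersion: apps/v1           # deployments live in the apps API group, not the core API
kind: Deployment
metadata:
  name: data-tier             # assumed name for the data tier deployment
  namespace: deployments
spec:
  replicas: 1                 # one pod; we can't run multiple Redis containers
  selector:
    matchLabels:
      tier: data              # must overlap the pod template labels below
  template:
    metadata:
      labels:
        tier: data            # no name needed; Kubernetes generates unique pod names
    spec:
      containers:
        - name: redis
          image: redis:5      # "Redis version five" from earlier in the lesson
          ports:
            - containerPort: 6379
```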
One is actually the default, so it isn't strictly required, but it does emphasize that a deployment manages a group of identical replicas. So let's create the tiers now. Be sure to set the deployments namespace, and we'll use multiple -f options to create them all in one go, as shown below.
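Spelled out, the creation command might look like this; the exact file names are an assumption based on how they're read aloud:

```sh
kubectl create -n deployments -f 5.2.yaml -f 5.3.yaml -f 5.4.yaml
```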
Now let's get our deployments with kubectl get, setting the namespace to deployments. kubectl displays three deployments and their replica information. Note that they all show one replica right now. So remember that horrible scenario I described at the end of the last lesson? Well, we can see how deployments solve the problem by asking Kubernetes for the pods.
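Written out, the two commands in this step are the following; the second lists the pods backing the deployments:

```sh
kubectl get -n deployments deployments
kubectl get -n deployments pods
```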
Note that each pod has a hash at the end of its name. Deployments add this uniqueness to the names automatically, allowing us to identify the pods of a particular deployment version. We can see how this works by running more than one replica in a deployment. We'll use the kubectl scale command for modifying replica counts. We'll scale the number of replicas in the support tier to five, which will cause the counter to increase five times more quickly. The scale command is equivalent to editing the replicas value in the manifest file and then running kubectl apply to apply the change; it's just optimized for this one-off use case.
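As a concrete sketch, assuming the deployment is named support-tier:

```sh
kubectl scale -n deployments deployment support-tier --replicas=5
```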
Now, if we run kubectl get pods in the deployments namespace, we can see the pods again to see what happened. Note that the support tier pods continue to show two of two ready containers. This is because replicas replicate pods, not the individual containers inside of a pod. Deployments ensure that the specified number of replica pods is kept running, so we can test this by deleting some pods with kubectl delete, passing the deployments namespace and the support tier pod names including their hashes, and then watching as Kubernetes brings them back to life.
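For example, with placeholder pod names; substitute the hashed names from your own kubectl get pods output:

```sh
# delete a couple of support tier pods; replace <hash> with the real suffixes
kubectl delete -n deployments pods support-tier-<hash> support-tier-<hash>
# new replacement pods appear shortly afterwards
kubectl get -n deployments pods
```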
All right, so Kubernetes can resurrect pods and make sure the application runs the intended number of pods. As a side note, I used the Linux watch command with the -n 1 option to update the output every second. kubectl also supports watching by using the -w option, and any changes are appended to the bottom of the output, compared to overwriting the entire output with the Linux watch command. You might prefer one over the other depending on what you're watching. Let's go ahead and scale out the app tier to five replicas as well, using kubectl scale with the deployments namespace, as shown below.
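The two watching styles and the scale command look like this; app-tier is an assumed deployment name:

```sh
# Linux watch: redraws the full output every second
watch -n 1 kubectl get -n deployments pods
# kubectl's built-in watch: appends changes to the bottom of the output
kubectl get -n deployments pods -w
# scale the app tier out to five replicas
kubectl scale -n deployments deployment app-tier --replicas=5
```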
Now let's get the list of pods in the deployments namespace. As you can see, Kubernetes makes it really quite painless; it did all the heavy lifting for us. Now we can confirm that the app tier service is load balancing requests across the app tier pods by describing the service, using the commands below. Observe that the service now has five endpoints, matching the number of pods in the deployment. Thanks to label selectors, the deployment and the service are both able to track the pods in the tier.
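Written out, with assumed names matching the tiers:

```sh
kubectl get -n deployments pods
kubectl describe -n deployments service app-tier
```

The Endpoints line of the describe output should list five addresses, one per app tier pod.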
Let's review what we've done in this lesson. We've used deployments to have Kubernetes manage the pods in each application tier. By using deployments, we get the benefit of having Kubernetes monitor the actual number of pods and converge to our specified desired state. We also saw how we can use kubectl scale to modify the desired number of replicas; Kubernetes will do whatever it takes to realize the number of replicas we specify. We also saw how deployments seamlessly integrate with services that load balance across the deployment's pods.
A word of caution with scaling deployments is that you should make sure that the pods you are working with support horizontal scaling. That usually means that the pods are stateless as opposed to stateful. The data for the app tier is stored in the data tier, and we could add as many app tier pods as we like because the state of the application is stored inside of the data tier.
With our current setup, we can't scale the data tier out because that would create multiple copies of the application counter. However, even if we never scale the data tier, we still get the benefit of having Kubernetes return the data tier to its desired state by using a deployment. We also get more benefits when it comes to performing updates and rollbacks, which we'll see in a couple of lessons. So it still makes sense to use a deployment for the data tier. We should rarely be creating pods directly.
Kubernetes has even more tricks up its sleeve when it comes to scaling. We arbitrarily scaled the deployment here, but in practice, you would want to scale based on CPU load or some other metric, reacting to the current state of the system to make the best use of available resources. So let's see how to do that in the next lesson.
Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in the networking and operations side of the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan holds a number of specialties, including Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, AWS Solutions Architect, and a certification in Project Management.