Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
The source files used in this course are available in the course's GitHub repository.
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Manage configuration, sensitive, and persistent data in Kubernetes
- Discuss popular tools and topics surrounding Kubernetes in the ecosystem
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
You should be familiar with:
- Working with Docker and comfortable using it at the command line
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
The previous lessons have created pods directly, but I've got to be honest with you, we've kind of been cheating a bit so far. You're not really supposed to create pods directly. Instead, a pod is really just a building block. They should be created via higher level abstractions, such as deployments. This way, Kubernetes can add on useful features and higher level concepts to make your life easier. This lesson covers the basics of deployments with the following lessons covering autoscaling and rolling updates.
Let’s start by covering the theory and then we’ll see deployments in action. A deployment represents multiple replicas of a pod. Pods in a deployment are identical, and within a deployment’s manifest you embed a pod template that has the same fields as the pod specs we have written before. You describe a desired state in the deployment, for example 5 pod replicas of Redis version 5, and Kubernetes takes the steps required to bring the actual state of the cluster to the desired state you specify, at a controlled rate. If for some reason one of the 5 replica pods is deleted, Kubernetes will automatically create a new one to replace it. You can also modify the desired state and Kubernetes will converge the actual state to the desired state. We’ll see a bit of that in this lesson, with more on updates in a later lesson. The Kubernetes master components include a deployment controller that takes care of managing deployments.
Now let’s see how this works in practice. We’ll use our microservices 3-tier application to demonstrate deployments. We’ll replace the individual pods with deployments that manage the pods for us.
We’ll start by creating a new namespace called deployments for this lesson.
kubectl create -f 5.1.yaml
just like we've done in the last lesson.
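The contents of 5.1.yaml aren’t shown in this transcript, but a namespace manifest is only a few lines. A minimal sketch of what it likely contains (the label is an illustrative assumption, not taken from the course files):

```yaml
# Hypothetical sketch of 5.1.yaml: creates the "deployments"
# namespace used throughout this lesson.
apiVersion: v1
kind: Namespace
metadata:
  name: deployments
  labels:
    app: counter   # assumed label for grouping; not from the actual file
```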
Now let’s see how deployments build on pods. A deployment is a template for creating pods. The template is used to create replicas, and a replica is just a copy of a pod. Applications scale by creating more replicas. This will become clearer when you see the YAML files and as we demonstrate more features throughout this lesson.
Now I am comparing the data tier manifest from the last lesson to our current manifest that uses deployments. I want to highlight that there are significant similarities with just a few changes. The first change is that the apiVersion is now apps/v1. Higher-level abstractions for managing applications are in their own API group and are not part of the core API. The kind is set to Deployment. The metadata from the last lesson is applied directly to the deployment. Next comes the spec. The deployment spec contains deployment-specific settings as well as a pod template, which embeds exactly the same pod spec as last lesson. In the deployment-specific section, the replicas key sets how many pods to create for this particular deployment. Kubernetes will keep this number of pods running. We set the value to 1 because there cannot be multiple Redis containers. Next there is a selector mapping. Just like we saw with services, deployments use label selectors to group the pods that are in the deployment. The matchLabels mapping must overlap with the labels declared in the pod template below; kubectl will complain if they don’t overlap. The pod template metadata includes the labels applied to the pods. Note that the metadata doesn’t need a name in the template, because Kubernetes generates unique names for each pod in the deployment.
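The actual data tier manifest isn’t reproduced in this transcript, but following the description above it might look like the sketch below. The image tag and label values are illustrative assumptions, not the course’s exact file:

```yaml
apiVersion: apps/v1          # deployments live in the apps API group, not core
kind: Deployment
metadata:
  name: data-tier
  labels:
    app: microservices       # assumed label values
spec:
  replicas: 1                # only one Redis pod; the data tier can't scale out
  selector:
    matchLabels:
      tier: data             # must overlap the pod template labels below
  template:
    metadata:
      labels:
        app: microservices
        tier: data           # no name here; Kubernetes generates unique pod names
    spec:                    # same pod spec as in the previous lesson
      containers:
        - name: redis
          image: redis:5.0.4 # assumed tag
          ports:
            - containerPort: 6379
```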
Similar changes are made to the app tier manifest and the support tier manifest. Mainly adding a selector and template for the deployment.
We can complete the same process for the app and support tiers. Also set replicas to 1 for both cases. 1 is actually the default value so it isn’t strictly required but it does emphasize that a deployment manages a group of identical replicas.
Let’s create the tiers now. I’ll be sure to set the deployments namespace and will use multiple -f options to create them all in one go.
kubectl create -n deployments -f 5.2.yaml -f 5.3.yaml -f 5.4.yaml
Now get the deployments
kubectl get -n deployments deployments
kubectl displays three deployments and their replica information. Note that they all show one replica right now. Remember that horrible scenario I described at the end of the last lesson, with peppering v1 and the like onto the end of the pod names? Well, we can see how deployments solve that problem by asking Kubernetes for the pods.
kubectl -n deployments get pods
Note that each pod name has a hash at the end of it. The deployment adds uniqueness to the names automatically to identify pods of a particular deployment version. We can see how this works by running more than one replica in a deployment. We’ll use the kubectl scale command for modifying replica counts. We’ll scale the number of replicas in the support tier to 5, which will cause the counter to increase five times more quickly.
kubectl scale -n deployments deployment support-tier --replicas=5
The scale command is equivalent to editing the replicas value in the manifest file and then running kubectl apply to apply the change. It's just optimized for this one-off use case.
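As a sketch of that declarative equivalent (file name from this lesson; the original replica value is from the text above):

```yaml
# In the support tier manifest (5.4.yaml), bump the replica count...
spec:
  replicas: 5   # was 1
# ...then re-apply the manifest to converge to the new desired state:
#   kubectl apply -n deployments -f 5.4.yaml
```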
Now we can check the pods again to see what happened
kubectl -n deployments get pods
Note that the support tier pods continue to show two of two ready containers. This is because replicas replicate pods, not the individual containers inside of a pod. Deployments ensure that the specified number of replica pods are kept running. We can test this by deleting some pods
kubectl delete -n deployments pods support-tier-... support-tier-... --wait=false
and watch as kubernetes brings them back to life
watch -n 1 kubectl -n deployments get pods
Alright, so K8s can resurrect pods and make sure the application runs the intended number of pods. As a side note I used the linux watch command with the -n 1 option to update the output every 1 second. kubectl get also supports watching by using the -w option and any changes are appended to the bottom of the output compared to overwriting the entire output with the linux watch command. You might prefer one over the other depending on what you are watching.
Now let's scale out the app tier to 5 replicas as well.
kubectl scale -n deployments deployment app-tier --replicas=5
And get the list of pods
kubectl -n deployments get pods
Kubernetes makes scaling painless, doing all the heavy lifting for us. Now let’s confirm that the app tier service is load balancing requests across the app tier pods. Describe the service
kubectl describe -n deployments service app-tier
And observe the service now has five endpoints matching the number of pods in the deployment. Thanks to label selectors the deployment and the service are able to track the pods in the tier.
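That link between the service and the deployment is worth seeing side by side. A hedged sketch of how the two manifests relate (label keys, port numbers, and the image are assumptions, not the course’s actual files):

```yaml
# Service: forwards traffic to any pod whose labels match the selector
apiVersion: v1
kind: Service
metadata:
  name: app-tier
spec:
  selector:
    tier: app          # matches the deployment's pod template labels below
  ports:
    - port: 8080
---
# Deployment: its pod template carries the matching label, so every
# replica it creates automatically becomes a service endpoint
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-tier
spec:
  replicas: 5
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        tier: app
    spec:
      containers:
        - name: server
          image: example/app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```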
Let’s review what we’ve done in this lesson.
We used deployments to have Kubernetes manage the pods in each application tier. By using deployments we get the benefit of having Kubernetes monitor the actual number of pods and converge to our specified desired state.
We also saw how we can use kubectl scale to modify the desired number of replicas, and Kubernetes does what it takes to realize that desired state. We also saw how deployments seamlessly integrate with services that load balance across the deployment’s pods.
One word of caution with scaling deployments: make sure that the pods you are working with support horizontal scaling. That usually means the pods are stateless rather than stateful. The data for the app tier is stored in the data tier, so we can add as many app tier pods as we like because the state of the application lives in the data tier. With our current setup we can’t scale the data tier out, because that would create multiple copies of the application counter. However, even if we never scale the data tier, we still get the benefit of having Kubernetes return it to its desired state by using a deployment. We also get more benefits when it comes to performing updates and rollbacks, which we’ll see in a couple of lessons. So it still makes sense to use a deployment for the data tier, and we should rarely create pods directly.
Kubernetes has even more tricks up its sleeve when it comes to scaling. We arbitrarily scaled the deployment, but in practice you would want to scale based on CPU load or some other metric, reacting to the current state of the system to make the best use of available resources. Let’s see how to do that in the next lesson.
About the Author
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.