Course Introduction
Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
Course Conclusion
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
Learning Objectives
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Learn how to manage configuration, sensitive, and persistent data in Kubernetes
- Discuss popular tools and topics surrounding Kubernetes in the ecosystem
Intended Audience
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
Prerequisites
You should be familiar with:
- Working with Docker and comfortable using it at the command line
Source Code
The source files used in this course are available here:
Updates
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
May 7th, 2021 - Complete update of this course using the latest Kubernetes version and topics
This lesson will introduce you to working with a Kubernetes cluster, and we're going to be specifically focusing on pods. By doing so, you will see first-hand the patterns used by kubectl and some examples of manifest files. But first, let's review the theory. Pods are the basic building block in Kubernetes. Pods contain one or more containers, and we're going to be sticking with one container per pod in this lesson, but we'll be talking about multi-container pods later.
All pods share a container network that allows any pod to communicate with any other pod, regardless of the nodes the pods are running on. Each pod gets a single IP address in the container network, and Kubernetes does all the heavy lifting to make that happen. You get to work with the simple abstraction: all pods can communicate with each other, and each pod has one IP address.
Because pods include containers, the declaration of a pod includes all the properties you would expect from, for example, docker run. These include the container image, any ports you want to publish to allow access to the container, a restart policy to determine if the pod should automatically restart when its container fails, and limits on CPU and memory resources. But there are also a variety of other properties that are specific to pods in Kubernetes. We're going to be seeing many examples of those in the coming lessons.
All of the desired properties are written in a manifest file. Manifest files are used to describe all kinds of resources in Kubernetes, not only pods. Based on the kind of resource that the manifest file describes, you will configure different properties of that file. The configuration specific to each kind of resource is referred to as its specification or spec. The manifests are sent to the Kubernetes API server where the necessary actions are taken to realize what is described in the manifest. You will use kubectl to send a manifest to the API server and one way of doing this is with the kubectl create command.
For pod manifests, the cluster will take the following actions: selecting a node with available resources for all of the pod's containers and scheduling the pod to that node. The node will then download the pod's container images and run the containers. There are more steps involved, but that is more than enough to get the idea. We mentioned before that kubectl also provides sub-commands to directly create resources without manifests.
It's usually a good idea to stick with manifests for several reasons. You can check your manifests into a source control system to track their history and roll back when needed. It makes it easy to share your work so it can be recreated in other clusters, and it's also easier to work with compared to stringing together sequences of commands with many options to achieve the same result. So we're going to be sticking with manifests for this course.
Now that we're ready to see all this in action using kubectl and a Kubernetes cluster, our goal will be to deploy an Nginx web server using a Kubernetes pod. If you are using the Introduction to Kubernetes Playground, follow the lab instructions to connect to the bastion EC2 instance, or feel free to connect using a local terminal like I am. If you use a different solution for a Kubernetes cluster, simply follow their provided instructions to make sure kubectl can talk to the cluster.
So I'm here at my terminal, connected to the bastion host, which has kubectl configured to talk to the lab cluster. To confirm that kubectl is configured to talk to the cluster, we can enter our first kubectl command: kubectl get pods. The output tells us that no pod resources were found in the default namespace of the cluster. If kubectl wasn't able to connect to the API server, you would have seen an error message instead, so everything looks good.
Let's start with a minimal example of a pod manifest to get a taste for manifests. We'll gradually build them up as we go. I've prepared the 1.1-basic_pod.yaml file for this. All the course files are preloaded into the source directory on the lab instance and are also available in the course GitHub repo. This manifest declares a pod with one container that uses the nginx:latest image. All manifests have the same top-level keys: apiVersion, kind, and metadata, followed by the spec.
Kubernetes supports multiple API versions, and v1 is the core API version containing many of the most common resources, such as pods and nodes. Kind indicates what the resource is. Metadata includes information relevant to the resource that can help identify it. The minimum amount of metadata is a name, which here is set to mypod.
Names must be unique within a Kubernetes namespace. The spec is the specification for the declared kind, and it must match what is expected by the declared API version. For example, the spec can change between the beta and the generally available API version of a resource. The spec is essentially where all of the meat goes. You can refer to the official API docs for complete information on all versions and supported fields. I'll explain the ones we need for this course, but know that there are far more left to discover.
The pod spec defines the containers in the pod. The minimum required field is a single container, which must declare its image and name. This pod only has a single container, but the YAML is a list, allowing you to specify more than one. Back at the command line, we can create the pod by changing into the source directory with cd source. We then issue kubectl create -f 1.1-basic_pod.yaml. The -f option tells the create command to create the resource from a manifest file. For any kubectl command, you can always append --help to display the help page and get more information.
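As a reference, here is a minimal sketch of what a basic pod manifest like 1.1-basic_pod.yaml could look like, based on the fields discussed above. The pod name mypod is taken from the transcript, and the container name is an assumption for illustration; the exact file contents in the course repo may differ slightly.

```yaml
apiVersion: v1              # core API version that contains the Pod resource
kind: Pod                   # the kind of resource this manifest declares
metadata:
  name: mypod               # must be unique within the namespace
spec:
  containers:               # a list, so more than one container is allowed
    - name: mypod-container # container name (assumed for illustration)
      image: nginx:latest   # the image the container runs
```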
Now if we run kubectl get pods, we can see mypod is running. mypod is technically an object of the Pod kind of resource, but it is common to simply use resource to describe the object as well as the kind. kubectl shows the name, the number of running containers, the pod state, restarts, and the age of the pod in the cluster. You should memorize the get command since you'll use it all the time, and I really mean all of the time.
Let's see some more detailed information about this particular pod by using the describe command to get complete information: kubectl describe pod, and we're going to pipe it to more. Describe takes a resource kind just like get, and to narrow in on specific resources of that kind, we add the name, which you can also do with get. We're piping the output to more so we can press the space bar to page through the output.
As you can see, there's a lot more information than what get provides. The name, namespace, and the node running the pod are given at the top, along with other metadata. Also note that the pod is assigned an IP. No matter how many containers we include, there is only one IP. In the containers section, we can see the image and whether or not the container is ready. You can also see that the port and host port are both set to none.
Ports are part of the container spec, but Kubernetes assigns default values for us. Just like with Docker, you need to tell Kubernetes which port to publish if you want the container to be accessible. We'll have to go back and declare our port after this; otherwise, nothing is going to reach the web server. And at the bottom, we have the events section. It lists the most recent events related to the resource. You can see the steps Kubernetes took to start the pod, from scheduling it onto a node and pulling the container image to starting the container. The events section is shared by most kinds of resources when you use describe and is very helpful for debugging.
Let's tell Kubernetes which port to publish to allow access to the web server. I've prepared the 1.2 port file specifically for that. Compared to the 1.1 file, we can see the ports mapping is added and the containerPort field is set to 80 for HTTP. Kubernetes will use TCP as the protocol by default and will assign an available host port automatically, so we don't need to declare anything more.
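As a sketch, the ports addition could look like the following, keeping the assumed names from the earlier example; the exact course file may differ:

```yaml
spec:
  containers:
    - name: mypod-container
      image: nginx:latest
      ports:
        - containerPort: 80   # publish HTTP; protocol defaults to TCP
```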
Kubernetes can apply certain changes to different kinds of resources on the fly. Unfortunately, Kubernetes cannot update ports on a running pod, so we need to delete the pod and recreate it. We're going to run kubectl delete pod mypod to delete this pod. You can also specify -f and reference the 1.1 file, and Kubernetes will delete all of the resources declared in that file.
Now, we can issue the command kubectl create -f 1.2.yaml and describe the pod again. You don't need to describe the pod every single time; I just prefer to do this to see the result of my work and make sure everything went as I expected. Now we can see that port 80 is given as the port, so you may think to try to send a request to port 80 on the listed IP, but it still won't work. Why do you think that is? Well, the pod's IP is on the container network, and this lab instance is not part of the container network, so it won't work. But if we sent the request from a container in a Kubernetes pod, the request would succeed, since pods can communicate with all other pods by default. We'll see how we can access the web server from the lab instance in the next lesson.
Before we move on, I want to cover a couple more points: the first is shared between all resources and the second is specific to pods. In the describe output, you might have seen the labels field was set to none. Labels are key-value pairs that identify resource attributes, for example, the application tier, whether it's front end or back end, or maybe a region such as US East or US West.
In addition to providing meaningful, identifying information, labels are used to make selections in Kubernetes. For example, you could tell kubectl to get only resources in the US West region. Our 1.3 manifest has a label added to identify the type of app that the pod is a part of. We're running an Nginx web server, so the label value is webserver. You could have multiple labels, but one is enough in this example.
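A sketch of the label addition in the metadata section follows; the label key app and the value webserver are assumptions based on the description above:

```yaml
metadata:
  name: mypod
  labels:
    app: webserver   # identifies the type of app this pod is part of
```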
The last point I want to make is that Kubernetes can schedule pods based on their resource requests. The pods we've seen so far don't have any resource requests set, which makes them easier to schedule because the scheduler doesn't need to find nodes with the requested amounts of resources. It'll just put them onto any node that isn't under pressure or starved of resources. However, these pods will be the first to be evicted if a node comes under pressure and needs to free up resources. That's called the BestEffort quality of service, which was displayed in the describe output. BestEffort pods can also create resource contention with other pods on the same node, so it's usually a good idea to set resource requests.
In the 1.4 yaml, I've set a resource request and limit for the pod's container. The request sets the minimum required resources to schedule the pod onto a node, and the limit is the maximum amount of resources you want the node to ever give the pod. You can set resource requests and limits for each container. There's also support for requesting amounts of local disk by using ephemeral storage.
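Here is a sketch of how resource requests and limits are set on a container. The specific CPU and memory values are illustrative, not the course's; setting requests equal to limits is what gives the pod the Guaranteed quality of service mentioned next.

```yaml
spec:
  containers:
    - name: mypod-container
      image: nginx:latest
      ports:
        - containerPort: 80
      resources:
        requests:             # minimum resources needed to schedule the pod
          memory: "128Mi"
          cpu: "100m"
        limits:               # maximum the node will ever give the container
          memory: "128Mi"
          cpu: "100m"
```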
When we create this pod, with kubectl delete pod mypod and then kubectl create -f 1.4, the pod will be guaranteed the resources it requested, or it won't be scheduled until those resources are available. kubectl describe pod mypod will now show the Guaranteed quality of service for our pod. You need to do some benchmarking to configure a reasonable request and limit, but the effort is well worth it to ensure your pods have the resources they need and to get the best utilization of the resources in the cluster. This is one of the reasons why we are using containers in the first place.
For the rest of this course, we will use best-effort pods since we won't have any specific resource requirements in mind. This isn't something you should do in production environments, however. We've covered a lot in this lesson, so let's review. Pods are the basic building block in Kubernetes and contain one or more containers. You declare pods and other resources in manifest files. All manifests share an API version, kind, and metadata related to that resource. Metadata must include a name, and labels are usually a good idea to help you further filter down your resources.
Manifests also include a spec to configure the unique parts of each resource kind. Pod specs include the list of containers, which must specify a container name and image, and it is often useful to set resource requests and limits. We're going to see more fields of pod specs in later lessons.
In our next lesson, we're going to make the web server running in the pod accessible from our lab VM with services. I'll see you there.
Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in the networking and operations side of the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan holds a number of certifications, including Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, and AWS Solutions Architect, and is certified in project management.