The course is part of these learning paths
Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
The source files used in this course are available in the course's GitHub repository.
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Manage configuration, sensitive, and persistent data in Kubernetes
- Discuss popular tools and topics surrounding Kubernetes in the ecosystem
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
You should be familiar with:
- Working with Docker and comfortable using it at the command line
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
Course GitHub repository: https://github.com/cloudacademy/intro-to-k8s
This lesson will introduce you to working with a Kubernetes cluster and we will specifically focus on pods. By doing so you will see first-hand the patterns used by kubectl and some examples of manifest files. But first let’s review the theory.
Pods are the basic building block in Kubernetes. Pods contain one or more containers. We’ll stick with one container per pod in this lesson and talk about multi-container pods later. All pods share a container network that allows any pod to communicate with any other pod, regardless of the nodes the pods are running on. Each pod gets a single IP address in the container network. Kubernetes does all the heavy lifting to make that happen. You get to work with a simple abstraction: all pods can communicate with each other, and each pod has one IP address.
Because pods include containers, the declaration of a pod includes all of the properties you would expect for running containers, for example with docker run. These include the container image, any ports you want to publish to allow access into the container, a restart policy that determines whether a pod automatically restarts when its container fails, and limits on CPU and memory resources. But there are also a variety of other properties that are specific to pods and Kubernetes. We will see many examples of those in coming lessons.
All of the desired properties are written to a manifest file. Manifest files are used to describe all kinds of resources in Kubernetes, not only pods. Based on the kind of resource the manifest describes, you will configure different properties. The configuration specific to each kind of resource is referred to as its specification or spec.
The manifests are sent to the Kubernetes API server where the necessary actions are taken to realize what is described in the manifest. You will use kubectl to send the manifest to the API server. One way of doing this is with the kubectl create command. For pod manifests, the cluster will take the following actions:
- selecting a node with available resources for all of the pod’s containers
- scheduling the pod to that node
- downloading the pod’s container images on that node
- running the containers
There are more steps involved but that is enough to get the idea.
We mentioned before that kubectl also provides subcommands to directly create resources without using manifests. It’s usually a good idea to stick with manifests for several reasons:
- You can check your manifests into source control systems to track their history and roll back when needed
- It makes it easy to share your work so the same resources can be created in other clusters
- It’s easier than stringing together sequences of commands with many options to achieve the same result
We will stick with manifests for this course.
Now we’re ready to see all of this in action using kubectl and a Kubernetes cluster. Our goal will be to deploy an Nginx web server running in a Kubernetes pod. If you are using the Introduction to Kubernetes Playground Lab, follow the instructions to use EC2 Instance Connect to reach the bastion, or feel free to connect using a local terminal like I am. If you use a different solution for a Kubernetes cluster, simply follow its provided instructions to make sure kubectl can talk to the cluster.
I’m here at my terminal, connected to the bastion host, which has kubectl configured to talk to the Lab cluster. To confirm kubectl is configured to talk to the cluster, we can enter our first kubectl command:
kubectl get pods
The output tells us that no pod resources were found in the default namespace in the cluster. If it wasn’t able to connect to the API server, you would have seen an error message, so everything looks good.
Let’s start with a minimal example of a Pod manifest to get a taste for manifests. We’ll gradually build it up as we go. I’ve prepared the 1.1-basic_pod.yaml file for this. All of the course files are pre-loaded into the src directory on the Lab instance and are also available in the course GitHub repository. I’ll use Visual Studio Code throughout the course to view and compare the manifests, but you could also use any console editor for viewing, such as vi.

This manifest declares a pod with one container that uses the nginx:latest image. All manifests have the same top-level keys: apiVersion, kind, and metadata, followed by the spec. Kubernetes supports multiple API versions. v1 is the core API version containing many of the most common resources, such as pods and nodes. Kind indicates what the resource is. Metadata includes information relevant to the resource and can help identify resources. The minimum amount of metadata is a name, which is set to mypod here. Names must be unique within a Kubernetes namespace.

Spec is the specification for the declared kind and must match what is expected by the declared API version; for example, the spec can change between the beta and the generally available API version of a resource. The spec is essentially where all of the meat goes. You can refer to the official API docs for complete information on all versions and supported fields. I’ll explain the ones we need for this course, but know that there are more left to discover. The pod spec defines the containers in the pod. The minimum required configuration is a single container, which must declare its image and name. This pod only has a single container, but the YAML uses a list, allowing you to specify more than one.
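Putting that narration together, the manifest looks roughly like this. This is a sketch reconstructed from the description, not copied from the course repository, and the container name is an assumption:

```yaml
apiVersion: v1            # core API version, home of common resources like pods and nodes
kind: Pod                 # the kind of resource this manifest declares
metadata:
  name: mypod             # names must be unique within a namespace
spec:
  containers:             # a list, so more than one container can be declared
    - name: mypod         # container name is an assumption; any valid name works
      image: nginx:latest # image and name are the minimum required container fields
```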
Back at the command line, change into the src directory, then create the pod with
kubectl create -f 1.1-basic_pod.yaml
The -f option tells the create command you are using a manifest to declare the resources you want created. For any kubectl command, you can always append --help to display the help page to get more information. Now if we run
kubectl get pods
we can see mypod is running! mypod is technically an object of the Pod kind of resource, but it is common to use "resource" to describe the object as well as the kind. kubectl shows the name, the number of ready containers, the pod’s state, the number of restarts, and the age of every pod in the cluster. Note that this pod only has one container, so ready shows one of one. You should memorize the get command since you'll use it all of the time. And I really mean all of the time.
Let's see some more detailed information about this particular pod. Use the describe command to get complete information
kubectl describe pod mypod | more
Describe takes a resource kind, just like get, and to narrow in on a specific resource of that kind, we add the name, which you can also do with get. I’ll pipe the output to more so we can press spacebar to page through the output.
As you can see, there is a lot more information than what get provided. The name, namespace, and the node running the pod are given at the top, along with other metadata. Also note the pod is assigned an IP. No matter how many containers we included, there would be only one IP. In the containers section, we can see the image and whether or not the container is ready. You can also see the port and container port are both none. Ports are part of the container spec, but Kubernetes assigned default values for us. Just like with Docker, you need to tell Kubernetes which port to publish if you want the container to be accessible. We’ll have to go back and declare a port after this; otherwise nothing is going to reach the web server.
At the bottom is the Events section. It lists the most recent events related to the resource. You can see the steps Kubernetes took to start the pod, from scheduling and pulling the container image to starting the container. The events section is shared by most kinds of resources when you use describe and is very helpful for debugging.
Let’s tell Kubernetes which port to publish to allow access to the web server. I’ve prepared the 1.2 file for that. Compared to the 1.1 file, we can see a ports mapping is added, with the containerPort field set to 80 for HTTP. Kubernetes uses TCP as the protocol by default, so we don’t need to declare anything more.
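Sketching the change just described, the 1.2 manifest would look something like this. The field names are standard pod spec fields, but the surrounding details are assumed from the earlier manifest rather than taken from the course file:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod              # container name is an assumption
      image: nginx:latest
      ports:
        - containerPort: 80    # HTTP; protocol defaults to TCP
```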
Kubernetes can apply certain changes to different kinds of resources on the fly. Unfortunately, Kubernetes cannot update ports on a running pod, so we need to delete the pod and recreate it. Run
kubectl delete pod mypod
to delete the pod. You could also specify -f with the 1.1 file, and Kubernetes would delete all the resources declared in that file. It is a bit clunky to have to delete and recreate a pod whenever its spec changes, but bear with me for now. We’ll see a seamless way to manage such changes in later lessons. Now we can
kubectl create -f 1.2.yaml
And describe the pod again
kubectl describe pod mypod | more
You don't need to describe the pod every single time; I just prefer to do this to see the result of my work and make sure everything went as I expected. We can see port 80 is given as the port. So now you may think to try sending a request to port 80 on the noted IP,
but it still doesn’t work. Why do you think that is? Well, the pod’s IP is on the container network. The Lab instance is not part of the container network, so it won’t work. If, however, you sent the request from a container in a Kubernetes pod, the request would succeed, since pods can communicate with all other pods by default. We’ll see how we can access the web server from the Lab instance in the next lesson.
Before we move on I want to cover a couple more points, the first is shared between all resources and the second is specific to pods.
In the describe section you might have seen the labels field was set to none.
kubectl describe pod mypod | more
Labels are key-value pairs that identify resource attributes, for example the application tier, whether it is front-end or back-end, or a region such as us-east or us-west. In addition to providing meaningful identifying information, labels are used to make selections in Kubernetes. For example, you could tell kubectl to get only resources in the us-west region.
In the 1.3 manifest a label is added to identify the type of app the pod is part of. We’re using nginx as a web server so the label value is webserver. You can have multiple labels but one is enough for this example.
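The label would appear under metadata, roughly like this. The label key "app" is an assumption for illustration; the value webserver comes from the lesson:

```yaml
metadata:
  name: mypod
  labels:
    app: webserver   # usable in selections, e.g. kubectl get pods -l app=webserver
```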
Quality of Service Classes
The last point I want to make in this lesson relates to how Kubernetes can schedule pods based on their resource requests. The pods that we have seen so far didn’t set any resource requests. That makes them easier to schedule because the scheduler doesn’t need to find nodes with the requested amounts of resources. It will simply schedule them onto any node that isn’t under pressure or starved for resources. However, these pods will be the first to be evicted if a node comes under pressure and needs to free up resources.
That’s called the BestEffort quality of service class, which was displayed in the describe output. BestEffort pods can also create resource contention with other pods on the same node, so it is usually a good idea to set resource requests. In the 1.4 yaml file, I have set a resource request and limit for the pod’s container. The request sets the minimum required resources to schedule the pod onto a node, and the limit is the maximum amount of resources you want the node to ever give the pod. You can set resource requests and limits for each container. There is also support for requesting amounts of local disk using the ephemeral-storage resource.
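A hedged sketch of how requests and limits look in a container spec. The specific quantities are illustrative, not taken from the course file; setting requests equal to limits is what produces the Guaranteed QoS class seen in the describe output:

```yaml
spec:
  containers:
    - name: mypod            # container name is an assumption
      image: nginx:latest
      resources:
        requests:            # minimum the scheduler must find on a node
          cpu: 100m          # 100 millicores, one tenth of a core
          memory: 128Mi
        limits:              # maximum the node will ever give the container
          cpu: 100m
          memory: 128Mi      # requests == limits yields Guaranteed QoS
```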
When we create this pod
kubectl delete pod mypod
kubectl create -f 1.4
The pod will be guaranteed the resources you requested or it won’t be scheduled until those resources are available.
kubectl describe pod mypod
In the describe output, you can see this pod is in the Guaranteed quality of service class. You need to do some benchmarking to configure reasonable requests and limits, but the effort is well worth it to ensure your pods have the resources they need and to best utilize the resources in the cluster, which is one of the reasons you are using containers in the first place. For the rest of this course we will use BestEffort pods, since we won’t have specific resource requirements in mind. This is not something you should do in production environments.
We covered a lot in this lesson, so let’s review.
Pods are the basic building block in Kubernetes and contain one or more containers.
You declare pods and other resources in manifest files. All manifests share an apiVersion, kind, and metadata.
Metadata must include a name but labels are usually a good idea to help organize resources. Manifests also include a spec to configure the unique parts of each resource kind.
Pod specs include the list of containers; each container must specify a name and image, and it is often useful to set resource requests and limits. We will see more pod spec fields in later lessons.
In the next lesson we will make the web server running in the pod accessible from the lab VM.
About the Author
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.