This Administering Kubernetes Clusters course covers the many networking and scheduling objectives of the Certified Kubernetes Administrator (CKA) exam curriculum.
You will learn a range of core practices such as ninja kubectl skills, controlling where pods are scheduled, managing resources for long-lasting production environments, and controlling access to applications in a cluster.
This is a six-part course that includes four lectures. If you are not familiar with Kubernetes, we recommend completing the Introduction to Kubernetes course and the Deploy a Stateless Application in a Kubernetes Cluster Lab before taking this course.
Learning Objectives
- Learn pro tips on how to use kubectl effectively. What you learn here will be useful for administering a cluster and for using Kubernetes in general.
- Learn how to attract pods to, or repel them from, nodes and other pods. You can ensure pods run on the nodes where they are intended to run, and achieve other objectives such as high availability by distributing pods across nodes.
- Learn to think about using Kubernetes for the long term when you need to consider how you’ll manage and update resources.
- Learn how to control internal and external access to applications running in a Kubernetes cluster.
Intended Audience
- Anyone who is interested in Kubernetes cluster administration, although many parts of this course will appeal to a broader audience of Kubernetes users.
- Individuals who may benefit from taking this course include System Administrators, DevOps Engineers, Cluster Administrators, and Kubernetes Certification Examinees.
Prerequisites
To get the most from this course, you should have:
- Knowledge of the core Kubernetes resources, including pods and deployments.
- Experience using the kubectl command-line tool to work with Kubernetes clusters.
- An understanding of the YAML and JSON file formats. You'll probably already have this skill if you have the prior two; when working with Kubernetes, it won't take long until YAML files make an appearance.
Update - As of kubectl version 1.18, the kubectl run command can no longer be used to create deployments. kubectl create deployment or manifest files can be used as alternatives. Also, the --export option of kubectl get is no longer supported, resulting in functional, but more verbose, output manifests.
JSONPath Support in Kubernetes: https://kubernetes.io/docs/reference/kubectl/jsonpath/
Speaker 1: Welcome back. This is the first lesson where we'll get hands-on with administering a Kubernetes cluster. This and the remaining lessons will focus on showing you firsthand how to accomplish different tasks at the command line. If you spend any amount of time working with a Kubernetes cluster, you'll probably be issuing a lot of kubectl commands. This lesson is intended to give you some tips to increase your efficiency when working with kubectl.
This lesson will demonstrate enabling auto completions for kubectl, or "kube control", to up your productivity at the command line, how to get the most out of kubectl's get command, quick ways to generate resource manifest files with kubectl, and how to use kubectl to give you information about resource specification fields in manifest files.
I'll be using a Kubernetes cluster I've stood up on Linux nodes running in AWS. The cluster is the same as the ones you use in Cloud Academy Lab environments and is also very similar to the clusters in Kubernetes certification exam environments. You can follow along using any type of cluster; single-node clusters spun up using minikube, or by enabling Kubernetes in Docker for Mac or Docker for Windows, will work just fine for this lesson. Now let's get started.
Any time you'll be using more than a few commands with kubectl, you will probably enjoy having command completions enabled. It is not very difficult to do, but if you ever forget, kubectl can tell you how. All you really need to remember is that entering kubectl by itself lists all of the available commands, and completion is one of them.
To display the commands for enabling completions for different systems and shells, add the --help option to the completion command. You can also use -h as a short form instead of spelling out --help.
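As a rough sketch of the commands being described here:

```sh
# Entering kubectl by itself lists all available commands, including completion
kubectl

# Show the instructions for enabling completions on different systems and shells
kubectl completion --help
kubectl completion -h    # short form
```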
With kubectl, it is quite common to have examples in command help pages. It is tempting to hop over to your favorite search engine when you forget how to do something, but kubectl has a lot of answers, as long as you know how to get at them.
I'm using Linux and the bash shell so this source command is what I need to enable completions for the current shell. To have completions enabled automatically every time a shell is created, add the command to your bash profile file. Now you can easily have commands auto completed by pressing tab or list the available commands and options by pressing tab twice if there isn't a single completion for what you've entered.
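A minimal sketch of what that looks like for bash (the profile file path is an assumption; yours may be ~/.bash_profile instead):

```sh
# Enable completions for the current shell only
source <(kubectl completion bash)

# Enable completions automatically in every new shell
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```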
If I enter kubectl followed by tab twice, the available commands for kubectl are displayed. If I type g followed by tab, the only command starting with g, get, is completed. Completions will save you a fair amount of time and prevent typos. This is always useful, but especially if you are taking one of the time-limited Kubernetes certification exams.
The get command is your go-to command for displaying resources in Kubernetes. I'll press tab twice to show the resources that can be shown with get. Let's say we're interested in nodes; I'll type nodes to display all the nodes in the cluster.
With completions enabled, it is easy to enter the names of resources with some tab magic, but you can also make use of the short names for resources. To list the short names for resources, you can enter kubectl api-resources. The first column lists the full name and the second lists the short name, if there is one. So if you aren't a fan of typing nodes, feel free to simply enter no. It's even more beneficial considering the length of some resources, such as certificatesigningrequests, which can be compressed down to csr, a whopping savings of 23 characters.
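For reference, a sketch of the commands in question:

```sh
# List resource types along with their short names
kubectl api-resources

# These pairs are equivalent
kubectl get nodes
kubectl get no

kubectl get certificatesigningrequests
kubectl get csr
```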
To get a look at all the pods in a cluster, I'll use the --all-namespaces option of the get command. You can also use the short name po for pods. Only pods in the kube-system namespace are running because this is a fresh cluster. There are still quite a few pods running, and it doesn't take long before there can be significantly more.
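Something like the following is being run here:

```sh
# List pods across every namespace (po is the short name for pods)
kubectl get pods --all-namespaces
kubectl get po --all-namespaces
```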
Besides selecting a specific namespace, you can also use labels to filter the output. But how do you know what the labels are, you ask? You can use the --show-labels option for that. An additional LABELS column is appended showing all the labels for each pod. If you're only interested in a subset of the labels, you can use the -L option followed by a comma-separated list of label names. For example, if you're only interested in the k8s-app label, you can use the -L option followed by k8s-app, and the k8s-app column is added. Any resources with a value for the label have that value shown in the k8s-app column.
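A sketch of those two label-viewing options:

```sh
# Append a LABELS column showing every label on each pod
kubectl get pods -n kube-system --show-labels

# Show only the k8s-app label as its own column
kubectl get pods -n kube-system -L k8s-app
```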
If you're only interested in seeing resources with the label defined, you can use the lowercase -l option. The lowercase -l is how you filter the output. If you only want the resources with a specific value of a label, you can specify the value after an equal sign. For example, to only show the kube-proxy pods, you'd enter k8s-app=kube-proxy.
Likewise, you can add an exclamation mark before the equal sign to show all resources not matching the label value. Here, the pods that don't have the k8s-app label defined show up again, because not having the label defined is a match for not having a specific value of the label.
To hide the pods that don't have the label defined, just add the label name after a comma. You can join as many label queries as you like by joining them with commas.
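Put together, the label selector examples from this section look roughly like this:

```sh
# Only pods that define the k8s-app label
kubectl get pods -n kube-system -l k8s-app

# Only the kube-proxy pods
kubectl get pods -n kube-system -l k8s-app=kube-proxy

# Pods not matching the value, including pods that don't define the label at all
kubectl get pods -n kube-system -l 'k8s-app!=kube-proxy'

# Pods not matching the value that also define the label
kubectl get pods -n kube-system -l 'k8s-app!=kube-proxy,k8s-app'
```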
While we're at it, the --sort-by option comes in handy for organizing the output of the get command. You can sort by the value of fields in a resource's manifest. For example, if you want to sort by the age of the pods, you can sort by metadata.creationTimestamp. You can verify that the age column is now sorted.
The field that you give to --sort-by is specified using a JSON path. It can be read as sorting by the metadata object's creationTimestamp field. Although that works, for more complicated JSON path expressions it is a good idea to wrap the path in single quotes and braces to avoid shell substitutions, and to start the path with a period to represent a child of the root JSON object.
This is a better way to specify the JSON path. When writing JSON paths, the root object is actually represented with a dollar sign, but here the dollar sign can be omitted because the expressions are always children of the root object.
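Both forms of the sort, as a sketch:

```sh
# Bare path form
kubectl get pods -n kube-system --sort-by=metadata.creationTimestamp

# Safer form: quoted, wrapped in braces, starting with a dot
kubectl get pods -n kube-system --sort-by='{.metadata.creationTimestamp}'
```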
You might be wondering how I came up with the metadata.creationTimestamp path to sort by age. You can use the output format option of get to list all the fields in a resource. The output formats for entire resources can be either JSON or YAML. YAML is more compact, so I'll be using that.
Here's an example to output a pod in YAML output format. Notice the --output=yaml option. You can also use -o as a short form for --output.
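For example (kube-proxy-abcde is a placeholder; substitute one of your own pod names):

```sh
# Output a pod's full manifest as YAML
kubectl get pod kube-proxy-abcde -n kube-system --output=yaml

# Short form
kubectl get pod kube-proxy-abcde -n kube-system -o yaml
```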
With the --sort-by option, you can sort by any numeric or string value field you find in the output. If you wanted to sort by the podIP address, which is treated as a string, you would give the path .status.podIP. Here's how the get command would look sorting by podIP.
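A sketch of that command:

```sh
# Sort pods by their IP address (compared as strings)
kubectl get pods -n kube-system --sort-by='{.status.podIP}'
```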
You can trust the sort is performed correctly, but how could you verify it? There is another output format that gives additional information depending on the type of resource: the wide format. For pods, the wide format includes the pod's IP. Here you can verify that the output is indeed sorted by the value in the IP column, treating the values as strings.
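Adding the wide output format to the same command:

```sh
# Wide output includes an IP column, so the sort order can be verified
kubectl get pods -n kube-system -o wide --sort-by='{.status.podIP}'
```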
There is one other output format I want to mention, although there are several more. You've actually used this type of format before. It is the jsonpath format. You can use a JSON path expression to describe what you want to output.
Let's try to output the podIP using the same JSON path expression for the output format. All the output disappeared. There must be something wrong with that JSON path expression. To use JSON path output effectively, you need to understand when the get command is returning a specific resource or a list of resources.
If you specify the name of a resource, for example the name of a pod, then get will return only that specific pod. In all other cases where you don't specify a specific resource, a list will be returned. When a list is returned, the JSON array that contains all the resources is named items. In our case, no specific pod is identified, so the items array needs to be included in the output format JSON path expression. Notice that you need to use square brackets to index the array. The asterisk is a wild card, meaning all of the items in the array.
The output is not as tidy as it is with the YAML or wide output format. To clean it up, you can use a more complex expression that iterates over the items in the array to also include the name of the pod and add new lines. The expression takes some time to understand and it is only included to show you that it is possible to include more than single fields in JSON path output.
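The transcript doesn't show the exact expressions used, but they would look something like this:

```sh
# Without a specific pod name, get returns a list, so the path must go through .items
kubectl get pods -n kube-system -o jsonpath='{.items[*].status.podIP}'

# A more involved expression that prints each pod's name and IP on its own line
kubectl get pods -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
```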
For more information about JSON path expressions, see the link to JSON path support in Kubernetes in the transcript of this video. Those tips are really good for viewing what is already in the cluster.
Now it's time to shift the focus to creating new resources in the cluster. The create command is your friend for that. The --filename option, or -f in short form, allows you to create a resource, or multiple resources, from a manifest file or a directory containing manifest files. There are also several subcommands for creating different types of resources without having to use a manifest file. To see them, just view the create help page.
It's usually better to use manifest files so that you can version control your configuration and practice configuration as code. So why did I mention these shortcuts? You can use the subcommands to generate manifest files when paired with the --dry-run and output format options. A dry run will stop any resource from actually being created, and if you set the output to YAML, the output is an equivalent manifest file for the create command you enter.
Let's try it out with a namespace. Here I've used create to generate a manifest file for a namespace named tips. I'll redirect that output to a file in a tips directory. The create command is going to create the resources in filename order, so to ensure the namespace is created first, I've used the number 1 in the file name to force the order.
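A sketch of that, assuming the file is named 1-namespace.yaml (the exact file name isn't shown in the transcript); note that kubectl 1.18+ expects --dry-run=client where older versions accepted plain --dry-run:

```sh
mkdir -p tips

# Generate a manifest for the tips namespace without creating anything
kubectl create namespace tips --dry-run=client -o yaml > tips/1-namespace.yaml
```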
The --dry-run option is available for other commands that create resources as well. Let's say you wanted to create an nginx deployment. You can use the run command to generate a manifest file using the options you provide at the command line. Here, I've set the image to nginx, published container port 80, set the number of replicas to 2, and exposed the deployment with a service using the --expose option. The service will use the default type of ClusterIP, and the service port will be the same as the container port.
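As recorded, the command would have looked roughly like the first form below; per the update note above, kubectl 1.18+ no longer creates deployments with run, so a rough equivalent using create is also sketched (flag support varies by kubectl version):

```sh
# Older kubectl (< 1.18): generate a deployment plus a ClusterIP service in one go
kubectl run nginx --image=nginx --port=80 --replicas=2 --expose --dry-run -o yaml

# Newer kubectl: generate the deployment and service manifests separately
kubectl create deployment nginx --image=nginx --replicas=2 --port=80 \
  --dry-run=client -o yaml
kubectl create service clusterip nginx --tcp=80:80 --dry-run=client -o yaml
```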
If any of those are not what you want, it's now very easy to edit the fleshed out manifest file to customize it as you like. We'll discuss services more later in this course.
For now, let's say we are happy with the defaults, except we want to put the resources in the tips namespace. I'll redirect the output to a file prefixed with 2 so that the resources are created after the namespace. Then I'll add the namespace to the metadata.
I'll use vi for this, which is an alias for Vim on my system. vi is also the default editor for kubectl edit, which we will see later on in this course. You can use whatever editor you're comfortable with. If you are preparing for a Kubernetes certification exam, you should be comfortable with a command-line editor to save time copying and pasting into the exam notepad area. To learn how to become an expert at Vim, I'd recommend entering vimtutor in a Mac or Linux shell to go through a series of lessons starting from scratch.
Now to set the resources' namespace fields. The resources will now be created in the tips namespace. To create all the resources, I'll use kubectl create -f and specify the tips directory. Other commands, such as delete, support the same pattern of specifying a directory. When you're finished with those resources, you can simply use the same option with the delete command. Now all the resources specified in those manifests have been deleted.
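A sketch of the create and delete commands against the directory:

```sh
# Create everything described by the manifests in the tips directory
# (files are processed in filename order, so the namespace prefixed with 1 goes first)
kubectl create -f tips/

# Delete all of the same resources when you're finished with them
kubectl delete -f tips/
```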
One last technique for quickly creating manifests is to use the get command to modify manifests from existing resources. That might be an obvious technique, but there's an option to help strip out any cluster-specific information that you don't want to be present in a manifest file, such as the pod status and creation time.
As an example, if I get the YAML output for a kube-proxy pod and count the number of lines in the output, there are 133 lines of YAML, whereas with the --export option there are 92. In this case, --export automatically removed 41 lines of YAML for you.
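A sketch of that comparison (kube-proxy-abcde is a placeholder pod name, and --export has since been removed from newer kubectl releases, as noted in the update above):

```sh
# Count the lines of YAML with and without --export
kubectl get pod kube-proxy-abcde -n kube-system -o yaml | wc -l
kubectl get pod kube-proxy-abcde -n kube-system -o yaml --export | wc -l
```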
Now, to close out the lesson, I have one last tip. Earlier when I was explaining how to craft JSON path --sort-by expressions, I said you could use the YAML or JSON output of the get command to show all the fields of resources. There is actually another way, and it's very useful. It can help you with customizing generated manifests as well. It not only gives you the field names of paths, but also tells you the purpose of each field and other useful information. All of that goodness is bundled up in the kubectl explain command.
There are a couple of ways to use explain. They both require you to specify a simple path that is similar to a JSON path, but you give the kind of resource first, without a leading dot, and follow it with the field path that you are interested in.
For example, if you want to see the top-level fields of a pod resource, you enter kubectl explain pod, and the output gives you a description of a pod and the top-level fields in a pod. If you wanted to dive into the details of a field that's further down in the hierarchy, let's say a pod spec's containers' resources field, you just join the fields with dots, in this case pod.spec.containers.resources. You can traverse the fields up and down in this fashion to understand what fields are used for, and if you want to see examples, you can navigate to the provided info links when available.
The other way to use explain is to display the entire schema of a field or resource by using the --recursive option. For example, to see all the fields in a pod's containers field along with their types, you can enter kubectl explain pod.spec.containers and specify the --recursive option.
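The explain commands from this section, as a sketch:

```sh
# Describe a pod and its top-level fields
kubectl explain pod

# Drill down into a nested field
kubectl explain pod.spec.containers.resources

# Print the full schema of a field, including all nested fields and their types
kubectl explain pod.spec.containers --recursive
```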
Explain can save you quite a few trips to your search engine of choice when writing resource manifests. During Kubernetes certification exams, you won't be able to use search engines, so it's important to know how to get the most out of kubectl for exams as well.
All right, that brings us to the end of this lesson. I wanted to cover these tips first so that you know how to answer questions you might have later, such as how to quickly create manifest files, and how to remind yourself of the purpose of different fields. Taking it from the top, we saw how kubectl completions help you write lengthy commands and prevent spelling mistakes. Remember that if you don't know the exact syntax for enabling completions, the kubectl completion help page has you covered.
Then we saw how to use labels to filter, and the --sort-by option to sort, the output of the get command to effectively view resources in a Kubernetes cluster. We finished with some techniques for quickly generating manifest files by outputting YAML from dry run commands and by exporting resources with the get command. The explain command can help you customize manifest files by explaining the schema and the purpose of different fields.
In the next lesson, we will dive into the different tools you have available to you for controlling where pods are scheduled in the cluster. Continue on to the next lesson to learn more about pod scheduling.
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Security Specialist (CKS), Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.