With many enterprises making the shift to containers, and with Kubernetes the leading platform for deploying containerized applications, learning Kubernetes patterns is more essential than ever for application developers. Having this knowledge across all teams yields the best results, for developers in particular.
This Kubernetes Patterns for Application Developers Course covers many of the configuration, multi-container pods, and services & networking objectives of the Certified Kubernetes Application Developer (CKAD) exam curriculum.
Help prepare for the CKAD exam with this course, which comprises six expertly instructed lectures.
Learning Objectives
- Understand and use multi-container patterns to get the most out of Kubernetes Pods
- Learn how to control network access to applications running in Kubernetes
- Understand how Kubernetes Service Accounts provide access control for Pods
- Use the Kubernetes command-line tool kubectl to effectively overcome challenges you may face when working with Kubernetes
Intended Audience
This course is intended for application developers who are leveraging containers and using, or considering using, Kubernetes as a platform for deploying applications. However, significant parts of this course appeal to a broader audience of Kubernetes users. Individuals who may benefit from taking this course include:
- Application Developers
- DevOps Engineers
- Software Testers
- Kubernetes Certification Examinees
Prerequisites
To successfully navigate this course, we recommend that you have:
- Knowledge of the core Kubernetes resources including Pods, and Deployments
- Experience using the kubectl command-line tool to work with Kubernetes clusters
- An understanding of YAML and JSON file formats
Related Training Content
This course is part of the CKAD Exam Preparation Learning Path.
Update: As of kubectl version 1.18, the kubectl run command can no longer be used to create Deployments; kubectl create deployment or manifest files can be used as alternatives. Also, the --export option of kubectl get is no longer supported, resulting in functional but more verbose output manifests.
Welcome back. If you spend any amount of time working with a Kubernetes cluster, you'll probably be issuing a lot of kubectl commands. This is certainly the case during Kubernetes certification exams, and kubectl has features that can help you overcome many challenges you might face. This lesson is intended to give you some tips to increase your efficiency and get the most out of kubectl. It will demonstrate enabling auto-completion for kubectl to up your productivity at the command line, how to get the most out of kubectl's get command, quick ways to generate resource manifest files with kubectl, and how to use kubectl to give you information about resource specification fields in manifest files.
I'll be using a Kubernetes cluster I've stood up on Linux nodes running in AWS. The cluster is the same as the one you use in Cloud Academy lab environments, and is also very similar to clusters in Kubernetes certification exam environments. You can follow along using any type of cluster, however; single-node clusters spun up using Minikube, or by enabling Kubernetes in Docker Desktop for Mac or Windows, will work just fine for this lesson. Now let's get started. Anytime you will be using more than a few commands with kubectl, you will probably enjoy having command completion enabled. It is not very difficult to do, but if you ever forget, kubectl can tell you how. All you really need to remember is that entering kubectl by itself lists all of the available commands, and completion is one of them. To display the commands for enabling completion on different systems and shells, add the help option to the completion command; you can also use -h as a short form instead of spelling out --help.
With kubectl, it is quite common to have examples in command help pages. It is tempting to hop over to your favorite search engine when you forget how to do something, but kubectl has a lot of answers, as long as you know how to get at them. I'm using Linux and the bash shell, so this source command is what I need to enable completion for the current shell. To have completion enabled automatically every time a shell is created, add the command to your .bash_profile file. Now you can easily have commands auto-completed by pressing Tab, or list the available commands and options by pressing Tab twice if there isn't a single completion for what you've entered. If I enter kubectl followed by Tab twice, the available commands for kubectl are displayed. If I type g, followed by Tab, the only command starting with a g, get, is completed. Completions will save you a fair amount of time and prevent typos. This is always useful, but especially if you are taking one of the time-limited Kubernetes certification exams. The get command is your go-to command for displaying resources in Kubernetes. I'll press Tab twice to show the resources that can be shown with get. Let's say we're interested in nodes; I'll enter nodes to display all the nodes in the cluster.
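The completion workflow described above can be sketched as follows for bash on Linux; the exact commands for your system and shell are shown by kubectl completion -h:

```shell
# List all available kubectl commands (completion is among them)
kubectl

# Show per-shell instructions for enabling completion
kubectl completion -h

# Enable completion in the current bash shell
source <(kubectl completion bash)

# Enable completion automatically in every new shell
echo 'source <(kubectl completion bash)' >> ~/.bash_profile
```

On macOS with zsh, substitute zsh for bash and ~/.zshrc for ~/.bash_profile.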
With completions enabled, it is easy to enter the names of resources with some Tab magic, but you can also make use of the short names for resources. To list the short names for resources, you can enter kubectl api-resources. The first column lists the full name and the second lists the short name, if there is one. So if you aren't a fan of typing nodes, feel free to simply enter no. Short names are even more beneficial for longer resource names, such as certificatesigningrequests, which can be compressed down to csr, a whopping savings of 23 ASCII characters. To get a look at all the pods in a cluster, I'll use the all-namespaces option of the get command; you can also use the short name po for pods. Only pods in the kube-system namespace are running because this is a fresh cluster. There are still quite a few pods running, and it doesn't take long before there can be significantly more. Besides selecting a specific namespace, you can also use labels to filter the output. But how do you know what the labels are, you ask? You can use the show-labels option for that.
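The short-name commands above look like this in practice:

```shell
# List every resource kind along with its short name, if any
kubectl api-resources

# These two commands are equivalent
kubectl get nodes
kubectl get no

# List pods in all namespaces, using the short name po
kubectl get po --all-namespaces
```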
An additional labels column is appended showing all the labels for each pod. If you are only interested in a subset of the labels, you can use the -L option followed by a comma-separated list of label names. For example, if you are only interested in the k8s-app label, you can use the -L option followed by k8s-app, and the k8s-app column is added. Any resources with a value for the label have that value shown in the k8s-app column. If you are only interested in seeing resources with the label defined, you can use the lowercase -l option. The lowercase -l is how you filter the output, and if you only want the resources with a specific value of a label, you can specify the value after an equals sign. For example, to only show the kube-proxy pods, you'd enter k8s-app=kube-proxy.
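A sketch of the label display and filtering options just described, using the kube-system namespace from the demo:

```shell
# Append a LABELS column showing all labels on each pod
kubectl get pods -n kube-system --show-labels

# Show only the k8s-app label, as its own column
kubectl get pods -n kube-system -L k8s-app

# Filter: only pods that have the k8s-app label defined
kubectl get pods -n kube-system -l k8s-app

# Filter: only pods where k8s-app equals kube-proxy
kubectl get pods -n kube-system -l k8s-app=kube-proxy
```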
Likewise, you can add an exclamation mark before the equals sign to show all resources not matching the label value. Here, the pods that don't have the k8s-app label defined show up again, because not having the label defined is a match for not having a specific value of the label. To hide the pods that don't have the label defined, just add the label name after a comma; you can join as many label queries as you like by joining them with commas. While we're at it, the sort-by option comes in handy for organizing the output of the get command. You can sort by the value of fields in the resource's manifest. For example, if you want to sort by the age of the pods, you can sort by metadata.creationTimestamp, and you can verify that the age column is now sorted. The field that you give to sort-by is specified using a json path; it can be read as sorting by the metadata object's creationTimestamp field.
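The negation and sorting steps above can be sketched as:

```shell
# Pods whose k8s-app label is NOT kube-proxy
# (this also matches pods without the label at all)
kubectl get pods -n kube-system -l 'k8s-app!=kube-proxy'

# Join label queries with a comma to also require the label be defined
kubectl get pods -n kube-system -l 'k8s-app!=kube-proxy,k8s-app'

# Sort pods by age using the creation timestamp field
kubectl get pods -n kube-system --sort-by=metadata.creationTimestamp
```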
Although that works, for more complicated json path expressions it is a good idea to wrap the path in single quotes and braces to avoid shell substitutions, and to start the path with a period to represent a child of the root json object; this is a better way to specify the json path. When writing json paths, the root object is actually represented with a dollar sign, but here the dollar sign can be omitted because the expressions are always children of the root object. You might be wondering, how did I come up with the metadata.creationTimestamp path to sort by age? You can use the output format option of get to list all the fields in a resource. The output formats for entire resources can be either json or yaml; yaml is more compact, so I'll be using that. Here's an example to output a pod in the yaml output format. Notice the output=yaml option; you can also use -o as a short form for output. With the sort-by option, you can sort by any numeric or string value field you find in the output. If you wanted to sort by the pod IP address, which is treated as a string, you would give the path .status.podIP; here's how the get command would look sorting by pod IP. You can trust the sort is performed correctly, but how can you verify it? There's another output format that gives additional information dependent on the type of resource: the wide format. For pods, the wide format includes the pod's IP, and with it you can verify that the output is indeed sorted by the value in the IP column, treating the values as strings. There is one other output format I want to mention, although there are several more. You've actually used this type of format before; it is the json path format. You can use a json path expression to describe what you want to output.
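A sketch of the quoting, field-discovery, and verification steps; the pod name is illustrative, so substitute one from your own cluster:

```shell
# Preferred form: wrap the json path in single quotes and braces,
# starting with a dot to represent a child of the root object
kubectl get pods -n kube-system --sort-by='{.metadata.creationTimestamp}'

# Inspect all fields of a pod to discover paths worth sorting by
# (kube-proxy-abcde is a placeholder pod name)
kubectl get pod kube-proxy-abcde -n kube-system -o yaml

# Sort by pod IP (compared as strings) and verify with the wide
# output format, which includes an IP column for pods
kubectl get pods -n kube-system --sort-by='{.status.podIP}' -o wide
```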
Let's try to output the pod IP using the same json path expression for the output format. Hmm, all the output disappeared; there must be something wrong with that json path expression. To use json path output effectively, you need to understand when the get command is returning a specific resource or a list of resources. If you specify the name of a resource, for example the name of a pod, then get will return only that specific pod. In all other cases, where you don't specify a specific resource, a list will be returned. When a list is returned, the json array that contains all the resources is named items. In our case, no specific pod is identified, so the items array needs to be included in the output format json path expression. Notice that you need to use square brackets to index the array; the asterisk is a wildcard meaning all of the items in the array. The output is not as tidy as it is with the yaml or wide output format. To clean it up, you can use a more complex expression that iterates over the items in the array to also include the name of the pod and add new lines.
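The single-resource versus list distinction can be sketched as follows; the pod name is again a placeholder:

```shell
# Works: a specific pod is named, so get returns a single pod object
kubectl get pod kube-proxy-abcde -n kube-system \
  -o jsonpath='{.status.podIP}'

# A bare 'get pods' returns a list, so the path must go through the
# items array; * is a wildcard matching every element
kubectl get pods -n kube-system \
  -o jsonpath='{.items[*].status.podIP}'

# Tidier: iterate over the items, printing each name, IP, and a newline
kubectl get pods -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
```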
The expression takes some time to understand, and it is only included to show you that it is possible to include more than single fields in json path output. For more information about json path expressions, see the link to json path support in Kubernetes in the transcript of this video. Those tips are really good for viewing what is already in the cluster; now it's time to shift the focus to creating new resources in the cluster. The create command is your friend for that. The filename option, or -f in short form, allows you to create a resource or multiple resources from a manifest file or a directory containing manifest files. There are also several sub-commands for creating different types of resources without having to use a manifest file; to see them, just view the create help page. It's usually better to use manifest files so that you can version control your configuration and practice configuration as code. So why did I mention these shortcuts? You can use the sub-commands to generate manifest files when paired with the dry-run and output format options. A dry run stops any resource from actually being created, and if you set the output to yaml, the output is an equivalent manifest file for the create command you enter. Let's try that with a namespace.
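The manifest-generation pattern can be sketched as follows; note that kubectl 1.18 and later expect --dry-run=client rather than the bare --dry-run flag used in older releases:

```shell
# See the create sub-commands that work without a manifest file
kubectl create -h

# Generate a namespace manifest instead of creating the resource:
# a client-side dry run plus yaml output
kubectl create namespace tips --dry-run=client -o yaml
```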
Here, I've used create to generate a manifest file for a namespace named tips, and I'll redirect that output to a file in a tips directory. The create command is going to create the resources in filename order, so to ensure the namespace is created first, I've used the number one in the name to force the order. The dry-run option is available for other commands that create resources as well. Let's say you want to create an nginx deployment. You can use the run command to generate a manifest file using the options you provide at the command line. Here, I've set the image to nginx, published container port 80, set the number of replicas to two, and exposed the deployment with a service using the expose option. The service will use the default type of ClusterIP, and the service port will be the same as the container port. If any of those are not what you want, it's now very easy to edit the fleshed-out manifest file to customize it as you like. We'll discuss services later in this course. For now, let's say we are happy with the defaults, except we want to put the resources in the tips namespace. I'll redirect the output to a file prefixed with two, so that the resources are created after the namespace; then I'll add the namespace to the metadata. I'll use vi for this, which is an alias for Vim on my system. vi is also the default editor for kubectl edit, which we will see later on in this course. You can use whatever editor you're comfortable with, but if you are preparing for a Kubernetes certification exam, you should be comfortable with a command-line editor to save time copying and pasting into the exam notepad area. To learn how to become an expert at Vim, I'd recommend entering vimtutor in a Mac or Linux shell to go through a series of lessons starting from scratch.
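A sketch of the workflow just described. Because kubectl run no longer creates Deployments as of version 1.18, the current equivalent uses create deployment (the --replicas and --port flags assume a reasonably recent kubectl):

```shell
# Save the generated namespace manifest; the "1-" prefix forces
# the namespace to be created first (filename order)
mkdir -p tips
kubectl create namespace tips --dry-run=client -o yaml > tips/1-namespace.yaml

# Pre-1.18 run command, as used in the video:
# kubectl run nginx --image=nginx --port=80 --replicas=2 --expose \
#   --dry-run -o yaml > tips/2-deployment.yaml

# A current equivalent for the deployment portion:
kubectl create deployment nginx --image=nginx --replicas=2 --port=80 \
  --dry-run=client -o yaml > tips/2-deployment.yaml

# Then edit the file (e.g. with vi) to add "namespace: tips"
# under metadata before creating the resources
```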
And now, to set the resources' namespace field. The resources will now be created in the tips namespace. To create all the resources, I'll use kubectl create -f and specify the tips directory. Other commands, such as delete, support the same pattern of specifying a directory, so when you're finished with those resources, you can simply use the same option with the delete command, and now all the resources specified in those manifests have been deleted. One last technique for quickly creating manifests is to use the get command to modify manifests from existing resources. That might be an obvious technique, but there is an option to help strip out any cluster-specific information that you don't want to be present in a manifest file, such as the pod status and creation time. As an example, if I get the yaml output for a kube-proxy pod and count the number of lines in the output, there are 133 lines of yaml, whereas with the export option, there are 92; in this case, export automatically removed 41 lines of yaml for you. Now to close out the lesson, I have one last tip. Earlier when I was explaining how to craft sort-by json path expressions, I said you could use the yaml or json output of the get command to show all the fields of resources.
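These directory-based operations can be sketched as follows; the export example only works on kubectl versions before 1.18, since --export has since been removed (the pod name is a placeholder):

```shell
# Create every resource defined in the tips directory, in filename order
kubectl create -f tips/

# The same pattern works in reverse for cleanup
kubectl delete -f tips/

# Pre-1.18 only: strip cluster-specific fields (status, creation
# timestamp, and so on) from an existing resource's manifest
kubectl get pod kube-proxy-abcde -n kube-system -o yaml --export
```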
There is actually another way, and it's very useful; it can help you with customizing generated manifests as well. It not only gives you the field names and paths, but also tells you the purpose of each field and other useful information. All of that goodness is bundled up into the kubectl explain command. There are a couple of ways to use explain. Both require you to specify a simple path that is similar to a json path, but you give the kind of resource first, without a leading dot, and follow it with the field path that you are interested in. For example, if you want to see the top-level fields of a pod resource, you enter kubectl explain pod, and the output gives you a description of a pod and the top-level fields in a pod. If you want to dive into the details of a field that's further down in the hierarchy, let's say a pod spec's container resources field, you just join the fields with dots, in this case pod.spec.containers.resources. You can traverse the fields up and down in this fashion to understand what fields are used for, and if you want to see examples, you can navigate to the provided info links when available. The other way to use explain is to output the entire schema of a field or resource by using the recursive option.
For example, to see all the fields in a pod's containers field along with their types, you can enter kubectl explain pod.spec.containers and specify the recursive option. Explain can save you quite a few trips to your search engine of choice when writing resource manifests. During Kubernetes certification exams, you won't be able to use search engines, so it's important to know how to get the most out of kubectl for exams as well. All right, that brings us to the end of this lesson. There were quite a few tips in there, and with them you should feel confident about using kubectl to overcome any challenges you might have, such as generating manifest files or explaining specification fields. Taking it from the top, we saw how kubectl completions help write lengthy commands and prevent spelling mistakes. Remember that if you don't know the exact syntax for enabling completions, the kubectl completion help page has you covered.
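The explain commands from this section, collected in one place:

```shell
# Describe a pod resource and its top-level fields
kubectl explain pod

# Drill into a nested field by joining field names with dots
kubectl explain pod.spec.containers.resources

# Print the entire schema beneath a field, with field types
kubectl explain pod.spec.containers --recursive
```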
Then you saw how to use labels to filter, and the sort-by option to sort, the output of the get command to effectively view resources in a Kubernetes cluster. We finished with some techniques for quickly generating manifest files: outputting yaml from dry-run commands and exporting manifests from existing resources with the get command. The explain command can help you customize those manifest files by explaining the schema and the purpose of different fields. We'll wrap up the course in the next lesson; continue on when you're ready.
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Security Specialist (CKS), Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.