The course is part of these learning paths
This Administering Kubernetes Clusters course covers the many networking and scheduling objectives of the Certified Kubernetes Administrator (CKA) exam curriculum.
You will learn a range of core practices, such as ninja kubectl skills, the ability to control where pods are scheduled, how to manage resources for long-lasting production environments, and how to control access to applications in a cluster.
This course is made up of four lectures. If you are not familiar with Kubernetes, we recommend completing the Introduction to Kubernetes course and the Deploy a Stateless Application in a Kubernetes Cluster Lab before taking this course.
- Pick up pro tips on how to use kubectl effectively. What you learn here will be useful for administering a cluster and for using Kubernetes in general.
- Learn how to attract pods to, or repel pods from, nodes and other pods. You can ensure pods run on the nodes where they are intended to run, and achieve other objectives, such as high availability, by distributing pods across nodes.
- Learn to think about using Kubernetes for the long term when you need to consider how you’ll manage and update resources.
- Learn how to control internal and external access to applications running in a Kubernetes cluster.
- Anyone who is interested in Kubernetes cluster administration. Many parts of this course also appeal to a broader audience of Kubernetes users.
- Individuals who may benefit from taking this course include System Administrators, DevOps Engineers, Cluster Administrators, and Kubernetes Certification Examinees.
To get the most from this course, you should:
- Have knowledge of the core Kubernetes resources, including pods and deployments.
- Have experience using the kubectl command-line tool to work with Kubernetes clusters.
- Have an understanding of the YAML and JSON file formats. You'll probably already have this skill if you have the prior two; when working with Kubernetes, it won't take long until YAML files make an appearance.
Update: As of kubectl version 1.18, the kubectl run command can no longer be used to create deployments. kubectl create deployment or manifest files can be used as alternatives.
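For example, where older kubectl versions used run to create a deployment, you can now use create deployment or generate a manifest instead (a sketch; the nginx image and the deployment name are illustrative):

```shell
# Pre-1.18: this created a Deployment (in 1.18+ it creates a bare Pod instead)
kubectl run nginx --image=nginx

# 1.18 and later: create a Deployment explicitly
kubectl create deployment nginx --image=nginx

# Or generate a manifest file and apply it
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > deployment.yaml
kubectl apply -f deployment.yaml
```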
How apply calculates differences and merges changes:
Speaker 1: As soon as you make the decision to start using Kubernetes as part of your application infrastructure, you need to think about how you will manage resources. Until now, we haven't really had to worry about managing and maintaining resources, since we knew we'd delete them after illustrating a concept with an example. There is a lot more to consider when using Kubernetes in production, and it influences everything, right down to the commands you choose to update resources with in kubectl. This lesson explains the different models for managing and updating resources with kubectl and helps you decide which model to use.
This lesson will explain the three different resource management paradigms provided by kubectl, and demonstrations will illustrate the concepts in practice. You will learn about the trade-offs between the different resource management paradigms and the kubectl commands that you can use to follow each paradigm. In the imperative resource management paradigm, you as the administrator issue commands describing what you want to happen in the cluster. The commands that you use are characterized by being specialized to specific Kubernetes resources for creating and updating.
In kubectl, the imperative commands are named after verbs, allowing you to express what you want without necessarily having to know the resources involved. For example, you don't need to know anything about deployments to understand that the run command will run an application.
Now let's think about some advantages and disadvantages of the imperative paradigm. It is the easiest way to get started with Kubernetes: you don't have to know anything about the kinds of Kubernetes resources or Kubernetes APIs. Some downsides of the imperative model are that there is no embedded history of the changes being made in the cluster. This creates issues within a team and doesn't provide any easy escape mechanism when a change introduces a problem. Another drawback is that there are quite a few different verb-driven commands, and the commands you issue can end up being quite lengthy and error-prone.
In addition, although there are many specialized commands and command line options, they still cannot express all the capabilities of resources, so eventually you need to use more generic commands and learn about Kubernetes resources. It's best to use the imperative model when you need to try something out quickly, and when you're first starting. It is not something that you should use in production.
The second resource management paradigm is imperative with configuration files. The configuration files are manifest files and are usually in YAML format, but can also be written in JSON. Instead of having many specialized commands, like with the imperative paradigm, there are only a few when you use imperative with config files. They are create, replace, and delete. Create is also used in the imperative paradigm, but what makes it different here is that all commands specify a file with the --filename option, which can be shortened to -f.
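The imperative-with-configuration-files workflow might be sketched like this (the file name is illustrative):

```shell
# Create the resources defined in a manifest file
kubectl create -f deployment.yaml

# After editing the file, overwrite the live resource with the file's contents
kubectl replace -f deployment.yaml

# Delete the resources defined in the file
kubectl delete -f deployment.yaml
```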
The advantages of using the imperative model with configuration files are that you can leverage version control systems to track the history of changes and practice configuration as code. This is a big reason for preferring to use manifest files. There are also fewer commands to use and remember. On the disadvantage side, you need to understand resource schemas. You also need to perform the extra step of making files, although the benefits of version control certainly outweigh this drawback. A more sinister drawback is that some resources are partially managed by Kubernetes. For example, load balancer services that have external IPs are managed automatically by the cluster.
If you were to update a resource that is partially managed by the cluster, you run the risk of losing the configuration added by Kubernetes. To avoid this, you have to copy the added configuration into your manifest file before running the update. This same problem can arise if someone used an imperative command to update a resource. The next time you update with the config file, if you don't copy the configuration change to the manifest file, the previous update would be lost. The update mechanism treats the configuration file as the single source of truth and ignores the configuration changes made outside of the configuration file.
Lastly, the commands are not well suited for directories and work best with individual files. If you were to add a new file to a directory and use create to create the resource, you'd see errors for all of the resources that were already created in that directory. It works best with single files for each tightly coupled group of resources. The imperative with configuration files paradigm can be used in production and works best with a small team and a small number of files for each application.
The final resource management paradigm is declarative with configuration files. As the name suggests, manifest files are used to describe resource changes. Where it differs from the imperative with configuration files paradigm is that you don't specify the operation to take. This is the essential feature of declarative paradigms. Instead of specifying to update a given resource, KubeCTL determines what operation is required for each resource by analyzing the differences between the live resource configuration and the contents of the configuration file. The operations to perform are determined at the resource level. This means one command can result in creating, updating, and deleting resources in a file.
Because configuration files are used, you gain the benefits of version control. Another advantage of the declarative model is that you can do everything with a single command. You don't need separate commands for each operation. By analyzing the differences between live configurations and configuration files, the declarative model will preserve changes that aren't captured by configuration files. The declarative model works well with directories since each operation is determined at the resource level. Live resources that match their configuration file do not require any operation at all.
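A minimal sketch of the declarative workflow; a single apply against a directory creates, updates, or leaves resources alone as needed (the directory name is illustrative):

```shell
# kubectl diffs each file in manifests/ against the live cluster state
# and performs whatever operation each resource needs
kubectl apply -f manifests/

# Re-running is safe: resources that already match their files are untouched
kubectl apply -f manifests/
```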
The drawbacks of the declarative with configuration files paradigm are that it requires understanding Kubernetes resources and APIs. That was also true for the imperative with configuration files paradigm, but the declarative model is the most difficult to understand of the three. Most of the time you can trust that it is doing the right thing, but when an issue does occur, it can be difficult to understand what happened. The merge operations that happen behind the scenes are not as simple as always replacing entire resources. The declarative with configuration files paradigm requires the most knowledge of Kubernetes resources and APIs, but it may be the best choice in production when directories of files are required.
It also avoids the risk of configuration loss that was possible with the imperative with configuration file paradigm. It's worth noting that you can change from one management model to another, and we will see that in the demo. Whatever model you are using, it's important to not carelessly mix commands intended for the other models, or you can have subtle issues creep into your cluster. The following demos will illustrate each of the different models with a deployment resource and show how to migrate from one model to another.
We'll start with the imperative paradigm. Some examples of imperative commands for creating resources are run to create deployments or jobs, and expose to create services. In the imperative paradigm, you only need to know that run will run an application, not that run creates a Kubernetes deployment resource. Some examples of imperative commands to perform updates are scale, to increase the number of replicas, and label.
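The imperative commands just mentioned might look like this (names and values are illustrative):

```shell
# Create resources with verb commands
kubectl run nginx --image=nginx               # run an application
kubectl expose deployment nginx --port=80     # create a Service for the deployment

# Update resources with verb commands
kubectl scale deployment nginx --replicas=3   # increase the number of replicas
kubectl label deployment nginx env=dev        # add a label
```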
Let's run nginx in the default namespace. In other words, create an nginx deployment using an imperative command. But before I hit enter, I'll list all the available completions to show how many different options there are, potentially creating some very messy commands. However, the minimum that you need to know to use run is the image you want to create a container from. You don't need to know anything about pods or deployments. But there aren't enough specific verb commands to express everything you might want to do without understanding a bit about Kubernetes resources.
The next level up is the create command, followed by the type of resource to create. Note that not all resources can be created this way. In a similar manner, there aren't single verbs for all the possible changes you would want to have in a cluster, so there is the set command, which includes subcommands for common updates you might perform. Let's use set to request a quarter of a CPU for the nginx containers. In the command, you do need to know that it is a deployment. It doesn't take long before you have to understand some basic resources when using the imperative model.
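The set command described above might be sketched like this, requesting a quarter of a CPU (250 millicores); note that you have to name the resource kind, deployment, in the command:

```shell
# set has subcommands for common updates (image, resources, env, ...)
# Request 250m, a quarter of a CPU, for the deployment's containers
kubectl set resources deployment nginx --requests=cpu=250m
```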
If the set command doesn't provide what you need, you have to resort to using the patch or edit commands. Both require knowledge of resource specifications. Patch is the more advanced of the two, allowing you to merge patches into live resources. A patch is just a partial spec that specifies the changes you want. Patches can be written in YAML or JSON. There are some complexities around patches, though, such as when you patch an array: do you want to append elements to the array or replace its entire contents? This gives rise to the different kinds of patches that can be performed.
The edit command is more straightforward. Let's see how edit works. Edit opens an editor, which defaults to vi on Linux. You can make changes to the spec of the live resource, and if you make any changes, it will apply them once you've saved the file. Let's set the memory request to 100 mebibytes, and upon saving and quitting, the deployment is updated. Edit is very close to using imperative with configuration files, but without the benefits of version controlling the configuration files. The imperative commands may be most useful for generating manifest files, as we have seen in an earlier lesson. They may also allow you to accomplish some updates faster if you are in a timed Kubernetes certification exam environment.
To migrate to the imperative with configuration files paradigm, you use the get command with your preferred output format, either YAML or JSON, and the --export option. Now you update the resource by modifying the file. Let's change the image pull policy to Always, because later we will change the image's tag to latest. Then, using the replace command with the -f option, we can update the deployment following the imperative with configuration files paradigm. Recall that one of the drawbacks of this model is the potential for configuration loss. I'll quickly demonstrate this.
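The migration steps sketched as commands (note that --export was deprecated in kubectl 1.14 and removed in 1.18; on newer versions you would omit it and strip the generated status and metadata fields from the output yourself):

```shell
# Export the live configuration to a manifest file
kubectl get deployment nginx -o yaml --export > deployment.yaml

# Edit deployment.yaml (e.g. set imagePullPolicy: Always), then replace
kubectl replace -f deployment.yaml
```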
First, I'll update the image for the deployment to nginx:latest using the set command. And let's verify the deployment's image: nginx:latest. That confirms it. Now, if we use replace with the existing configuration file and check the image, we can see that the replace overwrote the work of the set command, and it would happily overwrite any other changes not reflected in the configuration file without a warning. This also illustrates why you shouldn't mix resource management models. But remember that sometimes changes to the configuration are made by Kubernetes itself, so the issue can't be entirely removed by strictly adhering to the resource management model. If you want to gracefully handle differences between the live configuration and the configuration file, you need to use the declarative model.
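The configuration-loss demonstration, sketched as a command sequence (the file name is illustrative):

```shell
# Imperative update outside the file: bump the image tag
kubectl set image deployment nginx nginx=nginx:latest

# Confirm the live deployment now uses nginx:latest
kubectl get deployment nginx -o jsonpath='{.spec.template.spec.containers[0].image}'

# Replace from the (now stale) file: the image silently reverts,
# because replace treats the file as the single source of truth
kubectl replace -f deployment.yaml
```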
To migrate from the imperative with configuration files model to the declarative model, you can use the --save-config option of commands such as create and replace. We will demonstrate what the --save-config option does. Let's take a look at the annotations of the deployment before we use --save-config. Currently, there is only a revision annotation that corresponds to the revision number for the current deployment rollout. Remember that annotations are similar to labels in that they store metadata about a resource, but annotations can't be used to filter resources like labels can.
Now we'll use the replace command with the --save-config option to prepare to manage the resource using the declarative model. If we look at the annotations again, we can see a new last-applied-configuration annotation. For now, you have to take my word that the annotation is actually a JSON representation of the YAML configuration file. This is the key to the declarative model: being able to compare the last applied configuration, the live resource configuration, and a new configuration. The apply command of kubectl is the command that is used when practicing the declarative model. Apply is able to perform a three-way diff of the last applied configuration, the live configuration, and a new configuration file to perform the desired updates without overwriting updates that aren't specified in the configuration file. The apply command has subcommands for working with the last applied configuration annotation. We can use the view-last-applied subcommand to get a YAML version of the annotation, and I'll redirect the output to a file named last-applied.yaml. Now I'll use the diff command to compare the last applied configuration with our deployment.yaml file, to confirm that the annotation is simply the configuration file.
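Sketched as commands, assuming the deployment manifest is in deployment.yaml:

```shell
# Record the file's contents as the last-applied-configuration annotation
kubectl replace -f deployment.yaml --save-config

# Dump the annotation as YAML and save it for comparison
kubectl apply view-last-applied deployment nginx > last-applied.yaml

# Compare the annotation with the manifest: no output means no differences
diff deployment.yaml last-applied.yaml
```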
If there were any differences, the diff command would have displayed some output. To apply a change, you only need to edit the file and pass it to apply. Let's change the image tag to latest and pass the file to apply using the -f option. Apply will perform its three-way diff analysis to decide what needs to change. Let's try another example. I will remove the number of replicas from the configuration file, so the configuration file is no longer controlling the number of replicas. Then I'll use the scale command to increase the number of replicas to two, and confirm the deployment now has two replicas.
At this point, let's note that the scale command does not impact the last applied configuration. The last applied configuration still has replicas set to one, which was the case the last time apply was executed. If we execute apply again and check the number of replicas on the deployment, we see that it reverted to one. That is because apply's diff comparison decides that when a primitive field, like the replicas field, is in the last applied configuration but not in the new configuration file, it sets the field to its default value. This is because it assumes that removing the field from the file means you want to clear the value that was set in the previous configuration. This may not be what's intended; in our case, we wanted to set the replicas to two. This illustrates the kind of issue that can happen when you mix management models. The correct way to remove replicas from the configuration file would have been to remove it from the file, then apply the changes, and only then use the scale command, since the last applied configuration would no longer have any value set for the replicas.
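The replicas scenario above, sketched as a command sequence (the manifest file name is illustrative):

```shell
# After removing "replicas" from deployment.yaml, scale imperatively
kubectl scale deployment nginx --replicas=2

# The last applied configuration still contains replicas: 1, so apply's
# three-way diff clears the field back to its default of one
kubectl apply -f deployment.yaml
kubectl get deployment nginx -o jsonpath='{.spec.replicas}'

# The safe order: apply first with the field removed from the file,
# and only then scale imperatively
kubectl apply -f deployment.yaml
kubectl scale deployment nginx --replicas=2
```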
You can also use the edit-last-applied subcommand to remove the field from the annotation. Any time you remove a field from your configuration file, you should carefully consider the impact. You can also omit fields that you want to be managed outside of the configuration file when you first create a resource.
I have added a link to this video's transcript that explains the complete logic behind the three-way diff decision making process. Now let's clean everything up. To do that, the apply command provides the prune option for deleting resources in a declarative way. It detects resources to delete by finding resources that have a last applied configuration annotation, but that no longer have a corresponding manifest file. This is a risky maneuver. If you have a configuration directory with several subdirectories and someone accidentally calls prune on a subdirectory, it can end up wiping out everything outside of that subdirectory. Because of this, it's usually best to use the delete command. The delete command is more explicit, and the preferred way to delete resources for all of the resource management paradigms.
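Both cleanup approaches, sketched as commands (the directory, file name, and label selector are illustrative; prune requires a selector such as -l or --all):

```shell
# Declarative deletion: prune removes live resources that have a
# last-applied-configuration annotation but no matching file under manifests/
kubectl apply -f manifests/ --prune -l app=nginx

# The explicit, preferred alternative
kubectl delete -f deployment.yaml
```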
Let's quickly recap what we covered in this lesson. The imperative management paradigm is best suited for getting started with Kubernetes and for low-risk development environments where you are quickly trying things out. The imperative with configuration files paradigm gains the benefits of version control, including collaborating with a team. You can use this model in production. The declarative with configuration files paradigm can also be used in production, and it is better suited for larger projects with many resources. The apply command will decide what operations to perform based on a three-way diff analysis between the last applied configuration, the live resource configuration, and the new configuration file. It works well, and it can avoid the configuration loss issue if you carefully decide which fields to include in the configuration file. Remember to stick with whatever model you choose, or be very deliberate when mixing models to avoid the pitfalls.
This lesson helps to clarify how to think about managing resources in Kubernetes. There are many kubectl commands for creating and updating resources. Armed with the knowledge of the different resource management paradigms, you now understand when to use each. In the next lesson, we'll look at a few topics related to networking in Kubernetes. When you're ready, start the next lesson to learn more about Kubernetes networking.
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.