Course Introduction
Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
Course Conclusion
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
Learning Objectives
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Manage configuration data, sensitive data, and persistent data in Kubernetes
- Discuss popular tools and topics surrounding Kubernetes in the ecosystem
Intended Audience
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
Prerequisites
You should be familiar with:
- Working with Docker and comfortable using it at the command line
Source Code
The source files used in this course are available here:
Updates
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
May 7th, 2021 - Complete update of this course using the latest Kubernetes version and topics
Up until now, the deployment template has included all of the configuration required by Pod containers. This is a big improvement over storing the configuration inside the binary or container image. However, having configuration in the Pod spec also makes it a lot less portable. Furthermore, if the configuration involves sensitive information, such as passwords or API keys, this also presents a security issue.
So Kubernetes provides us with ConfigMaps and Secrets, which are Kubernetes resources that you can use to separate the configuration from the Pod specs. This separation makes it easier to manage and change configurations. It also makes for more portable manifests. ConfigMaps and Secrets are very similar and used in the same way when it comes to Pods. One key difference is that Secrets are specifically for storing sensitive information. Secrets reduce the risk of their data being exposed. However, the cluster admin also needs to ensure that all the proper encryption and access control safeguards are in place before Secrets can actually be considered safe. We'll focus on using Secrets and leave out the security details from this introductory course.
Another difference is that Secrets have specialized types for storing credentials, such as those required to pull images from registries. They are also good at storing TLS private keys and certificates, but I'll refer you to the official documentation if you need to make use of those capabilities. ConfigMaps and Secrets store data as key-value pairs. Pods must reference ConfigMaps or Secrets to use their data. Pods can use the data by mounting them as files through a volume or as environment variables. We'll see examples of these in the demo.
We're going to be using a ConfigMap to configure Redis, using a volume to mount the config file it will use, and a Secret to inject sensitive environment variables into the app tier. First, let's create a config namespace for this demo with kubectl create -f 10.1.namespace.yaml.
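The lab files aren't reproduced in this transcript, but a namespace manifest like 10.1.namespace.yaml is typically minimal. A sketch of what it likely contains, assuming only the namespace name mentioned above:

```yaml
# Hypothetical sketch of 10.1.namespace.yaml:
# a namespace to isolate this demo's resources
apiVersion: v1
kind: Namespace
metadata:
  name: config
```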
Now let's see how the ConfigMap manifest looks. First, notice that there is no spec; instead, we have the key-value pairs the ConfigMap stores, under a mapping named data. Here we have a single key named config. You can have more than one, but one is enough for our purpose. The value of config is a multiline string that represents the file contents of a Redis configuration file. The bar or pipe symbol after config is YAML for starting a multiline string and causes all the following lines to be the value of config, including the Redis config comment. The configuration file sets the tcp-keepalive and maxmemory settings of Redis. These are arbitrarily chosen for this example. Separating the configuration makes it easy to manage it independently of the Pod spec. We will have to make some initial changes to the Pod to make use of the ConfigMap, but after that, the two can be managed separately.
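A sketch of the ConfigMap just described. The tcp-keepalive value of 240 matches the value checked later in the demo; the ConfigMap name and the maxmemory value are assumptions for illustration:

```yaml
# Hypothetical sketch of the Redis ConfigMap described above
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  config: |        # the pipe starts a multiline string value
    # Redis config file
    tcp-keepalive 240
    maxmemory 100mb
```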
Let's take a look at the updated data tier. I'm comparing against the data tier from our probes lesson, which doesn't include the persistent volume, to avoid not being able to satisfy the persistent volume claim. Starting from the volumes, a new configMap type of volume is added, and it references the redis-config ConfigMap we just saw. Items declare which key-value pairs we want to use from the ConfigMap. We only have one in our case, and that is config.
If you have multiple environments, you could easily do things like referencing a dev configuration in one environment and a production configuration in another. The path sets the path of the file that will be mounted with the config value. This is relative to the mount point of the volume. Up above in the container spec, the volumeMounts mapping declares the use of the config volume and mounts it at /etc/redis. So the full absolute path of the config file will be /etc/redis/redis.conf.
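Putting the volume and mount pieces together, the relevant parts of the data tier Pod template would look roughly like this (the ConfigMap name and container details are assumptions based on the lesson):

```yaml
# Hypothetical sketch of the data tier Pod template's volume wiring
spec:
  containers:
    - name: redis
      image: redis
      volumeMounts:
        - name: config
          mountPath: /etc/redis      # mount point of the volume
  volumes:
    - name: config
      configMap:
        name: redis-config           # the ConfigMap we just saw
        items:
          - key: config
            path: redis.conf         # full path: /etc/redis/redis.conf
```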
The last change that we need is to use a custom command for the container so that Redis knows to load the config file when it starts. We do that by setting redis-server /etc/redis/redis.conf as the command. With this setup, we can now independently configure Redis without touching the deployment template. As a quick side note, before we create the resources: if we were dealing with a Secret rather than a ConfigMap, the volume type would be secret rather than configMap, and the name key would be replaced with secretName. Everything else would be the same.
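The command change and the Secret-volume variant just mentioned can be sketched as follows (the Secret name is hypothetical):

```yaml
# The custom command telling Redis to load the mounted config file
command:
  - redis-server
  - /etc/redis/redis.conf
```

And for comparison, the same volume if it referenced a Secret instead of a ConfigMap:

```yaml
volumes:
  - name: config
    secret:
      secretName: redis-config   # "name" becomes "secretName" for Secret volumes
      items:
        - key: config
          path: redis.conf
```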
Let's create the resources. Now let's start a shell in the container using kubectl exec to inspect the effect of our ConfigMap. Start by cat-ing out the contents of the /etc/redis/redis.conf file. See that the contents match the ConfigMap value that we specified. Now, to prove that Redis actually loaded the config, we can output the tcp-keepalive configuration value and make sure it matches the 240 value in the file. And there we have it: separation of configuration and Pod spec is complete, so let's exit out of the container.
Before we move on, I want to highlight how changes to the ConfigMap interact with volumes and deployments. So let's use kubectl edit to update the ConfigMap, and let's change the tcp-keepalive value from 240 to 500. Within around a minute, the volume will reflect the change we made to the ConfigMap. That is pretty slick, but Redis only loads the configuration file at startup, so it won't impact the running Redis process. And because we never updated the deployment's template, we never triggered a rollout.
So let's confirm that the tcp-keepalive value in Redis hasn't been updated, using the redis-cli. This is something to keep in mind when you separate the configuration from the Pod spec. To cause the deployment's Pods to restart and have Redis apply the new configuration changes, we can use kubectl rollout restart deployment data-tier --namespace config. This will cause a rollout using the current deployment template, and when the new Pods start, the Redis containers will use the new configuration. We can verify that with the redis-cli.
Now we can quickly see how Secrets work and the similarities they have with ConfigMaps. We will add a Secret to the app tier using an environment variable. It won't have any functional impact, but it will show the idea. Here is our Secret manifest. I mentioned up front that you usually don't want to check in Secrets to source control, given their sensitive nature. It makes more sense to have Secrets managed separately. You could still use manifest files as we are here, or the Secret could be created directly with kubectl.
The command at the bottom of the file shows how to create the same Secret without a manifest file. Focusing on the manifest file, we can see that it has a similar structure to our ConfigMap, except that the kind is Secret rather than ConfigMap, and Secrets can use a stringData mapping in addition to the data mapping we used in our ConfigMap. As part of the effort to reduce the risk of Secrets being exposed in plain text, they are stored as base-64 encoded strings, and Kubernetes automatically decodes them when they are used in a container. I also have to point out that base-64 encoding does not really offer any additional security. It's not encrypting the values, and anyone can decode base-64, so continue to treat the encoded strings as sensitive data.
With that cautionary statement out of the way, the stringData mapping allows you to specify Secret values without first encoding them, because Kubernetes will encode them for you. It's simply a convenience. If you use the data mapping, you must specify encoded values. The api-key Secret value is the one that we will use in the app tier, but I've included both encoded and decoded key-value pairs to illustrate base-64 encoding.
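A sketch of what such a Secret manifest might look like. The Secret name and the api-key value are hypothetical; the encoded/decoded pair matches the hello example used below:

```yaml
# Hypothetical sketch of the Secret manifest described above
apiVersion: v1
kind: Secret
metadata:
  name: app-tier-secret
stringData:                 # plain-text values; Kubernetes encodes them for you
  api-key: not-a-real-key   # placeholder value for illustration
  decoded: hello
data:                       # values here must already be base-64 encoded
  encoded: aGVsbG8=         # "hello"
```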
In the data mapping, the encoded value is hello, base-64 encoded. So let's create the Secret to see this. Now, if we describe the Secret, we can only see the keys. The values are hidden as part of the best effort to shield Secret values. We can see what the values are with kubectl edit. From here, we can see that the stringData mapping is not actually stored. The values are base-64 encoded and then added to the data mapping. The decoded value we entered in stringData was hello, but now it is the base-64 encoded string beginning with aGV.
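You can reproduce the relationship between the stringData value and the stored data value at the command line:

```shell
# Encode "hello" as base-64 (-n keeps a trailing newline out of the encoding)
echo -n 'hello' | base64
# Prints: aGVsbG8=

# Decoding the stored value recovers the original string
echo -n 'aGVsbG8=' | base64 --decode
# Prints: hello
```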
Shifting over to the app tier deployment, the API key environment variable is added. The valueFrom mapping is used to reference the source of the value. Here, the source is a Secret, so secretKeyRef is used. If you need to get the environment variable from a ConfigMap rather than a Secret, you would use configMapKeyRef instead of secretKeyRef. The name is the name of the Secret, and the key is the name of the key in the Secret you want to get the value from.
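The environment variable wiring just described looks roughly like this (the variable name and Secret name are assumptions for illustration):

```yaml
# Hypothetical sketch of the app tier container's environment
env:
  - name: API_KEY
    valueFrom:
      secretKeyRef:            # use configMapKeyRef for a ConfigMap source
        name: app-tier-secret  # the name of the Secret
        key: api-key           # the key within the Secret
```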
So let's create the app tier now. And we can use the env command in the container to dump all the environment variables. We can find the API key variable amid the wash of variables and observe that the value is the decoded value that we entered in the stringData of our Secret manifest file, not the encoded value. There is no need to decode the value inside of the container. I'll just mention before wrapping up that, just like with using volumes to reference Secrets or ConfigMaps, you should trigger a rollout restart to have the deployment's Pods restart with the new version of the environment variables. Environment variables do not update on the fly like volumes do, so actively managing the rollout is a must.
This concludes the lesson on ConfigMaps and Secrets, so let's recap what we've learned. ConfigMaps and Secrets are used for separating configuration data from Pod specs, or what would otherwise be stored in container images. ConfigMaps and Secrets both store groups of key-value data. Secrets should be used when storing sensitive data. Both can be accessed in Pod containers by referencing them using volumes or environment variables.
This was our last hands-on lesson for the course, and the hands-on lessons have prepped you to get started with managing and deploying applications in Kubernetes. In our next lesson, we're going to highlight some areas where the Kubernetes ecosystem is particularly strong and that I think you should know about. You're almost at the finish line, so join me in the next lesson and we'll cross it together.
Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in networking and operations in the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan holds a number of certifications, including Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, AWS Solutions Architect, and a certification in Project Management.