
ConfigMaps and Secrets

Contents

Course Introduction
  Introduction (Preview, 4m 6s)
Deploying Containerized Applications to Kubernetes
  Pods (11m 34s)
  Services (5m 10s)
  Probes (8m 26s)
  Volumes (11m 42s)
The Kubernetes Ecosystem
Course Conclusion
Overview

Difficulty: Beginner
Duration: 1h 58m
Students: 3554
Rating: 4.4/5

Description

Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.

This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.

The source files used in this course are available in the course's GitHub repository.

Learning Objectives 

  • Describe Kubernetes and what it is used for
  • Deploy single and multiple container applications on Kubernetes
  • Use Kubernetes services to structure N-tier applications 
  • Manage application deployments with rollouts in Kubernetes
  • Ensure container preconditions are met and keep containers healthy
  • Learn how to manage configuration, sensitive, and persistent data in Kubernetes
  • Discuss popular tools and topics surrounding Kubernetes in the ecosystem

Intended Audience

This course is intended for:

  • Anyone deploying containerized applications
  • Site Reliability Engineers (SREs)
  • DevOps Engineers
  • Operations Engineers
  • Full Stack Developers

Prerequisites

You should be familiar with:

  • Working with Docker and comfortable using it at the command line

Updates

August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics

 

Transcript

Up until now, the deployment template has included all of the configuration required by pod containers. This is a big improvement over storing the configuration inside the binary or container image, which makes it difficult to reuse. Having configuration in the pod spec also makes it less portable. Furthermore, if the configuration involves sensitive information such as passwords or API keys, then it also presents a security issue.

 

ConfigMaps and Secrets are Kubernetes resources that you can use to separate configuration from pod specs. This separation makes it easier to manage and change configuration, and it also makes for more portable manifests. ConfigMaps and secrets are very similar and are used in the same way when it comes to pods. One difference is that secrets are specifically for storing sensitive information. Secrets reduce the risk of their data being exposed. However, the cluster administrator also needs to ensure all the proper encryption and access control safeguards are in place before secrets can really be considered safe. We’ll focus on using secrets and leave out the security details in this introductory course. Another difference is that secrets have specialized types for storing credentials required to pull images from registries and for storing TLS private keys and certificates, but I’ll refer you to the official documentation when you need to make use of those capabilities.

 

ConfigMaps and secrets store data as key-value pairs. Pods must reference configmaps or secrets to use their data, and they can consume the data either as files mounted through a volume or as environment variables. We’ll see examples of both in the demo. We’ll use a configmap to configure redis, using a volume to mount a config file, and we’ll use a secret to inject a sensitive environment variable into the app tier.

 

First let’s create a config namespace for this demo.

kubectl create -f 10.1-namespace.yaml
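The contents of 10.1-namespace.yaml aren’t shown on screen, but a minimal namespace manifest for the config namespace we just created would look like this:

apiVersion: v1
kind: Namespace
metadata:
  name: config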

 

Now let’s see how the configmap manifest looks. First, notice there is no spec; instead, the key-value pairs that the configmap stores are under a mapping named data. Here we have a single key named config. You can have more than one, but one is enough for our purpose. The value of config is a multi-line string that represents the file contents of a redis configuration file. The bar or pipe symbol after config is YAML syntax for starting a multi-line string, and it causes all of the following lines to be the value of config, including the Redis config comment. The configuration file values set the tcp-keepalive and maxmemory settings of redis. These are arbitrarily chosen for this example. Separating the configuration makes it easy to manage it apart from the pod spec. We will have to make some initial changes to the pod to make use of the configmap, but after that the two can be managed separately.
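As a rough sketch of the shape just described (the maxmemory value and the comment wording are placeholders; the course file may differ slightly), the manifest looks something like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  config: |
    # Redis config file
    tcp-keepalive 240
    maxmemory 100mb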

 

Let’s take a look at the updated data-tier. I’m comparing it with the data tier from our probes lesson, which doesn’t include the persistent volume, so we avoid any issues with satisfying the persistent volume claim. Starting from the volumes, a new configMap type of volume is added and it references the redis-config configMap we just saw. Items declares which key-value pairs we want to use from the configmap. We only have one in our case, and that is config. If you have multiple environments, you could easily do things like referencing a dev configuration in one environment and a production configuration in another. The path sets the path of the file that will be mounted with the config value. This is relative to the mount point of the volume. Up above in the container spec, the volumeMounts mapping declares the use of the config volume and mounts it at /etc/redis. So the full absolute path of the config file will be /etc/redis/redis.conf. The last change that we need is to use a custom command for the container so that redis knows to load the config file when it starts. We do that by setting redis-server /etc/redis/redis.conf as the command. With this setup we can now independently configure redis without touching the deployment template.
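Sketching just the parts of the data-tier pod template that change (the container name is assumed and the image and surrounding deployment fields are omitted), the additions look roughly like this:

    spec:
      containers:
        - name: redis
          command:
            - redis-server
            - /etc/redis/redis.conf   # load the mounted config file on startup
          volumeMounts:
            - name: config            # use the config volume declared below
              mountPath: /etc/redis
      volumes:
        - name: config
          configMap:
            name: redis-config        # the configmap we just saw
            items:
              - key: config           # the key to project from the configmap
                path: redis.conf      # file name, relative to the mount point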

 

As a quick side note before we create the resources: if we were dealing with a secret rather than a configmap, the volume type would be secret rather than configMap and the name key would be replaced with secretName. Everything else would be the same.
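In other words, only the volume definition changes; with a hypothetical secret it would read roughly:

      volumes:
        - name: config
          secret:
            secretName: redis-secret  # hypothetical secret name
            items:
              - key: config
                path: redis.conf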

 

Let’s create the resources.

kubectl create -n config -f 10.2 -f 10.3

 

Now let’s start a shell in the container using kubectl exec to inspect the effect of our configmap.

 

kubectl exec -it -n config data-tier-... -- /bin/bash

 

Start by catting out the contents of the /etc/redis/redis.conf file.

cat /etc/redis/redis.conf

See that the contents match the configmap value that we specified. Now, to prove that redis actually loaded the config, we can output the tcp-keepalive configuration value to make sure it matches the 240 value in the file.

 

redis-cli CONFIG GET tcp-keepalive
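Since we’re running redis-cli interactively inside the container, the output lists the setting name followed by its current value, so you should see something like:

1) "tcp-keepalive"
2) "240"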

And there we have it. Separation of configuration and pod spec is complete.

Let’s exit out of the container.

exit

 

Before we move on, I want to highlight how changes to configmaps interact with volumes and deployments. Let’s use kubectl edit to update the configmap.

 

kubectl edit -n config configmaps redis-config

 

And change the tcp-keepalive value from 240 to 500

:%s/240/500/

:wq

 

And watch the redis config file mounted in the container.

watch kubectl exec -n config data-tier-... -- cat /etc/redis/redis.conf

Within around a minute, the volume will reflect the change we made to the configmap. That is pretty slick, but redis only loads the configuration file on startup, so it won’t impact the running redis process. And because we never updated the deployment’s template, we never triggered a rollout. Let’s confirm that the tcp-keepalive value redis is using hasn’t been updated by running the redis-cli CONFIG GET tcp-keepalive command via kubectl exec.

kubectl exec -n config data-tier-... -- redis-cli CONFIG GET tcp-keepalive

 

That is something to keep in mind when you separate the configuration from the pod spec. To cause the deployment’s pods to restart and have redis apply the new configuration changes, we can use

kubectl rollout restart -n config deployment data-tier

This will cause a rollout using the current deployment template, and when the new pods start, the redis containers will use the new configuration. We can verify that using the redis-cli with kubectl exec again.

kubectl exec -n config data-tier-... -- redis-cli CONFIG GET tcp-keepalive

 

Now we can quickly see how secrets work and see the similarities they have with configmaps. We will add a secret to the app tier using an environment variable. It won’t have any functional impact but it will show the idea.

Here is our secret manifest. I’ll mention up front that you usually don’t want to check secrets into source control, given their sensitive nature. It makes more sense to have secrets managed separately. You could still use manifest files, as we are here, or the secret could be created directly with kubectl. The command at the bottom of the file shows how to create the same secret without a manifest file.
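The exact command in the file isn’t reproduced in this transcript, but creating an equivalent secret imperatively typically looks something like this (the literal value here is just a placeholder):

kubectl create secret generic app-tier-secret -n config --from-literal=api-key=hello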

Focusing on the manifest itself, we can see a similar structure to a configmap, except the kind is Secret rather than ConfigMap, and secrets can use a stringData mapping in addition to the data mapping we used in our configMap. As part of the effort to reduce the risk of secrets being exposed in plain text, they are stored as base64-encoded strings. Kubernetes automatically decodes them when they are used in a container. I have to point out that base64 encoding does not really offer any additional security. It is not encrypting the values. Anyone can decode a base64 string, so continue to treat the encoded strings as sensitive data. With that cautionary statement out of the way, the stringData mapping allows you to specify secrets without first encoding them, because Kubernetes encodes them for you. It is simply a convenience. If you use the data mapping, you must specify encoded values. The api-key secret is the one that we will use in the app tier, but I’ve included the encoded and decoded key-value pairs to illustrate the base64 encoding. In the data mapping, the encoded value is hello base64 encoded.
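Putting that together, a sketch of the secret manifest looks something like the following (the extra illustrative key name in data is an assumption):

apiVersion: v1
kind: Secret
metadata:
  name: app-tier-secret
stringData:
  api-key: hello      # plain text; Kubernetes base64-encodes it into data for you
data:
  encoded: aGVsbG8=   # "hello" base64 encoded by hand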

 

Let’s create the secret to see this.

kubectl create -f 10.4.yaml -n config 

Now if we describe the secret, we can only see the keys;

kubectl describe -n config secret app-tier-secret

the values are hidden as part of the effort to shield secret values. However, we can see the stored values with kubectl edit.

kubectl edit -n config secrets app-tier-secret

From here we can see that the stringData mapping is not actually stored. The values are base64 encoded and added to the data mapping.

The decoded value we entered in stringData was hello, but now it is the base64-encoded string beginning with aGV. We can quit the edit by entering :q.

 

Shifting over to the app tier deployment, an API_KEY environment variable is added. A valueFrom mapping is used to reference a source for the value. Here the source is a secret, so secretKeyRef is used. If you needed to get the environment variable value from a configmap rather than a secret, you would use configMapKeyRef instead of secretKeyRef. The name is the name of the secret, and key is the name of the key in the secret you want to get the value from.
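A sketch of the relevant container spec lines (only the env addition is shown; the rest of the app tier template is unchanged):

          env:
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: app-tier-secret   # the secret created from 10.4.yaml
                  key: api-key            # key within that secret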

 

Let’s create the app tier

kubectl create -f 10.5.yaml -n config

And we can use the env command in the container to dump all of the environment variables.

kubectl exec -n config app-tier-... -- env

We can find the API_KEY variable amid the wash of variables and observe that the value is the decoded value we entered in the stringData of our secret manifest, not an encoded value. There is no need to decode it inside the container.

 

I’ll just mention before wrapping up that, just as with using volumes to reference secrets or configMaps, you should restart the rollout to have the deployment’s pods pick up a new version of the environment variables. Environment variables do not update on the fly the way the volume-mounted files did, so actively managing the rollout is really a must.
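Assuming the app tier deployment is named app-tier, that restart mirrors the one we ran for the data tier:

kubectl rollout restart -n config deployment app-tier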

 

This concludes our lesson on configMaps and secrets. Let’s recap what we learned. 

ConfigMaps and secrets are used for separating configuration data from pod specs or what would otherwise be stored in container images.

ConfigMaps and secrets both store groups of key and value data. Secrets should be used when storing sensitive data. 

Both can be accessed in pod containers either by mounting them with volumes or by referencing them as environment variables.

This was our last hands-on lesson in the course. The hands-on lessons have prepared you to get started with managing and deploying applications on Kubernetes. 

 

The next lesson highlights some of the areas of the Kubernetes ecosystem that I think you should know about. You are almost at the finish line. Join me for the next lesson and we'll cross it together.

About the Author

Students: 36591
Labs: 97
Courses: 11
Learning paths: 7

Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.
