Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
Upon completing this course, you will be able to:
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Manage configuration, sensitive, and persistent data in Kubernetes
- Discuss popular tools and topics surrounding Kubernetes in the ecosystem
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
You should be familiar with:
- Working with Docker and comfortable using it at the command line
The source files used in this course are available here:
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
May 7th, 2021 - Complete update of this course using the latest Kubernetes version and topics
This lesson continues to expand upon what we've already learned about Pods. Specifically, we're going to be exploring the details of working with multi-container Pods. This is where we really start to hit the good stuff. As we learn more about multi-container Pods, we're also going to be learning about Namespaces and Pod logs.
We're using a sample application for this lesson. It's a simple application that increments and prints a counter. It's split into 4 containers across 3 tiers. The application tier includes the server container that is a simple Node.js application. It accepts a post request to increment a counter and a get request to retrieve the current value of the counter. The counter is stored in the Redis container which comprises the data tier. The support tier includes a poller and a counter.
The poller container continually makes a get request back to the server and prints the value. The counter continually makes a post request to the server with random values. All the containers use environment variables for configuration and these Docker images are public, so we can reuse them for this exercise.
Let's walk through modeling the application as a multi-container Pod. We'll start by creating a Namespace for this lesson. Remember that a Namespace separates different Kubernetes resources. Namespaces may be used to isolate users, environments, or applications. You can also use Kubernetes role-based access control (RBAC) to manage users' access to resources in a given Namespace. Using Namespaces is a best practice.
So, let's start using them now, and we'll continue to use them throughout the remainder of this course. Namespaces are created just like any other Kubernetes resource. Here is our Namespace manifest. Namespaces don't require a spec. The main part is the name, which is set to microservices, and it's a good idea to label it as well. Everything in this Namespace will relate to the counter microservices app. So let's create the Namespace with kubectl: kubectl create -f 3.1.yaml.
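The Namespace manifest described above might look like the following sketch. The label key and value are assumptions for illustration; the course's actual file may use different ones.

```yaml
# 3.1.yaml - Namespace for the counter microservices app
apiVersion: v1
kind: Namespace
metadata:
  name: microservices
  labels:
    app: counter   # hypothetical label; the course file may differ
```

Note that only metadata is needed: Namespaces have no spec section.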
When you use kubectl commands, use either the --namespace or -n option to specify the Namespace; otherwise, the default Namespace will be used. You could also use the kubectl create namespace command, but for this course we're going to stick to manifests. Now onto the Pod. I've named the Pod app. Off the top, I want to mention that you can specify the namespace in the metadata for this Pod, but that makes the manifest slightly less portable because the Namespace can't then be overridden at the command line.
Moving down to the Redis container, we'll use the latest official Redis image. The latest version is chosen to illustrate a specific point: when you use the latest tag, Kubernetes will always pull the image whenever the Pod starts. This can introduce bugs if a Pod restarts and pulls a new latest version without you realizing it.
To prevent always pulling the image and instead use an existing local version if one exists, you can set the imagePullPolicy field to IfNotPresent. It's useful to know this, but in most situations you're better off specifying a specific tag rather than latest. When a specific tag is used, the default imagePullPolicy is IfNotPresent. Finally, the standard Redis port of 6379 is declared for this container.
Now, onto the server container. The server container is straightforward. The image is the public image from this sample application. The tag indicates the microservice within the microservices repository. The server runs on port 8080, so that port is declared. The server also requires a REDIS_URL environment variable to connect to the data tier. We can set this in the env sequence.
So how does the server know where to find Redis? Containers in a Pod share the same network stack, so they all share the same IP address. That means they can reach other containers in the Pod on localhost at their declared container ports. The correct host and port in this example is localhost:6379. The imagePullPolicy is omitted because Kubernetes uses IfNotPresent when an explicit tag is given.
We can use the same approach for the counter and poller containers. These containers require the API_URL environment variable to reach the server in the application tier. The correct host and port combination for this example is localhost:8080.
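Putting the pieces together, the Pod manifest might look like this sketch. The image names and tags are placeholders based on the description above, not the course's exact files; the environment variable values follow the localhost reasoning just discussed.

```yaml
# 3.2.yaml - multi-container Pod sketch; image names are hypothetical
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: redis
      image: redis:latest
      imagePullPolicy: IfNotPresent   # avoid re-pulling latest on restart
      ports:
        - containerPort: 6379
    - name: server
      image: example/microservices:server   # hypothetical image:tag
      ports:
        - containerPort: 8080
      env:
        - name: REDIS_URL
          value: redis://localhost:6379   # containers share the Pod's network
    - name: counter
      image: example/microservices:counter   # hypothetical image:tag
      env:
        - name: API_URL
          value: http://localhost:8080
    - name: poller
      image: example/microservices:poller   # hypothetical image:tag
      env:
        - name: API_URL
          value: http://localhost:8080
```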
Now, let's create the Pod, this time adding the -n option to set the Namespace, so that it's created in the microservices Namespace: kubectl create -f 3.2.yaml -n microservices. Remember to include the same Namespace option with kubectl commands relating to the Pod; otherwise, you will be targeting the default Namespace.
If we wanted to get the Pod, we would issue kubectl get -n microservices pod app. The -n namespace option can be included anywhere after kubectl; it doesn't have to be after get. When you have tab completion enabled, it makes sense to put it earlier so you get completions for your target Namespace.
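For example, these invocations are equivalent; only the placement of -n differs (commands assume a running cluster with the Pod created above):

```shell
kubectl get -n microservices pod app
kubectl -n microservices get pod app   # -n before the verb: tab completion now targets microservices
kubectl get pod app -n microservices   # -n at the end works too
```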
Let's observe the output. We'll see 4/4 under the READY column, since we have 4 containers in the Pod. The status also summarizes what is going on, but it is best to describe the Pod to see what is going on in more detail: kubectl describe -n microservices pod app. You'll see the event log has more going on now that there are multiple containers.
The same events are triggered for each container, from Pulling to Starting, as was the case for a single-container Pod. If something goes awry, you should check the event log to see what's happening behind the scenes and debug the issue. In this case, everything looks good.
Once the containers are running, we can look at the container logs to see what they're doing. Logs are simply anything that is written to standard output or standard error in the container. The containers need to write messages to standard output or standard error; otherwise, nothing will appear in the logs. Kubernetes records the logs, and they can be viewed with the kubectl logs command, which retrieves logs for a specific container in a given Pod. It dumps all of the logs by default, or you can use the --tail option to limit the number of log lines presented.
Let's see the 10 most recent log lines for the counter container in the app Pod: kubectl logs -n microservices app counter --tail 10. Here we can see the counter is incrementing the count by random numbers between 1 and 10. Let's check the value of the count by inspecting the logs for the poller container. This time we'll use the -f option, which is short for follow, to stream the logs in real time: kubectl logs -n microservices app poller -f. We can see the count is increasing every second as the counter continues to increment it. That confirms it: our first multi-container application is up and running. Press Control + C to stop following the logs.
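Collected in one place, the log commands from this demo look like the following (these assume the cluster and Pod from earlier; the -c form of selecting a container is shown as an explicit alternative to the positional form used above):

```shell
# last 10 log lines from the counter container in the app Pod
kubectl logs -n microservices app -c counter --tail 10

# stream the poller container's logs in real time (-f is short for follow)
kubectl logs -n microservices app -c poller -f
# press Control + C to stop following
```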
In this lesson, we created a multi-container Pod that implements a 3-tier application. We used the fact that containers in the same Pod can communicate with one another using localhost. We also saw how to get logs from containers running in Kubernetes by using the kubectl logs command. Remember that logs work by recording what the container writes to standard output and standard error. The logs also allowed us to confirm that the application is working as expected by continuously incrementing the count.
But there are some issues with the current implementation. Because Pods are our smallest unit of work, Kubernetes can only scale out by increasing the number of Pods, not the containers inside a Pod. If we want to scale out the application tier with the current design, we have to scale out all the other containers proportionately. This means there would be multiple Redis containers running, each with its own copy of the counter. That's certainly not what we're going for.
It would be a much better approach if we were able to scale each of these services independently. Breaking the application out into multiple Pods and connecting them with services is our ideal implementation. We'll walk through the design in the next lesson. Before moving on, though, it's worth noting that sometimes you do want each container in a Pod to scale proportionately. It comes down to how tightly coupled the containers are, and whether it makes sense to think of them as a single unit.
With that point out of the way, I'll see you in our next lesson where we will leverage services to break our tightly coupled Pod design into multiple independent Pods.
Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in the networking and operations side of the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan holds a number of certifications, including Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, AWS Solutions Architect, and a certification in Project Management.