
Multi Container Pods and Service Discovery

The course is part of these learning paths

Certified Kubernetes Administrator (CKA) Exam Preparation
Introduction to Kubernetes

Overview
Difficulty: Advanced
Duration: 1h 28m
Students: 2496

Description

Introduction 
This course provides an introduction to using Kubernetes to deploy and manage containers.

Learning Objectives 
Be able to recognize and explain the Kubernetes service 
Be able to explain and implement a Kubernetes container
Be able to orchestrate and manage Kubernetes containers 

Prerequisites
This course requires a basic understanding of cloud computing. We recommend completing the Google Cloud Fundamentals course before completing this course. 

Transcript

Hello and welcome back to the Introduction to Kubernetes course for Cloud Academy. I am Adam Hawkins and I'm your instructor for this lesson. The last lesson introduced pods and services. It seems like we're hitting our stride now and making progress. It feels good, right?

This lesson covers multi-container pods and service discovery. This is where we really start to hit the good stuff. In this lesson, we're covering namespaces, multi-container pods, inter-pod networking, service discovery and fetching logs.

We're using a sample application for this lesson. It's a simple application that increments and prints a counter. There are three application containers. The server container is a simple Node.js application. It accepts a POST request to increment a counter and a GET request to retrieve the counter. The counter is stored in Redis. The poller container continually makes a GET request back to the server and prints the value. The counter container continually makes a POST request to the server with random values. All the containers use environment variables for configuration. Also, the Docker images are public, so we can reuse them for this exercise.

Here you can see the application running with Docker Compose. The counter container constantly updates the value while the poller prints it. This application actually has four containers: Redis, the server, the counter, and the poller. The server depends on Redis, and the counter and poller depend on the server. Let's walk through modeling this application with pods. We'll start by creating a namespace for this lesson. Remember that a namespace separates different Kubernetes resources. Namespaces may be used to isolate users, environments, or applications. You can also use Kubernetes role-based access control to manage users' access rights to resources in a given namespace. Using namespaces is best practice. Let's start using namespaces now, and we'll continue throughout the remainder of the course. They're created just like any other Kubernetes resource.

Create a new namespace.yml file with your editor. Set the API version, kind, and metadata. Namespaces only require a name. Set the name to lesson201 and label it as well. Save the file and create it with kubectl. Future kubectl commands will use a --namespace option. Now on to the pod.
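
A minimal sketch of what namespace.yml might look like. The transcript doesn't specify the label, so the key and value below are hypothetical:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: lesson201
      labels:
        app: counter   # hypothetical label; any key/value pair works

Then create it with kubectl:

    kubectl create -f namespace.yml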

Create a new pod.yml file in your editor. Fill in the API version, kind, metadata, and spec. I've named the pod example. We'll start with the redis container. We'll use the latest official Redis image; the version is actually irrelevant for this example. Note the image pull policy. This is useful when dealing with floating tags like latest. The value is set to IfNotPresent, which does what it says on the tin: Kubernetes pulls the image the first time it's used and doesn't pull it again after that. Kubernetes defaults to Always if the latest tag is specified, or IfNotPresent otherwise.

Now on to the server container. The server container is straightforward. The image is the public image from the sample application. The web server runs on port 8080, so that's declared in ports. The server also requires a Redis URL environment variable. We can set this in the env key, but what's the correct value? How does the server container know where to find Redis? The answer is simple: containers in a pod may reach the others on localhost at their declared container ports. The correct host and port for this example is localhost:6379. The image pull policy is omitted because Kubernetes uses IfNotPresent when an explicit tag is given. We can use the same approach for the counter and poller containers. These containers require the API URL environment variable. The correct host and port combo for this example is localhost:8080, the container port of the server container.
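
Putting the pieces together, pod.yml might look roughly like the sketch below. The sample application's image names aren't given in the transcript, so the image values, along with the REDIS_URL and API_URL variable names, are assumptions based on the description:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example
    spec:
      containers:
        - name: redis
          image: redis:latest
          imagePullPolicy: IfNotPresent     # pull once, reuse on later starts
          ports:
            - containerPort: 6379
        - name: server
          image: sample-app/server:1.0      # placeholder for the public image
          ports:
            - containerPort: 8080
          env:
            - name: REDIS_URL               # assumed variable name
              value: redis://localhost:6379 # containers in a pod share localhost
        - name: counter
          image: sample-app/counter:1.0     # placeholder for the public image
          env:
            - name: API_URL                 # assumed variable name
              value: http://localhost:8080
        - name: poller
          image: sample-app/poller:1.0      # placeholder for the public image
          env:
            - name: API_URL
              value: http://localhost:8080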

Save the file and create the pod. Remember to include the --namespace option with all kubectl commands. The output is much more interesting this time around. Now READY shows a number out of four. STATUS shows what's going on as well. It's best to describe the pod to see what's happening in detail. You'll see the event log is more useful now that there are multiple containers. You can see the events and all of the history.
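
Creating and inspecting the pod might look like this; the get output shown is illustrative:

    kubectl create -f pod.yml --namespace lesson201
    kubectl get pods --namespace lesson201
    # NAME      READY   STATUS    RESTARTS   AGE
    # example   4/4     Running   0          30s
    kubectl describe pod example --namespace lesson201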

Run kubectl get pods to verify things are running. Remember to check kubectl describe pod to see what's happening behind the scenes, especially if something goes wrong. Repeat running get pods until everything starts correctly. I like to check logs for newly started applications; this gives me an easy way to verify things. The kubectl logs command retrieves logs for a specific container in a given pod. It outputs a small number of recent logs by default, or it can stream logs in real time.
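
For example, fetching recent logs for a specific container, or streaming them with -f:

    kubectl logs example -c poller --namespace lesson201      # recent log lines
    kubectl logs -f example -c poller --namespace lesson201   # stream in real time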

Let's check the counter value by inspecting the logs for the poller container, as shown above. You can also stream logs with the -f option. All right, cool. We've created a multi-container pod and learned how to fetch logs. We're slowly building up our skills and getting better. So what do we do next? Let's consider scaling out the server. What would happen? Kubernetes scales at the pod level, not at the individual container level. Scaling this pod would actually create multiple Redis containers, so each pod would have its own counter. That's certainly not what we're going for. It also doesn't make sense to scale the server and counter at the same time; we may want to scale them individually. Breaking the application out into multiple pods and connecting them with services is the correct approach.

The previous lesson touched on services. Time to turn it up a notch. We'll split this application into the following resources: a Redis pod and a service to expose Redis to the server; a server pod and a service to expose the server to the counter and poller; and finally a pod for the counter and poller. This creates three different application tiers: the data tier, which contains the Redis container; the app tier, for the server; and the support tier, for the counter and poller. We'll use these concepts when labeling the pods.

Let's start by writing the Redis service. Create a new data-tier-service.yml file with your editor. The service spec is similar to the previous exercise. Here we specify the selector with the labels app equal to example and tier equal to data. The service exposes the port named redis. Hypothetically, we could also add a Memcached container, or any other kind of container, to the data tier. Then we could add another port entry with a new container port set to whatever the relevant port value is. This is an example of exposing multiple containers via a single service, and an important concept to understand about how services actually work. We only have one port in this example, though. Note that the type is not set in this example. Type defaults to ClusterIP, which creates a virtual IP inside the cluster for internal access only. Save the file when you're done.
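
A sketch of data-tier-service.yml. The service name is inferred from the EXAMPLE_DATA_TIER_* environment variables discussed later in this lesson:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-data-tier    # inferred from EXAMPLE_DATA_TIER_SERVICE_HOST
      labels:
        app: example
    spec:
      selector:                  # matches pods labeled app=example, tier=data
        app: example
        tier: data
      ports:
        - name: redis
          port: 6379
      # no type set: defaults to ClusterIP (internal virtual IP only)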

Next, create a new data-tier-pod.yml file with your editor. We can copy the redis container bits from the previous example. Set the app and tier labels to example and data. Everything else stays just like before. Save the file when you're done.
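
And data-tier-pod.yml, reusing the redis container from the earlier pod; the pod name here is hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: data-tier          # hypothetical name
      labels:
        app: example
        tier: data
    spec:
      containers:
        - name: redis
          image: redis:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 6379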

Create these two resources in the correct namespace. Remember the --namespace option. Just like before, repeat checks with kubectl get pods until everything is running. Now we can move on to the app tier.

Create a new app-tier-pod.yml file with your editor. The API version, kind, and metadata are similar to the previous examples. Set the app label to example and the tier to app. The container spec is similar, with one key difference: the redis container is no longer part of the pod. Instead, it is accessible via a service. Kubernetes automatically sets environment variables for each service. Specifically, there is an environment variable for the service host and an environment variable for each named port. Kubernetes uses a pattern to create the environment variables. The host variable is the capitalized and underscored service name with SERVICE_HOST appended. Port variables are the capitalized and underscored service name with SERVICE_PORT appended, followed by the port name. Our values are EXAMPLE_DATA_TIER_SERVICE_HOST and EXAMPLE_DATA_TIER_SERVICE_PORT_REDIS. Kubernetes interpolates environment variables with the $() syntax. This allows composing application-specific environment variables from the Kubernetes-provided values. Note that for this to work, the service must exist before the pod is created. This completes our work on the application tier pod.
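
A sketch of app-tier-pod.yml with the composed environment variable. The pod name, image, and REDIS_URL name are assumptions, while the $(...) values follow the pattern just described:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-tier                   # hypothetical name
      labels:
        app: example
        tier: app
    spec:
      containers:
        - name: server
          image: sample-app/server:1.0 # placeholder for the public image
          ports:
            - containerPort: 8080
          env:
            - name: REDIS_URL          # assumed variable name
              # composed from the Kubernetes-provided service variables
              value: redis://$(EXAMPLE_DATA_TIER_SERVICE_HOST):$(EXAMPLE_DATA_TIER_SERVICE_PORT_REDIS)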

Now on to the service. Create a new app-tier-service.yml file with your editor. This is similar to the previous data tier service. The name changes, the tier is set to app, and the port name and value change. Save this file and then create the service and pod.
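
app-tier-service.yml might look like this; the service name is inferred from the environment variables the support tier will use, and the port name server is an assumption:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-app-tier    # inferred from EXAMPLE_APP_TIER_SERVICE_HOST
      labels:
        app: example
    spec:
      selector:
        app: example
        tier: app
      ports:
        - name: server          # assumed port name
          port: 8080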

Now on to the support tier. We don't need a service for this; just a pod will do. Create a new support-tier.yml file with your editor. This pod is similar to the previous app tier pod. Again, the name changes and the tier is set to support. The counter and poller containers are created with the API URL set using the environment variables provided by the app tier service. Note that here we use EXAMPLE_APP_TIER_SERVICE_HOST instead. Save the file and create the pod.
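
And a sketch of support-tier.yml. The SERVICE_PORT_SERVER suffix below assumes the app tier service named its port server; the pod name and images remain placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: support-tier                # hypothetical name
      labels:
        app: example
        tier: support
    spec:
      containers:
        - name: counter
          image: sample-app/counter:1.0 # placeholder for the public image
          env:
            - name: API_URL
              value: http://$(EXAMPLE_APP_TIER_SERVICE_HOST):$(EXAMPLE_APP_TIER_SERVICE_PORT_SERVER)
        - name: poller
          image: sample-app/poller:1.0  # placeholder for the public image
          env:
            - name: API_URL
              value: http://$(EXAMPLE_APP_TIER_SERVICE_HOST):$(EXAMPLE_APP_TIER_SERVICE_PORT_SERVER)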

Now check all the pods again. There should be three running pods with four containers in total. Let's check the logs to see what's going on. Would you just look at that? The application is just plugging right along. It feels pretty good, huh? So where do we go next? Scaling the application is the natural next step. Let's consider how to do that for a moment using our current knowledge. We could increase the number of server pods by changing the name to something like example-app-tier-1, then creating example-app-tier-2, and so on. We could probably glue this together with some scripting. God knows I love bash, but that's probably not exactly what we want. Then what happens when we want to reconfigure the server container? Well, let's see. We could create example-app-tier-v1-1 and then example-app-tier-v2-1, and with some updated scripting we could probably handle that. So what happens when something goes wrong, or what if there's an error in the new version? We could probably handle that by polling the API and checking the status, again with some scripting and glue code on our end, but there should be a better way to do this. If you don't like the scenario I just described, I have good news for you. Kubernetes provides a much better way to do this, called deployments. They are the answer to all of the previous questions and then some, but that's a topic for the next lesson. I think we've gone far enough in this one.

Let's recap this lesson before jumping into the next one. We've covered using namespaces, structuring N-tier applications, service discovery with environment variables, and checking logs. So what do you think? Are you getting excited about what Kubernetes can do? If you aren't yet, then I think the next lesson will definitely put a smile on your face. This lesson shows that Kubernetes makes it easy to structure and design containerized applications with multiple tiers. It sets the stage for the awesome stuff in the next lesson. But I'm a bit biased; that's my personal favorite, so I'll hopefully see you then. Cheers.

About the Author


Adam is a backend/service engineer turned deployment and infrastructure engineer. His passion is building rock-solid services and equally powerful deployment pipelines. He has been working with Docker for years and leads the SRE team at Saltside. Outside of work he's a traveller, beach bum, and trance addict.
