
Service Discovery

The course is part of these learning paths:

  • Certified Kubernetes Administrator (CKA) Exam Preparation
  • Introduction to Kubernetes

Overview
Difficulty: Beginner
Duration: 1h 58m
Students: 3,570
Rating: 4.4/5

Description

Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.

This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.

The source files used in this course are available in the course's GitHub repository.

Learning Objectives 

  • Describe Kubernetes and what it is used for
  • Deploy single and multiple container applications on Kubernetes
  • Use Kubernetes services to structure N-tier applications 
  • Manage application deployments with rollouts in Kubernetes
  • Ensure container preconditions are met and keep containers healthy
  • Learn how to manage configuration, sensitive, and persistent data in Kubernetes
  • Discuss popular tools and topics surrounding Kubernetes in the ecosystem

Intended Audience

This course is intended for:

  • Anyone deploying containerized applications
  • Site Reliability Engineers (SREs)
  • DevOps Engineers
  • Operations Engineers
  • Full Stack Developers

Prerequisites

You should be familiar with:

  • Working with Docker and using it at the command line

Updates

August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics

 

Transcript

We have seen services in action in the context of allowing external access to pods running in the cluster, when we created a NodePort service to access a web server. It's time to see how services are useful within the cluster.

 

We'll split our example microservices application into three pods, one for each tier. Remember that we used the fact that containers in the same pod can communicate with each other using localhost. That isn’t going to work with our multi-pod design. That’s where services come in. Services provide a static endpoint to access pods in each tier. We could directly use the individual pod IP addresses on the container network but that would cause the application to break when pods restarted because their IP address could change. 

 

An added benefit of services is that they also distribute load across the selected group of pods, allowing us to take advantage of scaling the application tier out across multiple server pods. So to realize these benefits, we need to create a data tier service in front of the Redis pod and an application tier service in front of the server pod.

 

There are two service discovery mechanisms built into Kubernetes. The first is environment variables and the second is DNS. Kubernetes will automatically inject environment variables in containers that provide the address to access services. The environment variables follow a naming convention so that all you need to know is the name of the service to access it. Kubernetes also constructs DNS records based on the service name and containers are automatically configured to query the cluster’s DNS to discover services. You will see examples of both techniques in this lesson.
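For example, for a service named data-tier exposing port 6379, Kubernetes injects variables along these lines into containers started after the service exists (the address here is illustrative, not a real cluster IP):

DATA_TIER_SERVICE_HOST=10.96.0.17
DATA_TIER_SERVICE_PORT=6379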

 

For this exercise we're not interested in accessing the support tier pods through a service. Let's see how the manifests look for this new design.

 

We'll start with a new namespace to organize the resources for this lesson. It is called service-discovery.

kubectl create -f 4.1.yaml
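For reference, the namespace manifest itself is only a few lines (a sketch, assuming the file declares nothing beyond the name):

apiVersion: v1
kind: Namespace
metadata:
  name: service-discovery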

Moving on to the data tier, we have a manifest that includes multiple resources. YAML allows us to declare multiple resources in one file by separating them with three hyphens (---). It's possible to cram all the pods and services into one file, but separating them by tier mimics the way we want to manage each tier independently. We have a service and our Redis pod, both named data-tier. The pod has a tier label, which the service uses as its selector. In our example we only have one microservice in the data tier, but that won't be the case in general. You can include as many labels as necessary in the selector to select just what you need; we can get by with the one label selector in our case. Services can also publish more than one port, in which case naming the ports is mandatory to identify them. We only have one, so the name is optional. In YAML, everything after the pound or hashtag symbol (#) is a comment. Comments are for readability and don't affect how Kubernetes interprets the manifest. Lastly, we set the type to ClusterIP, which is the default, so the line could be omitted. ClusterIP creates a virtual IP address inside the cluster for internal access only.
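A minimal sketch of such a manifest follows. The resource names and ports follow the transcript, while the labels and image tag are assumptions rather than copies from the course repository:

apiVersion: v1
kind: Service
metadata:
  name: data-tier
spec:
  ports:
  - port: 6379
    protocol: TCP   # TCP is the default
    name: redis     # optional while there is only one port
  selector:
    tier: data      # matches the pod's tier label below
  type: ClusterIP   # the default, so this line could be omitted
---
apiVersion: v1
kind: Pod
metadata:
  name: data-tier
  labels:
    tier: data
spec:
  containers:
  - name: redis
    image: redis:5.0   # assumed image tag
    ports:
    - containerPort: 6379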

We can now use

kubectl create -f 4.2.yaml -n service-discovery

to create the resources. The command is the same regardless of how many resources are specified in the file, and the resources are created in the order they are listed.

 

Check that the pod is running with 

kubectl get pod -n service-discovery

Then describe the service with

kubectl describe service -n service-discovery data-tier

to make sure the service has a ClusterIP address and one endpoint that corresponds to the data-tier pod selected by the service.

 

Now let's move on to the app tier. Again we have a service and a pod. The service selects pods with the tier label set to app, matching the server pod's declaration. The pod spec is the same as before with one exception: the value of the REDIS_URL environment variable is set using environment variables that Kubernetes sets for service discovery. The value used to be localhost:6379, but now we need to access the data tier service. Kubernetes makes separate environment variables available for the host and the port. The service's cluster IP address is available through the variable whose name follows the pattern of the service name in all capital letters, with hyphens replaced by underscores, followed by _SERVICE_HOST. By knowing the service name you can construct that environment variable name to discover the service IP address. In our example the environment variable is DATA_TIER_SERVICE_HOST. The port environment variable is similar, with HOST replaced by PORT; in our example that is DATA_TIER_SERVICE_PORT. If the port includes a name, you can also append the port name in all caps, again with hyphens replaced by underscores, which gives DATA_TIER_SERVICE_PORT_REDIS in our example. The data tier service only declares one port, so the appended name is optional, but as a best practice you can append the port name to tolerate ports being added to the service in the future.

When using environment variables in the value field, you need to enclose the variable name in parentheses and precede it with a dollar sign. This allows composing container environment variables from the Kubernetes-provided values.
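Putting the pieces together, the app tier container's environment might be declared like this (a sketch: REDIS_URL is the variable named in the transcript, while the redis:// URL scheme is an assumption):

env:
- name: REDIS_URL
  # composed from the service discovery variables Kubernetes injects
  value: redis://$(DATA_TIER_SERVICE_HOST):$(DATA_TIER_SERVICE_PORT_REDIS)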

 

When using environment variables for service discovery, the service must be created before the pod. That is because Kubernetes does not update the environment variables of running containers; they are only set at startup. The service must also be in the same namespace for the environment variables to be available.

 

Let's create the application tier:

kubectl create -f 4.3.yaml -n service-discovery

 

Now on to the support tier. We don't need a service for this tier; just a pod will do, and it contains the counter and poller containers used before. This time we use DNS for service discovery of the app tier service. Kubernetes adds DNS A records for every service. The service DNS names follow the pattern of the service name, a dot, then the service's namespace: in our example, app-tier.service-discovery. However, if the service is in the same namespace, you can simply use the service name on its own, and the poller omits the namespace in this manifest. There is no need to convert hyphens to underscores or use all caps when using DNS service discovery. The cluster DNS resolves the DNS name to the service IP address. You can get service port information using DNS SRV records, but that isn't something we can use in the manifest, so we have to either hard-code the port information or use the service port environment variable. For illustration, the counter uses a hard-coded port and the poller uses the port environment variable. It is possible to use the DNS SRV port record to configure the pod on startup using something called init containers, which we will cover later in the course.
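As an illustration, the two containers' environment sections might look like the following excerpt of the pod spec. The variable name, image names, and the app tier port are assumptions, not taken from the course files:

containers:
- name: counter
  image: example/counter:latest   # hypothetical image
  env:
  - name: API_URL                 # assumed variable name
    value: http://app-tier:8080   # DNS name with a hard-coded (assumed) port
- name: poller
  image: example/poller:latest    # hypothetical image
  env:
  - name: API_URL
    value: http://app-tier:$(APP_TIER_SERVICE_PORT)   # port from the service environment variable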

 

Let's create the support tier:

kubectl create -f 4.4.yaml -n service-discovery

Now check all the pods again. 

kubectl get pods -n service-discovery 

There are three running pods with four containers in total. Let's check the poller logs to see what's going on with our count. Enter

kubectl logs -n service-discovery support-tier poller -f

to follow the logs. Would you just look at that? The application is plugging along. A satisfying result.

 

Let's recap this lesson before jumping into the next one. We've covered structuring N-tier applications using services as interfaces between tiers. We used the ClusterIP type of service for accessing the data and application tiers within the cluster.

 

We also covered how Kubernetes service discovery works with environment variables and DNS. That allowed us to refactor our multi-container pod application into the multi-tier application that we stood up in this lesson.

Remember that when using environment variables for service discovery, the service must be created before the pod. The service must also be in the same namespace.

DNS records overcome the shortcomings of environment variables. DNS records are added to and removed from the cluster's DNS as services are created and deleted. The DNS names for services include the namespace, allowing communication with services in other namespaces, and SRV DNS records are created for service port information.
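For reference, the fully qualified form of a service DNS name follows the standard Kubernetes pattern of service name, namespace, and the default cluster domain suffix:

app-tier.service-discovery.svc.cluster.local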

 

So what do you think? Are you getting excited about what Kubernetes can do? It just keeps getting better with the next lesson. To put it in context, consider how we could scale our current N-tier application. We could increase the number of server pods by changing the pod name to something like app-tier-1, then creating app-tier-2, and so on. We could probably glue this together with some scripting. A bit of extra work, but worth it to make scaling easy.

So then what happens when we want to reconfigure the server container? Well, let's see. We could create app-tier-v1-1 and then app-tier-v2-1, and with some updated scripting we could probably handle that, too. So what happens when something goes wrong, or there's an error in the new version? We could probably handle that by polling the API and checking the status, again with some scripting and glue code on our end, but there should be a better way to do this.

The good news is that there is a much better way. Let’s learn about it in the next lessons on deployments.

About the Author


Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.
