Service Discovery

Overview
Difficulty: Beginner
Duration: 2h 30m
Students: 14615
Ratings: 4.4/5
Description

Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.

This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.

Learning Objectives 

  • Describe Kubernetes and what it is used for
  • Deploy single and multiple container applications on Kubernetes
  • Use Kubernetes services to structure N-tier applications 
  • Manage application deployments with rollouts in Kubernetes
  • Ensure container preconditions are met and keep containers healthy
  • Learn how to manage configuration, sensitive, and persistent data in Kubernetes
  • Discuss popular tools and topics surrounding Kubernetes in the ecosystem

Intended Audience

This course is intended for:

  • Anyone deploying containerized applications
  • Site Reliability Engineers (SREs)
  • DevOps Engineers
  • Operations Engineers
  • Full Stack Developers

Prerequisites

You should be familiar with:

  • Working with Docker and comfortable using it at the command line

Source Code

The source files used in this course are available here:

Updates

August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics

May 7th, 2021 - Complete update of this course using the latest Kubernetes version and topics


Transcript

We've seen Services in action in the context of allowing external access to pods running in the cluster, when we created them with a NodePort. Now it's time to see how Services are useful within the cluster. We'll split our example microservices application into three pods, one for each tier. Remember that containers in the same pod can communicate with each other over localhost, but that won't work with our multi-pod design.

That's where Services come in. Services provide a stable endpoint for accessing the pods in each tier. We could directly use the individual pod IP addresses on the container network, but that would break the application when pods are restarted, because their IP addresses can change. An added benefit of Services is that they also distribute load across the selected group of pods, allowing us to take advantage of scaling the application tier across multiple server pods.

To realize these benefits, we need to create a data tier Service in front of the Redis pod, and an app tier Service in front of the server pod. There are two service discovery mechanisms built into Kubernetes: the first is environment variables, and the second is DNS. Kubernetes automatically injects environment variables into containers that provide the addresses for accessing Services. The environment variables follow a naming convention, so all you need to know is the name of a Service to access it. Kubernetes also constructs DNS records based on the Service name, and containers are automatically configured to query the cluster's DNS to discover those Services. You'll see examples of both techniques in this lesson.

We'll start by creating a new namespace to organize the resources for this lesson. It's called service-discovery, and we'll create it with kubectl create -f 4.1.yaml. Moving on to the data tier, we have a manifest that includes multiple resources. YAML allows us to declare multiple resources in one file by separating them with three hyphens (---). It's possible to cram all the pods and Services into one file, but separating them by tier mimics the way we want to manage each tier independently.
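The namespace manifest is about as simple as Kubernetes resources get. A minimal sketch of what a file like 4.1.yaml could contain (the course's actual lab file may differ):

```yaml
# Sketch of a namespace manifest (illustrative; actual lab file may vary)
apiVersion: v1
kind: Namespace
metadata:
  name: service-discovery
```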

We have a Service, and then our Redis pod; both are named data-tier. The pod has a tier label, which the Service uses as its selector. In our example, we only have one microservice in the data tier, but that won't be the case in general. You can include as many labels as necessary in the selector to select exactly what you need; we can get by with just this one label in this case. Services can also publish more than one port, which makes naming the ports mandatory so they can be told apart. We only have one, so the name is optional.

In YAML, everything after a pound (#) symbol is a comment. Comments are for readability and don't affect how Kubernetes interprets the manifest. Lastly, we set the type to ClusterIP, which is the default, so that line could be omitted. ClusterIP creates a virtual IP inside the cluster for internal access only. We can now run kubectl create -f 4.2.yaml and append the namespace with -n service-discovery.
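Putting the pieces described above together, the data tier manifest could look roughly like the following sketch. The names and labels follow the transcript; the container image is an assumption and the course's lab file may differ:

```yaml
# Sketch of 4.2.yaml: the data tier Service and Redis pod, separated by ---
apiVersion: v1
kind: Service
metadata:
  name: data-tier
spec:
  type: ClusterIP        # the default; shown here for clarity
  selector:
    tier: data           # matches the pod's tier label below
  ports:
    - name: redis        # optional while there is only one port
      port: 6379
---
apiVersion: v1
kind: Pod
metadata:
  name: data-tier
  labels:
    tier: data
spec:
  containers:
    - name: redis
      image: redis:5     # illustrative image tag
      ports:
        - containerPort: 6379
```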

To create the resources, the command is the same regardless of how many resources are specified in the file, and the resources are created in the order they are listed. Let's check that the pod is running with kubectl get pods -n service-discovery. Then describe the Service with kubectl describe service -n service-discovery data-tier to make sure that our Service has a cluster IP, and that one endpoint corresponds to the data tier pod selected by the Service.

Let's move on to the app tier. Again, we have a Service and a pod. The Service selects pods with a tier label matching the server pod's declaration. The pod spec is the same as before with one exception: the value of the REDIS_URL environment variable is set using an environment variable that Kubernetes injects for service discovery. The value used to be localhost:6379, but now we need to reach the data tier Service.

There are separate environment variables made available to you. The Service's cluster IP address is available through an environment variable that follows a pattern: the Service name in all capital letters, with hyphens replaced by underscores, followed by _SERVICE_HOST. By knowing the Service name, you can construct the environment variable name to discover that Service's IP address.

In our example, that environment variable is DATA_TIER_SERVICE_HOST. The port environment variable is similar, with HOST replaced by PORT; in our example, that is DATA_TIER_SERVICE_PORT. If the port has a name, you can also append an underscore and the port name in all caps, hyphens replaced by underscores, which is DATA_TIER_SERVICE_PORT_REDIS in our example. The data tier Service only declares one port, so the appended name is optional.

As a best practice, you can append the port name to tolerate adding more ports to the Service in the future. When referencing environment variables in a value field, you need to enclose the variable name in parentheses and precede it with a dollar sign, as in $(DATA_TIER_SERVICE_HOST). This allows composing container environment variables from the values Kubernetes provides. When using environment variables for service discovery, the Service must be created before the pod, because Kubernetes does not update the variables of running containers. They are only set at startup.
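The composition described above can be sketched as an env entry in the app tier pod spec. The container name and image below are placeholders, not the course's actual values:

```yaml
# Sketch of the app tier pod's environment section (from a file like 4.3.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: app-tier
  labels:
    tier: app
spec:
  containers:
    - name: server
      image: example/server:v1   # placeholder image
      env:
        - name: REDIS_URL
          # $(VAR) is expanded by Kubernetes when the container starts,
          # so the data-tier Service must already exist in this namespace.
          value: redis://$(DATA_TIER_SERVICE_HOST):$(DATA_TIER_SERVICE_PORT_REDIS)
```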

The Service must also be in the same namespace for the environment variables to be available. So let's create our application tier with kubectl create -f 4.3.yaml -n service-discovery. Now on to the support tier. We don't need a Service for this tier; a pod will do, and it contains the counter and poller containers used before. This time we're going to use DNS for service discovery of the app tier Service.

Kubernetes adds a DNS A record for every Service. The Service DNS names follow the pattern service-name.service-namespace; in our example, that is app-tier.service-discovery. However, if the client is in the same namespace as the Service, you can simply use the Service name. The poller omits the namespace in this manifest.

There's no need to convert hyphens to underscores or use all caps when using DNS service discovery. The cluster DNS resolves the DNS name to the Service's IP address. You can get Service port information using DNS SRV records, but that isn't something we can use in the manifest file, so we have to either hard-code the port or use the Service port environment variable. For illustration, the counter uses a hard-coded port, and the poller uses the port environment variable.
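The contrast between the two containers could look like this sketch of the support tier pod. The image names and the API_URL variable name are placeholders; the transcript only tells us that the counter hard-codes the port and the poller reads it from the injected variable:

```yaml
# Sketch of the support tier pod (from a file like 4.4.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: support-tier
  labels:
    tier: support
spec:
  containers:
    - name: counter
      image: example/counter:v1   # placeholder image
      env:
        - name: API_URL
          # Fully qualified DNS name with a hard-coded port
          value: http://app-tier.service-discovery:8080
    - name: poller
      image: example/poller:v1    # placeholder image
      env:
        - name: API_URL
          # Same-namespace shorthand; port comes from the injected variable
          value: http://app-tier:$(APP_TIER_SERVICE_PORT)
```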

It is possible to use the DNS SRV port record to configure the pod on startup using something called init containers, but we'll cover that later in this course. So let's create the support tier with kubectl create -f 4.4.yaml -n service-discovery.

Now let's check all the pods with kubectl get pods -n service-discovery. There are three running pods with four containers in total. Let's check the poller's logs to see what's going on with our counter, using kubectl logs -n service-discovery support-tier -c poller, and follow them with --follow. Look at that: the application is just plugging away, and that is such a satisfying result.

Let's recap this lesson before jumping into the next one. We've covered structuring an application's tiers using Services as the interfaces between them. We used the ClusterIP type of Service because we're accessing the data and application tiers from within the cluster. We also covered how Kubernetes service discovery works with environment variables and DNS. That allowed us to refactor our multi-container pod application into the multi-tier application that we stood up in this lesson.

When using environment variables for service discovery, the Service must be created before the pod, because the variables are only set at container startup. The Service must also be in the same namespace as the pod. DNS records overcome these shortcomings of environment variables: DNS records are added to and removed from the cluster's DNS as Services are created and destroyed, and the DNS names for Services include the namespace, allowing communication with Services in other namespaces. Finally, SRV DNS records are created for Service port information.

So what do you think? Are you getting excited about what Kubernetes can do? It just keeps getting better, and it's going to get better in the next lesson. To put it in context, consider how we would scale our current application. We could increase the number of server pods by renaming the pod to something like app-tier-1, then creating app-tier-2, and so on. We could glue this all together with some scripting, a bit of extra work to make scaling easy. But then what happens when we want to reconfigure the server container? Well, let's see.

We could create an app-tier-v1-1 and then an app-tier-v2-1, with some updated scripting. That could probably handle it, but what happens when something goes wrong, or there's an error in the new version? We could probably handle that too by polling the API and checking the status, with some more scripting and some more code, but there should be a better way, which is exactly what the next lesson covers: Deployments. So let's learn about them in the next lesson.

About the Author
Jonathan Lewey
DevOps Content Creator
Students: 17256
Courses: 8
Learning Paths: 3

Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in networking and operations in the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan holds a number of certifications, including Cisco Certified Network Associate (R&S / Sec), AWS Certified Developer Associate, AWS Certified Solutions Architect, and a certification in Project Management.