2h 30m

Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.

This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.

Learning Objectives 

  • Describe Kubernetes and what it is used for
  • Deploy single and multiple container applications on Kubernetes
  • Use Kubernetes services to structure N-tier applications 
  • Manage application deployments with rollouts in Kubernetes
  • Ensure container preconditions are met and keep containers healthy
  • Learn how to manage configuration, sensitive, and persistent data in Kubernetes
  • Discuss popular tools and topics surrounding Kubernetes in the ecosystem

Intended Audience

This course is intended for:

  • Anyone deploying containerized applications
  • Site Reliability Engineers (SREs)
  • DevOps Engineers
  • Operations Engineers
  • Full Stack Developers


You should be familiar with:

  • Working with Docker and comfortable using it at the command line

Source Code

The source files used in this course are available here:


August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics

May 7th, 2021 - Complete update of this course using the latest Kubernetes version and topics



So in our previous lesson, we created a webserver Pod, but at the moment it's inaccessible except to other Pods in the container network, which isn't really useful. Even for Pods inside the container network, it isn't very convenient to access the webserver as it is, because you'd have to find the IP address of the webserver Pod and keep track of any changes to it. Remember that Kubernetes will reschedule Pods onto other nodes, for example if a node fails.

But what happens if you have a Pod that fails? Once the Pod is rescheduled, it will be assigned an IP address from the available pool of addresses, and not necessarily the same IP address it had before. To overcome all of these networking issues, Kubernetes employs services. Thinking back to the Kubernetes definition of a service: a service defines networking rules for accessing Pods in the cluster, and from the internet. You can declare a service to access a group of Pods using labels. In our example, we can use the app label of the webserver Pod as the service's target. Clients can then access the service at a fixed address, and the service's networking rules will direct client requests to a Pod in the selected group of Pods.

In our example there is only one Pod, but in general there can be many. The service will also distribute incoming requests across the Pods to balance the load. Let's visualize how we'll use the service to solve our problem of accessing the webserver running in the Pod.

First, we're gonna create a service that selects Pods with the app=webserver label. That will cause the service to act as a kind of internal load balancer across those Pods. The service will also be given a static IP address and Port by Kubernetes that will allow us to access the service from outside of the container network, and even outside of the cluster.
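As a reminder, the service can only select the webserver Pod if that Pod carries the matching label. A minimal sketch of such a Pod manifest is below; the names and image are assumptions for illustration, not the exact files from the lesson:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webserver
  labels:
    app: webserver       # the label the service's selector will match
spec:
  containers:
    - name: nginx
      image: nginx       # serves the HTML we'll curl later
      ports:
        - containerPort: 80   # the port the service will target
```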

Let's see how we do it. Our first three fields are the same as before, except the kind is now Service. The metadata uses the same label as the Pod, since it relates to the same application. This isn't required, but it is a good practice to stay organized. Now for the spec: the selector is the important field. The selector defines the labels to match Pods against. In this example, it targets Pods with the app=webserver label, which will select the Pod we've already created. Services must also define port mappings. This service targets port 80, which is the value of the Pod's container port.

Lastly, there is the optional type field. This value defines how to expose the service, and we're gonna set it to NodePort. NodePort allocates a port for this service on each node in the cluster. By doing this, you can send a request to any node in the cluster on the designated port and reach the service. The designated port will be chosen from the set of available ports on the nodes, unless you specify a nodePort as part of the spec's ports.

Usually it is better to let Kubernetes choose the NodePort from the available ports, to avoid the chance that your specified port is already taken; that would cause the service to fail to create. In future lessons, we're gonna cover alternative values for the service type, so don't worry.

Now, let's create the service with our familiar kubectl create -f 2.1, and then list the services with kubectl get services. Notice that both commands are really familiar. Kubectl follows a simple design pattern, which makes it easy to manage and explore different resources. Kubectl displays the name, Cluster-IP, External-IP, Ports, and Age of each service.

Let's break it down, starting with Cluster-IP. This is the private IP of the service inside the cluster. The External-IP is not available for NodePort services, but if it were, it would be the public IP of the service. Note that in the Ports column, Kubernetes automatically allocated a port from the range reserved for NodePorts, which is commonly port numbers between 30000 and 32767.

Let's describe the service to see what other information is available, with kubectl describe service webserver. Just like before, you'll see a bunch of useful debugging information. The NodePort was shown in the get services output and appears in this output as well. But we can also see the Endpoints, which are the addresses of each Pod in the selected group, along with the container port. If there were multiple Pods selected by the label, you would see each of them listed here.

Kubernetes automatically adds and removes these endpoints as matching Pods are created and deleted, so you don't need to do anything to manage them. Now that we know the port the NodePort service is on, we need a node's IP, and it can be any node's IP. One way to list them is to grep for the address in the describe nodes output, adding the -A option to include lines after the match.

So let's do that: kubectl describe nodes, and we're gonna pipe it to grep -i address -A 1. Nodes are resources in the cluster, just like Pods and services, so you can use the get and describe commands on them. You can check out all the information in the describe output on your own; for right now, we just need those IPs. The IP addresses are the internal or private IPs of the nodes inside our cluster.

Our lab VM is in the same virtual network, so it can reach the nodes using these addresses, and I've allowed incoming traffic on the NodePort range from the lab instance in the firewall rules to allow the request. Choose any of the addresses and use the curl command to send an HTTP request to the IP with the NodePort appended. That is the raw HTML output being served up by Nginx. You can try any of the node IPs and get the exact same result, all thanks to a Kubernetes service.

In this lesson, we saw that services allow us to expose Pods using a static address, even though the addresses of the underlying Pods may be changing. We also specifically used a NodePort service to gain access from outside of the cluster on a static port that is reserved on each node in the cluster. This allowed us to access the service by sending a request to any of the nodes, not just the node that is running the Pod. There is more to say about Pods and services, and we will use a more complex application to illustrate some of the remaining topics in the next couple of lessons. Thinking microservices, we'll start by covering multi-container Pods, so continue on when you're ready.

About the Author
Learning Paths

Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in the networking and operations side of the traditional information technology industry, he has also led the creation of applications for corporate integrations, and served as a Cloud Engineer supporting developer teams. Jonathan holds a number of specialties, including Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, AWS Solutions Architect, and a certification in Project Management.