Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
The course is part of these learning paths
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes, including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
The source files used in this course are available in the course's GitHub repository.
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Learn how to manage configuration data, sensitive data, and persistent data in Kubernetes
- Discuss popular tools and topics surrounding Kubernetes in the ecosystem
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
You should be familiar with:
- Working with Docker, and comfortable using it at the command line
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
We have created a web server pod, but at the moment it is only accessible from other pods in the container network. Even for those pods it isn’t very convenient to access the web server as it is, because pods need to find the IP address of the web server pod and keep track of any changes to it. Remember that Kubernetes will reschedule pods onto other nodes, for example if a node fails. Once a pod is rescheduled it is assigned an IP address from the available pool of addresses, and not necessarily the same IP address it had before.
To overcome all of these challenges, Kubernetes has services. Thinking back to the Kubernetes definition of a service: a service defines networking rules for accessing pods in the cluster and from the internet. You can declare a service to access a group of pods using labels. In our web server example, we can use our app label to select the web server pod as the service target. Clients can access the service at a fixed address, and the service’s networking rules will direct client requests to a pod in the selected group of pods. In our example there is only one pod, but in general there can be many, and the service will also distribute the requests across the pods to balance the load.
Let's visualize how we’ll use the service to solve our problem of accessing the web server running in the pod.
We’ll create a service that selects pods with the app=webserver label. That will cause the service to act as a kind of internal load balancer across those pods. The service will also be given a static IP address and port by Kubernetes that will allow us to access the service from outside of the container network, and even from outside of the cluster, such as from the Lab VM. Let’s see how to do it.
The first three fields are the same as before. The kind is set to Service. The metadata uses the same label as the pod, since the service is related to the same application. This isn’t required, but it is a good practice to stay organized. Now for the spec. The selector defines the labels to match pods against. This example targets pods labeled with app=webserver, which will select the pod we created. Services must also define port mappings. This service targets port 80, which is the value of the pod's container port. Last is the optional type, which defines how to actually expose the service. We'll set the value to NodePort. NodePort allocates a port for this service on each node in the cluster. By doing this, you can send a request to any node in the cluster on the designated port and reach the service. The designated port will be chosen from the set of available ports on the nodes, unless you specify a nodePort as part of the spec's ports. Usually it is better to let Kubernetes choose the node port from the available ports to avoid the chance of your specified port already being taken, which would cause the service to fail to create. Future lessons will cover alternate values of the service type.
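Put together, a manifest matching this description could look like the following sketch. The label, target port, and NodePort type come from this lesson; the metadata name is an assumption for illustration:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: webserver   # same label as the pod, to stay organized
  name: webserver    # assumed name for this example
spec:
  selector:
    app: webserver   # match pods carrying this label
  ports:
  - port: 80         # the pod's container port
    # nodePort: 30080  # optional; omit to let Kubernetes choose a free port
  type: NodePort     # expose the service on a port on every node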
Create the service with the familiar create command, then list the services:
kubectl create -f 2.1
kubectl get services
Notice both commands are familiar. kubectl follows a simple design pattern, which makes it easy to manage and explore different resources. kubectl displays the name, cluster IP, external IP, ports, and age of each service. The cluster IP is the internal, or private, IP for each service. The external IP is not available for NodePort services, but if it were, it would be the public IP for the service. Note the ports column. Kubernetes automatically allocated a port in the range reserved for node ports, which is commonly port numbers between 30000 and 32767. Describe the service to see what other information is available:
kubectl describe service webserver
Just like before, you'll see a bunch of useful debugging information. The node port shown in the get services output also appears here. You can also see the Endpoints, which list the address of each pod in the selected group along with the container port. If multiple pods were selected by the label, you would see each of them listed here. Kubernetes automatically adds and removes endpoints as matching pods are created and deleted, so you don’t need to worry about that.
Now that we know the node port, we need a node’s IP. It can be any node’s IP. One way to list them is to grep for address in the describe nodes output, adding the -A (capital A) option to include the line after each match:
kubectl describe nodes | grep -i address -A 1
Nodes are resources in the cluster just like pods and services, so you can use get and describe on them. You can check out all the information in the describe output on your own; right now we just need those IPs. The addresses shown are the internal, or private, IPs of the nodes in the cluster. The Lab VM is in the same virtual network, so it can reach the nodes using these addresses, and I’ve allowed incoming traffic on the node port range from the Lab instance in the firewall rules to allow the request. Choose any of the addresses and use curl to send an HTTP request to the IP with the node port appended.
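For example, if one of the node addresses were 10.0.0.100 and the allocated node port were 30080 (both placeholders; substitute the values reported by your own cluster), the request would look like:

curl 10.0.0.100:30080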
That is the raw HTML output being served up by nginx. You can try any of the node IPs and get the same result, all thanks to the Kubernetes service.
In this lesson we saw that
Services allow us to expose pods using a static address even though the addresses of the underlying pods may be changing.
We also specifically used a NodePort service to gain access to the service from outside of the cluster on a static port that is reserved on each node in the cluster. This allowed us to access the service by sending a request to any of the nodes, not only the node that’s running the nginx pod.
There is more to say about pods and services. We will use a more complex application to illustrate some of the remaining topics in the next couple of lessons. Think microservices.
We’ll start by covering multi-container pods. Continue on when you’re ready.
About the Author
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.