This course provides an introduction to using Kubernetes to deploy and manage containers.
Be able to recognize and explain the Kubernetes service
Be able to explain and implement a Kubernetes container
Be able to orchestrate and manage Kubernetes containers
This course requires a basic understanding of cloud computing. We recommend completing the Google Cloud Fundamentals course before completing this course.
Hello and welcome back to the Introduction to Kubernetes course from Cloud Academy. I'm Adam Hawkins and I'm your instructor for this lesson.
This is our first hands-on lesson. We've been building up to this point through the past two lessons and now it's finally time to get our hands dirty. To do that we'll cover kubectl fundamentals, Kubernetes pods, and also Kubernetes services. Our objective is to deploy nginx to Kubernetes.
You'll need access to a running Kubernetes cluster for this lesson. I recommend that you install Minikube for this lesson, and we'll also use it in all of the lessons going forward. Minikube boots a running Kubernetes cluster inside a VM, and it also includes kubectl and a running Docker daemon that you can point your client to.
So, my friend, this is it. Time to make the magic happen. Let's rock. I like to run a simple command to test access to the cluster. I tend to use a kubectl get pods command. This will be our first kubectl command. We're going to see a lot more in this lesson, so please pay close attention to the commands and see if you can pick up on the pattern. This command lists all the pods in the default namespace. We can see there's nothing running. Let's change that. Start by creating a new pod.yml file with your editor. We'll write the file together.
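As a quick sketch, the access check looks like this (the exact output depends on your cluster):

```shell
# List pods in the default namespace; this confirms kubectl can reach the cluster.
kubectl get pods
# On a fresh cluster there is nothing to show yet.
```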
All Kubernetes resource files start off in the same way. There is the API version, kind, and metadata, followed by the spec. Kubernetes supports multiple API versions, so you must specify the version in use. Kind indicates exactly what you think it does: this is what the resource is. Metadata includes the name and labels. Here we set the name to hello-world and set a label for the same. More on labels later. Spec is the data according to the kind that matches the defined API version. It's essentially where all of the meat goes. You can refer to the official API docs for complete info on all versions and supported fields. We're only going to use a few in this example. Here we're writing a pod spec. The pod spec defines the containers in the pod. Common settings are things like the image, command options, environment variables, and ports. Pods may contain multiple containers. This pod only has a single container. Start by setting the name and image. We'll use the latest tag because we don't need a particular version for this exercise. Now this should be enough to actually create the very first pod. Save this file and use kubectl to create the pod.
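Assembled, the pod.yml described above might look like the following sketch (the name and label values follow the transcript; the latest tag is used as stated):

```yaml
# pod.yml -- a single-container pod running nginx
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  containers:
    - name: hello-world
      image: nginx:latest
```

Create it with `kubectl create -f pod.yml`.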
Now that we've created the pod we can rerun the same command to see our result. Kubectl shows the name, number of ready containers, their state, restarts, and the age of all the pods in the cluster. Note that this pod only has one container, so ready shows one of one. You should memorize this command since you'll use it all of the time. And I really mean all of the time. Let's get some more detailed information about this particular pod. Earlier we used the get command to get a list of pods. Now use the describe command to get complete information.
Hoo, that is a lot of info. Luckily, the most useful debugging information is actually at the bottom. The event log shows what went right or even what went wrong. Notice all the bookkeeping information at the top of the screen. The describe command shows everything in the spec, defaults assigned by Kubernetes and even some things Kubernetes has edited for you. Did you happen to see those secrets? So now that the container is running, how can we curl it? The kubectl describe output shows the IP but nothing about ports. You might try a port 80 on the IP. Except that wouldn't actually work. Why do you think that is? Well we've not told Kubernetes what ports to expose in that container. Let's add those now.
Reopen pod.yml in your editor. Each container may expose multiple ports. This is a straightforward configuration. The nginx image uses port 80 over TCP. Make the changes and save the file.
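The updated pod.yml might look like this sketch, with the container port declared:

```yaml
# pod.yml -- now with the container port declared
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  containers:
    - name: hello-world
      image: nginx:latest
      ports:
        - containerPort: 80   # the port nginx listens on
          protocol: TCP
```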
Kubernetes can apply certain changes to different kinds of resources. Unfortunately, Kubernetes cannot update ports on a running pod, so we need to delete the pod and recreate it. Run kubectl delete to delete the pod, followed by kubectl create to recreate it, and then describe the pod again.
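The delete-and-recreate sequence looks like this (the pod name hello-world follows the manifest we wrote):

```shell
# Ports cannot be changed on a running pod, so delete and recreate it.
kubectl delete pod hello-world
kubectl create -f pod.yml
kubectl describe pod hello-world
```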
You don't need to describe the pod every single time, I just prefer to do this to see the result of my work and make sure that everything went off as I expected. So now you may think to try a port 80 on that noted IP but it still wouldn't work. Why do you think that is? Well, we've not defined any rules for how to access the pod. Do you remember services from the last lesson? Well we'll need to create one of those.
Create a new file named service.yml in your editor. Remember the term service from the previous lesson? In case you don't, here's the definition from our vocab session: a service defines networking rules to expose pods to other networks. We can use a service to expose the pod to the internet. Or in this case, to us, so we can curl it. Let's create a service that exposes our pod to the outside world. We'll also write this file together. The first three fields are the same as before. This time kind is set to Service. Time to fill in the spec. The selector, remember, another Kubernetes vocab word, defines the labels to match pods against. This example targets pods labeled with app set to hello-world. Services must also define port mappings. This service targets port 80. This is the value from pod.yml, if you remember. Lastly, set the optional type. This value defines how to expose the service. We'll set the value to NodePort. NodePort allocates a port for this container on each node in the cluster. Future lessons will cover alternate values.
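Put together, service.yml might look like the following sketch (the service name hello-world is an assumption; the transcript doesn't name it):

```yaml
# service.yml -- expose the hello-world pod via a NodePort service
apiVersion: v1
kind: Service
metadata:
  name: hello-world     # assumed name; pick any valid name
spec:
  type: NodePort        # allocates a port on every node in the cluster
  selector:
    app: hello-world    # matches the label set in pod.yml
  ports:
    - port: 80          # the port targeted on the pod
      protocol: TCP
```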
Next, create the service with kubectl and get the services. Notice how we used kubectl get services. Kubectl follows a simple design pattern which makes it easy to manage and explore different resources. Hopefully you're picking up on the pattern. Kubectl displays the name, cluster IP, external IP, ports, and age of each service. Cluster IP is the internal, AKA private IP, for each service. The external IP is not available for node port services. But if it were then this would be the public IP for a service. Note the ports column. Kubernetes automatically allocated a port in the 30,000 plus range. Describe the service to see what other information is available. Just like before, you'll see a bunch of useful debugging information. The port was shown in the get services output and also in this output. Now that we have the port we need the node IP. Kubectl also includes commands to explore the cluster itself. You can get nodes just like any other resource.
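The sequence above, sketched in shell (the service name hello-world follows the manifest; the allocated NodePort will differ on your cluster):

```shell
kubectl create -f service.yml
# The PORT(S) column shows the pod port mapped to an auto-allocated
# NodePort in the 30000+ range.
kubectl get services
kubectl describe service hello-world
```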
I'm using Minikube, so you can see that my cluster only has a single node. Describe the node to get more detailed information. You'll see a ton of useful information again. Specifically, pods and their CPU and memory stats. There's also host and Kubernetes system metadata. However, we're not really after that stuff right now. Look for the address field, or rerun the command and pipe it to grep.
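Sketched as commands, exploring the cluster looks like this:

```shell
# Nodes are a resource like any other.
kubectl get nodes
# Describe all nodes and pull out just the address field with grep.
kubectl describe nodes | grep -i address
```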
This is the node's public IP, or in my case, the IP of my Minikube virtual machine on my host. Alternatively, you may run the minikube ip command. We took this route to demonstrate all the different kinds of kubectl commands you can use to explore different aspects of your Kubernetes cluster. Now that we're armed with the IP and port, we're finally ready to fire off our curl.
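The final request might look like this (the NodePort value here is hypothetical; substitute the one kubectl get services reported on your cluster):

```shell
NODE_IP=$(minikube ip)   # or the address from kubectl describe nodes
NODE_PORT=31234          # hypothetical; use your allocated NodePort
curl "http://${NODE_IP}:${NODE_PORT}"
```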
Let's pull back the curtain on what Kubernetes has actually done for us. Kubernetes has scheduled a container on an available node. It's also pulled the image, provisioned an internal load balancer, connected the nginx pod to that load balancer, and allocated a unique port for the container and the load balancer. This is a common workflow for this type of application. Kubernetes orchestrated the entire process for us. Hopefully you can see how this kind of declarative workflow scales to larger applications as well.
This is a great waypoint in our Kubernetes journey. Here's what we covered in this lesson. We used kubectl for CRUD operations. We created a web server pod. We exposed that pod with a service. And we also navigated our cluster topology with kubectl. There is so much more to cover about pods and services. Unfortunately, we can't do it with this simple hello world application. We need a more complex application for that. Think microservices.
The next lesson covers multi-container pods and interservice discovery. I don't know about you, but I think it's going to be a blast. Hopefully I'll see you then.
About the Author
Adam is a backend/service engineer turned deployment and infrastructure engineer. His passion is building rock solid services and equally powerful deployment pipelines. He has been working with Docker for years and leads the SRE team at Saltside. Outside of work he's a traveller, beach bum, and trance addict.