Create K8s Cluster
Build Container Images
Create K8s Resources
End-to-End Application Test
K8s Network Policies
K8s Deployment Update Challenge
This training course is designed to help you master the skills of deploying cloud-native applications into Kubernetes.
Observe firsthand the end-to-end process of deploying a sample cloud-native application into a Kubernetes cluster. By taking this course you'll not only see the skills required to perform a robust, enterprise-grade deployment into Kubernetes, but you'll also be able to apply them yourself, as all code and deployment assets are available for you to perform your own deployment.
This training course provides you with in-depth coverage and demonstrations of the following Kubernetes resources:
- Ingress/Ingress Controller
- Persistent Volume
- Persistent Volume Claim
- Headless Service
What you'll learn:
- Understand the basic principles of deploying cloud-native applications into a Kubernetes cluster
- Understand how to set up and configure a locally provisioned Kubernetes cluster using Minikube
- Understand how to work with and configure many of the key Kubernetes cluster resources such as Pods, Deployments, Services, etc.
- Manage deployments and Kubernetes cluster resources through their full lifecycle
This training course provides you with many hands-on demonstrations where you will observe firsthand how to:
- Create and provision a Minikube Kubernetes cluster
- Install the Cilium CNI plugin
- Build and deploy Docker containers
- Create and configure Kubernetes resources using kubectl
Prerequisites:
- A basic understanding of containers and containerization
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
Intended audience:
- Anyone interested in learning Kubernetes
- Software Developers interested in Kubernetes containerization, orchestration, and scheduling
- DevOps Practitioners
- [Instructor] Okay, welcome back. In this lecture, I'm going to provide a quick demonstration of how to test and validate your network policies, ensuring that when deployed they behave in the desired manner. When we're creating and setting up network policies within the cluster, it's incredibly useful to jump into the cluster and act as if we were one of the pods, generating network traffic to test the rules defined within the network policies. Let's see how this is done. Before we start, keep in mind that the diagram, as seen here, represents the current status of our network policy deployment.
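For reference, a policy like the allow-to-mongo-from-api policy exercised in this lecture might look like the following sketch. The namespace and the `role: db` label on the mongo pods are assumptions based on this demo, not confirmed by the transcript:

```yaml
# Hypothetical sketch of the allow-to-mongo-from-api NetworkPolicy.
# The namespace and the db-side label are assumptions for illustration.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-to-mongo-from-api
  namespace: cloudacademy    # assumed namespace name
spec:
  podSelector:
    matchLabels:
      role: db               # assumed label on the mongo pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: api      # the label we simulate in this demonstration
      ports:
        - protocol: TCP
          port: 27017        # default MongoDB port
```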
Okay, to begin with, let's retrieve the currently deployed set of pods within the Cloud Academy namespace. In particular, I want to view the pod IP addresses, therefore I'll run the command kubectl get pods -o wide. Next, I'm going to run the following kubectl run command to spin up a pod based on the busybox image and, importantly, apply the label role=api to it. Before I execute this command, let's review each of the parameters to ensure you understand what each one accomplishes.
- --rm, this automatically removes the container when it exits. We add this to keep the cluster clean, since this pod is being used on a temporary basis, just for testing.
- -i and --tty, often presented in the combined form -it, are used to create an interactive shell connected to the container's stdin.
- --image busybox, this is the Docker image we'll launch. BusyBox is an official Docker image containing many common UNIX utilities. In particular, we'll be making use of the telnet utility.
- --restart, the restart policy to apply when the container exits. We don't need it to restart.
- --labels, metadata in the form of key-value pairs to attach to the pod.
In this demonstration, we're using the label role=api to simulate being an api pod. This allows us to test any of the network policies which declare an ingress pod selector with the same label. Finally, the remaining double dash at the end of the command separates the arguments you want to pass to the container from the kubectl arguments. In this case, we are simply requesting that sh, a command language interpreter, be launched. Okay, we're all good to go. Let's execute this command and simulate being an API pod.
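Putting the flags above together, the command might look like the following sketch. The pod name api-sim and the namespace name are assumptions for illustration; a live cluster is required to run it:

```shell
# Spin up a temporary busybox pod labeled role=api and drop into a shell.
# "api-sim" (pod name) and "cloudacademy" (namespace) are assumed names.
kubectl run api-sim -it \
  --rm \
  --image=busybox \
  --restart=Never \
  --labels="role=api" \
  --namespace=cloudacademy \
  -- sh
```

The --rm flag means the pod is cleaned up automatically the moment the shell exits, which keeps the cluster tidy during testing.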
Excellent, the pod has started and we're now inside it. Let's now use the telnet command and attempt a network connection to the mongo-0 pod, whose pod IP address is 10.43.114.71, on port 27017. This is the default port that the mongo server listens on. And, as expected, this has connected successfully. Again, I'll highlight the fact that we're simulating being an API pod, by virtue of the role=api label. Okay, let's exit the current telnet session by entering the key sequence Ctrl + C, followed by e, and then exit the pod by typing exit, like so.
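The connectivity test from inside the pod can be sketched as follows. The IP address is the mongo-0 pod IP from this particular demo run; pod IPs are ephemeral, so yours will differ:

```shell
# Run inside the busybox pod. BusyBox's telnet applet is used as a simple
# TCP connectivity probe against the mongo-0 pod (IP from this demo run).
telnet 10.43.114.71 27017
# A successful connection indicates the allow-to-mongo-from-api policy
# is permitting ingress from pods labeled role=api.
# Press Ctrl+C, then e, to leave the telnet session, then type:
exit
```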
I'll now clear the terminal and this time let's view all of the currently deployed network policies by running the command kubectl get netpol. Here we can see the three network policies deployed during the previous lecture. Let's now delete the allow-to-mongo-from-api network policy by running the command kubectl delete netpol allow-to-mongo-from-api. Okay, this looks good. The network policy has been successfully deleted. Again, let's clear the terminal and repeat the previous kubectl run command, where we again attempt the same telnet connection to the same MongoDB pod on port 27017. This time you can see the connection is indeed denied. This is based on the fact that the default deny policy is now blocking the traffic.
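The delete-and-retest sequence can be sketched as follows. The namespace flag is an assumption based on this demo, and a live cluster is required:

```shell
# List the deployed network policies (namespace name assumed).
kubectl get netpol --namespace=cloudacademy

# Remove the policy that permits api -> mongo traffic.
kubectl delete netpol allow-to-mongo-from-api --namespace=cloudacademy

# Confirm the policy is gone; only the remaining policies should be listed.
kubectl get netpol --namespace=cloudacademy
```

With the allow rule gone, the cluster's default-deny policy takes over, so the same telnet probe from the role=api pod now fails.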
Okay, let's roll back this last delete and reapply all network policies. We'll run the command kubectl apply -f and give it the netpol/ directory name. Finally, we'll jump back into the browser and retest the application by reloading it, to ensure that it is functional again.
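The rollback step might look like the following sketch. Passing a directory to kubectl apply -f applies every manifest file in it, which restores the deleted policy; the namespace flag is an assumption:

```shell
# Reapply every NetworkPolicy manifest in the netpol/ directory.
kubectl apply -f netpol/

# Verify that all three policies are back in place (namespace name assumed).
kubectl get netpol --namespace=cloudacademy
```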
Okay, that concludes the network policy testing demonstration. The key takeaway from this lecture is knowing how to simulate and test network connections that your network policies have been explicitly designed to allow or deny.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, and Kubernetes.