The course is part of this learning path
Create K8s Cluster
Build Container Images
Create K8s Resources
End-to-End Application Test
K8s Network Policies
K8s Deployment Update Challenge
This training course is designed to help you master the skills of deploying cloud native applications into Kubernetes.
Observe firsthand the end-to-end process of deploying a sample cloud native application into a Kubernetes cluster. You'll not only see the skills required to perform a robust, enterprise-grade deployment into Kubernetes, but you'll also be able to apply them yourself, as all code and deployment assets are available for you to perform your own deployment.
This training course provides you with in-depth coverage and demonstrations of the following Kubernetes resources:
- Ingress/Ingress Controller
- Persistent Volume
- Persistent Volume Claim
- Headless Service
What you'll learn:
- Understand the basic principles of deploying cloud native applications into a Kubernetes cluster
- Understand how to set up and configure a locally provisioned Kubernetes cluster using Minikube
- Understand how to work with and configure many of the key Kubernetes cluster resources, such as Pods, Deployments, and Services
- Understand how to manage deployments and Kubernetes cluster resources through their full lifecycle
This training course provides many hands-on demonstrations in which you'll observe firsthand how to:
- Create and provision a Minikube Kubernetes cluster
- Install the Cilium CNI plugin
- Build and deploy Docker containers
- Create and configure Kubernetes resources using kubectl
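The hands-on steps above can be sketched as shell commands. This is an illustrative outline only: the resource sizes, image tag, and manifest directory are assumptions, not the course's exact values.

```shell
# Illustrative sketch of the course workflow (assumed names and sizes).
# Guarded so it no-ops on machines without minikube/docker installed.
IMAGE="sample-app:1.0"   # hypothetical image tag

if command -v minikube >/dev/null 2>&1; then
  # Provision a local cluster with no default CNI so Cilium can manage networking.
  minikube start --cni=false --cpus=2 --memory=4096

  # Install the Cilium CNI plugin (the Cilium CLI is one common approach).
  command -v cilium >/dev/null 2>&1 && cilium install

  # Build a Docker container image and load it into the cluster.
  docker build -t "$IMAGE" . && minikube image load "$IMAGE"

  # Create Kubernetes resources from local manifests using kubectl.
  kubectl apply -f ./k8s/
fi
```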
Prerequisites:
- A basic understanding of containers and containerisation
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
Intended audience:
- Anyone interested in learning Kubernetes
- Software Developers interested in Kubernetes containerisation, orchestration, and scheduling
- DevOps Practitioners
- [Instructor] Okay, welcome back. In this lecture we'll deploy the Nginx Ingress Controller into the cluster. This will allow us to direct external HTTP-based traffic to our front-end and API services hosted in the cluster. The following Kubernetes resources will be created to support the setup of the Nginx Ingress Controller.
A namespace dedicated to the Nginx Ingress Controller; three config maps used to store various config properties; a service account used by the Nginx Ingress Controller; a cluster role used to grant the required privileges to the Nginx Ingress Controller; a role binding and a cluster role binding; and finally a deployment, which installs the actual Nginx Ingress Controller pod. We kick off the deployment by running kubectl apply -f with an HTTP URL that points to where the Nginx Ingress Controller YAML file is located.
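Spelled out as a command, the step looks like the following. The manifest URL below is a placeholder, since the lecture doesn't state the exact location; substitute the URL supplied with the course assets.

```shell
# Placeholder URL -- substitute the manifest location supplied with the course.
MANIFEST_URL="https://example.com/nginx-ingress-controller.yaml"

# Apply every resource defined in the manifest (namespace, config maps,
# service account, RBAC resources, and the controller deployment).
# Guarded so it no-ops when no cluster is reachable.
if kubectl cluster-info >/dev/null 2>&1; then
  kubectl apply -f "$MANIFEST_URL"
fi
```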
Excellent, we can see that all of the resources have been created successfully. We can now examine each of the resources created in the new ingress-nginx namespace by running kubectl get all -n ingress-nginx. Here we can clearly see the Nginx Ingress Controller pod in a Running status. We can also see the respective deployment and replica set resources. Next, I'll clear the terminal and take a closer look at the deployment by running kubectl get deploy nginx-ingress-controller -n ingress-nginx. Looks good; however, we need to perform a minor tweak on this resource to complete its networking setup, ensuring that it will play well with the Cilium network policies that we deploy later on.
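The two inspection commands from this step, assuming the controller deployment is named nginx-ingress-controller as in the lecture:

```shell
NS="ingress-nginx"

# Guarded so the commands no-op when no cluster is reachable.
if kubectl cluster-info >/dev/null 2>&1; then
  # List everything in the controller's namespace: pod, service,
  # deployment, and replica set.
  kubectl get all -n "$NS"

  # Look at just the controller deployment.
  kubectl get deploy nginx-ingress-controller -n "$NS"
fi
```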
To perform this update, we first need to retrieve the current configuration, write it out as YAML, and save it to the file system. For this requirement, I'll run kubectl get deploy nginx-ingress-controller -n ingress-nginx -o yaml. Notice the use of the -o parameter, which is set to yaml, and how we redirect this output to the named file. Next, I'll do a directory listing of the current directory, and here we can see the newly generated nginx-ingress-controller.yaml file.
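As a concrete command, exporting the live deployment to a local YAML file looks like this (the output filename matches the one shown in the lecture):

```shell
NS="ingress-nginx"
OUT="nginx-ingress-controller.yaml"

# Guarded so the export no-ops when no cluster is reachable.
if kubectl cluster-info >/dev/null 2>&1; then
  # Export the live deployment as YAML and redirect it to a local file.
  kubectl get deploy nginx-ingress-controller -n "$NS" -o yaml > "$OUT"

  # Confirm the file was written.
  ls -l "$OUT"
fi
```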
Moving on, we'll edit this file using vim and update it to set the hostNetwork property to true. It's important that the hostNetwork property is located at exactly the position seen here. I'll save and exit. We then reapply this configuration back into the cluster using kubectl. Okay, that looks good.
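For reference, the hostNetwork property belongs at the pod spec level of the deployment. This is an abbreviated sketch; every field other than hostNetwork is carried over or assumed, and the remaining fields of the exported file are unchanged.

```yaml
# Abbreviated sketch of the edited deployment -- only the relevant fields shown.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  template:
    spec:
      hostNetwork: true   # added: bind the controller to the node's network
      containers:
        - name: nginx-ingress-controller
          # ...remaining fields unchanged...
```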
The Nginx Ingress Controller has successfully been reconfigured. At this stage we can now retest accessing both the front-end and API HTTP endpoints externally. To do this, I'll first query for the public IP address assigned to the EC2 instance hosting our cluster. Here again we can see that it is 220.127.116.11; let's take a copy of it. Jumping into a new terminal session on my actual workstation, I can test external access via the Nginx Ingress Controller by running the following curl commands. First, we'll test the API by sending a request to http://api.18.104.22.168.nip.io/ok. And, excellent, it's responded correctly. This is a great result: it highlights that the networking path from the local workstation all the way back to the back-end API pods is working, implying that the Nginx Ingress Controller is up and running and correctly configured. Next, I'll test the languages endpoint and pipe the response into the jq utility.
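The API checks can be sketched as follows. The public IP is the placeholder value shown in the lecture (substitute your instance's IP), and the /languages path is an assumption for the languages endpoint; nip.io simply resolves api.&lt;IP&gt;.nip.io back to &lt;IP&gt;.

```shell
# Placeholder IP from the lecture -- substitute your EC2 instance's public IP.
PUBLIC_IP="18.104.22.168"
API_HOST="api.${PUBLIC_IP}.nip.io"

# Guarded so the requests no-op when the endpoint isn't reachable.
if curl -sf --max-time 5 "http://${API_HOST}/ok" >/dev/null 2>&1; then
  # Health check endpoint.
  curl "http://${API_HOST}/ok"

  # Hypothetical languages endpoint path, pretty-printed with jq.
  curl -s "http://${API_HOST}/languages" | jq .
fi
```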
Brilliant, we can see here that we've pulled data out of the Mongo replica set, which has been formatted for us by the jq utility. Okay, next we'll test external access to the front end, again using curl. And again, this looks promising, as it appears that HTML has been returned in the response, which is what we're expecting. This is the end result we're after, and it confirms that both our front end and API are accessible externally.
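A sketch of the front-end check. The frontend hostname here is a hypothetical choice (the lecture doesn't state the exact host), following the same nip.io pattern as the API.

```shell
PUBLIC_IP="18.104.22.168"                      # placeholder IP from the lecture
FRONTEND_HOST="frontend.${PUBLIC_IP}.nip.io"   # hypothetical hostname

# Guarded so the request no-ops when the endpoint isn't reachable.
if curl -sf --max-time 5 "http://${FRONTEND_HOST}/" >/dev/null 2>&1; then
  # A successful response should contain HTML; show the first few lines.
  curl -s "http://${FRONTEND_HOST}/" | head -n 5
fi
```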
Okay, we're all set to go; this is the moment we've been building up to. Let's go ahead and test the full end-to-end solution in the next lecture.
About the Author
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.