This training course is designed to help you master the skills of deploying cloud-native applications into Kubernetes.
Observe firsthand the end-to-end process of deploying a sample cloud-native application into a Kubernetes cluster. By taking this course, you'll not only see the skills required to perform a robust, enterprise-grade deployment into Kubernetes, but you'll also be able to apply them yourself, as all code and deployment assets are available for you to perform your own deployment.
This training course provides you with in-depth coverage and demonstrations of the following Kubernetes resources:
- Ingress/Ingress Controller
- Persistent Volume
- Persistent Volume Claim
- Headless Service
What you'll learn:
- Learn and understand the basic principles of deploying cloud-native applications into a Kubernetes cluster
- Understand how to set up and configure a locally provisioned Kubernetes cluster using Minikube
- Understand how to work with and configure many of the key Kubernetes cluster resources such as Pods, Deployments, Services, etc.
- And finally, you’ll learn how to manage deployments and Kubernetes cluster resources through their full lifecycle.
This training course provides you with many hands-on demonstrations where you will observe firsthand how to:
- Create and provision a Minikube Kubernetes cluster
- Install the Cilium CNI plugin
- Build and deploy Docker containers
- Create and configure Kubernetes resources using kubectl
Prerequisites:
- A basic understanding of containers and containerization
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
Intended audience:
- Anyone interested in learning Kubernetes
- Software Developers interested in Kubernetes containerization, orchestration, and scheduling
- DevOps Practitioners
- [Instructor] Okay, welcome back. In this lecture, I'm going to do a quick find-and-replace across all the source files within our solution to update environment-related variables, such as the hostname used by the end user to navigate to the front end, as well as the hostname that represents the API and to which AJAX calls will be directed. This will be further explained as we go.
Okay, to start with, we'll create a new directory named voteapp by running the following command. Next, I'll clone the free Cloud Academy vote app GitHub repositories into it like so. Now, before we build our Docker images for the front end and API, we need to search for and update the hostname placeholder that is declared within several files. We'll use the grep command to search for these files like so. This should result in a list of the following three files: within the front end directory, the .env file, and within the Kubernetes directory, the following two YAML files: frontend.ingress.yaml and api.ingress.yaml.
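The search step above can be sketched as follows. The repository URL isn't given in the transcript, so the clone command is left as a commented placeholder, and the placeholder token (`_HOSTNAME_`) plus the file layout are illustrative assumptions; the sketch simulates the three affected files so the grep step can be seen end to end:

```shell
# Sketch only: the repo URL, placeholder token (_HOSTNAME_), and file
# layout are assumptions, not the repo's actual values.
mkdir -p voteapp && cd voteapp

# git clone <vote-app-repo-url> .   # clone step; URL not given in the transcript

# Simulate the three files that carry the hostname placeholder:
mkdir -p frontend k8s
echo 'REACT_APP_APIHOSTPORT=_HOSTNAME_:8080' > frontend/.env
echo 'host: _HOSTNAME_'                      > k8s/frontend.ingress.yaml
echo 'host: _HOSTNAME_'                      > k8s/api.ingress.yaml

# Recursively list every file containing the placeholder:
grep -rl '_HOSTNAME_' .
```

Running this lists the three files, mirroring the result described in the lecture.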
Okay, let's open each of these three files and describe them in a little more detail before we update them. Firstly, in the React front end's root project folder there is a file named .env, short for environment. Let's display the contents of this file. As you can see, the .env file contains a single React environment variable named REACT_APP_APIHOSTPORT, and this is set to the host:port combination that the API AJAX requests will be sent to. We need to replace this with the actual hostname that we'll use at runtime.
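For reference, the .env file would look something like the fragment below. Only the variable name comes from the transcript; the placeholder token and port are purely illustrative, since the actual values aren't quoted in the lecture:

```
# Illustrative frontend/.env - the real placeholder token and port may differ
REACT_APP_APIHOSTPORT=_HOSTNAME_:8080
```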
Since our application is going to be deployed into our Kubernetes cluster, which itself is hosted on our EC2 instance, we need to set this to the public IP address that has been assigned and mapped to our EC2 instance. But we can't just use the raw public IP address itself. Why not, you may be wondering? Well, our API pods will be accessed through an NGINX Ingress, which requires that we use an actual domain name rather than a raw IP address. To work within this requirement, we'll leverage a really great, publicly available service found at https://nip.io.
Exactly what does nip.io provide? Well, as the website states, it provides dead simple wildcard DNS for any IP address. And it's free. Basically, this means we can use a hostname as if it had been preregistered within our DNS system and have it magically resolve at runtime to an IP address of interest; in our case, the public IP address, or elastic IP address, that has been assigned to our EC2 instance.
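To illustrate the naming convention nip.io relies on, the sketch below extracts the embedded IP from a nip.io-style hostname locally, mimicking what the service's DNS resolver does on its side. The hostname is just the example used in the lecture; the actual DNS lookup via `dig` is shown as a comment since it needs network access:

```shell
# A nip.io-style hostname embeds the target IP just before the nip.io domain:
host="blah.192.168.10.1.nip.io"

# Locally mimic nip.io's resolution: strip the domain suffix, then take
# the last four dot-separated fields as the dotted-quad IP.
ip=$(echo "$host" \
  | sed 's/\.nip\.io$//' \
  | awk -F. '{ printf "%s.%s.%s.%s", $(NF-3), $(NF-2), $(NF-1), $NF }')
echo "$ip"   # 192.168.10.1

# With network access, DNS returns the same answer:
# dig +short "$host"
```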
All we have to do is embed the IP address within a hostname that ends with the domain nip.io and follows a particular format that nip.io understands. nip.io will then cleverly extract the IP address and resolve to it. This works not just for public IPs, but for any IP. To make sure you understand how we're going to use this service, I'll perform a quick demonstration of how it works. First, we'll use the dig utility to perform a DNS lookup on the hostname blah.192.168.10.1.nip.io. And as you can see, within the Answer section, it resolves, as expected, to 192.168.10.1. The remaining two files are Kubernetes configuration YAML files, used to create and provision an Ingress resource, which in our cluster setup will use the NGINX Ingress controller. As previously mentioned, these require us to specify an actual DNS name for the spec.rules.host property.
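The Ingress manifests set that DNS name under `spec.rules[].host`. A minimal sketch of the shape involved is below; the resource name, backend service name, port, and ingress class are assumptions for illustration, not the repo's actual manifests:

```yaml
# Illustrative shape only - names, ports, and class are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  ingressClassName: nginx
  rules:
  - host: api.192.168.10.1.nip.io   # must be a DNS name, not a raw IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 8080
```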
Okay, let's now go ahead and update these three files with the public IP address currently assigned to the EC2 instance hosting our Kubernetes cluster. First, we'll retrieve the public IP address currently mapped to the EC2 instance and assign it to a shell variable like so. Note, if you're following along, your public IP address will be different. Next, I'll use the egrep command to again find the files that contain the placeholder, and then pipe the discovered file paths into the xargs utility, which in turn uses the sed utility to perform an in-place replacement of the placeholder token with the actual IP address held in the shell variable, like so. To confirm that the update has worked, let's now cat out the current contents of each of the three files.
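The replace step can be sketched as a self-contained script. The placeholder token (`_HOSTNAME_`), file layout, and example IP are illustrative assumptions; in the lecture the IP is fetched from the EC2 instance, and `grep -rl` is used here in place of the lecture's `egrep` invocation:

```shell
# Self-contained sketch; _HOSTNAME_, the file layout, and the IP are
# illustrative assumptions, not the repo's actual values.
mkdir -p demo/frontend demo/k8s && cd demo
echo 'REACT_APP_APIHOSTPORT=_HOSTNAME_:8080' > frontend/.env
echo '    - host: _HOSTNAME_'                > k8s/frontend.ingress.yaml
echo '    - host: _HOSTNAME_'                > k8s/api.ingress.yaml

# In the lecture this is the EC2 instance's public IP; hard-coded here:
PUBLIC_IP=52.95.110.1

# Find every file containing the token, then replace it in place,
# appending the nip.io domain so the result is a resolvable hostname:
grep -rl '_HOSTNAME_' . | xargs sed -i "s/_HOSTNAME_/${PUBLIC_IP}.nip.io/g"

# Confirm the update by printing the three files:
cat frontend/.env k8s/frontend.ingress.yaml k8s/api.ingress.yaml
```

Note that GNU sed's `-i` edits files in place; on BSD/macOS sed the equivalent is `sed -i ''`.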
Okay, that looks really good. We're now ready to move on and start compiling and packaging the front end and API Docker images.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).