This training course is designed to help you master the skills of deploying cloud-native applications into Kubernetes.
Observe firsthand the end-to-end process of deploying a sample cloud-native application into a Kubernetes cluster. By taking this course you'll not only see firsthand the skills required to perform a robust, enterprise-grade deployment into Kubernetes, but you'll also be able to apply them yourself, as all code and deployment assets are available for you to perform your own deployment.
This training course provides you with in-depth coverage and demonstrations of the following Kubernetes resources:
- Ingress/Ingress Controller
- Persistent Volume
- Persistent Volume Claim
- Headless Service
What you'll learn:
- Learn and understand the basic principles of deploying cloud-native applications into a Kubernetes cluster
- Understand how to set up and configure a locally provisioned Kubernetes cluster using Minikube
- Understand how to work with and configure many of the key Kubernetes cluster resources such as Pods, Deployments, Services, etc.
- Learn how to manage deployments and Kubernetes cluster resources through their full lifecycle
This training course provides you with many hands-on demonstrations where you will observe firsthand how to:
- Create and provision a Minikube Kubernetes cluster
- Install the Cilium CNI plugin
- Build and deploy Docker containers
- Create and configure Kubernetes resources using kubectl
Prerequisites:
- A basic understanding of containers and containerization
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
Intended audience:
- Anyone interested in learning Kubernetes
- Software Developers interested in Kubernetes containerization, orchestration, and scheduling
- DevOps Practitioners
- [Instructor] Okay, welcome back. This is the first of several lectures where we'll perform the end-to-end deployment of our sample cloud-native application into a Kubernetes cluster. In this lecture, I'm going to demonstrate provisioning a Kubernetes cluster using Minikube.
But before I start, a quick reminder that the following CloudAcademy GitHub repository exists and contains all the configuration files for the declarative management and rollout of our application into the Kubernetes cluster. If you haven't already done so, it's highly recommended that you clone your own copy of this repo such that you can follow along and apply the same config within your own cluster. As already mentioned, I've purposely chosen to use Minikube, which is an excellent tool for learning, prototyping, and testing Kubernetes deployments directly on your own workstation.
In this demonstration, I'm going to launch an EC2 instance using a customized version of the Amazon Linux 2 AMI, which has Docker and Minikube preinstalled. This image is publicly available, so you can use it, too, assuming you have access to an AWS account. If not, follow the instructions on the Minikube home page to install Minikube directly on your own workstation. Okay, let's proceed. I'm going to first start up the AWS console and launch an instance of the CloudAcademy Minikube AMI. I will configure it as a t3.xlarge instance type.
The t3.xlarge instance type has four vCPUs and 16 GB of RAM, and I'll attach a 20 GB root volume. This ensures that we have adequate resources available, remembering that we are running not only a Minikube Kubernetes cluster but also the many pods that will make up our application; Minikube itself requires at least two CPUs to install and run. I'll skip over the security group configuration other than to say that it allows all incoming traffic originating from my external public IP address. Additionally, I will assign a public IP address to the instance, to allow not only SSH connections for remote management to provision the Kubernetes cluster using Minikube, but also incoming external traffic to the sample cloud-native application which gets deployed into the cluster. Here we can see that the following Elastic IP address has been allocated to us: 188.8.131.52. We'll associate this with our EC2 instance, which will eventually be running and hosting our Kubernetes cluster.
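For those following along with the AWS CLI rather than the console, the Elastic IP allocation and association described above can be sketched as follows. Note that both IDs shown are placeholders, not values from the demonstration; substitute your own instance ID and the allocation ID returned by the first command:

```shell
# Allocate a new Elastic IP address (the response includes an AllocationId)
aws ec2 allocate-address --domain vpc

# Associate the Elastic IP with the Minikube EC2 instance
# (placeholder IDs; substitute your own values)
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0
```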
Okay, now that the instance is up and running, let's SSH into it, like so. As previously mentioned, this instance is based on an AMI that I created and made public. This AMI has been derived from Amazon Linux 2 and has the following additional tools and services preinstalled and already configured. Docker. This provides the container runtime, which will be used by Minikube, and will allow us to also package and build the container images that will be deployed into our Kubernetes cluster. Minikube. This is used to provision the actual Kubernetes cluster.
Okay, so the first thing we'll do is start up Minikube, which in turn will create and provision us a new Kubernetes cluster. We do so by running the command seen here. We set the --vm-driver to none, since we're provisioning Kubernetes on top of Docker, which itself is already running directly on the host, i.e., we're not going to be using a hypervisor such as KVM or VirtualBox, which are, incidentally, both supported. We'll also set the --network-plugin to CNI, which is an abbreviation for Container Network Interface. The reason for this is that we'll later set up and install the Cilium CNI plugin, which provides an implementation of CNI. In particular, we are using the Cilium CNI plugin for its powerful L4 and L7 network policy features. We'll go into this in much more detail later on.
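The start command described above, with both flags as discussed, looks like this (run via sudo, since the none driver provisions Kubernetes directly on the host):

```shell
# Provision a Kubernetes cluster directly on the host's Docker runtime,
# deferring pod networking to a CNI plugin (Cilium, installed later)
sudo minikube start --vm-driver=none --network-plugin=cni
```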
Okay, so the provisioning process for Minikube has completed. Let's now run the command sudo minikube status. And as you can see, Minikube has indeed started up successfully and created a Kubernetes cluster for us. Running the command docker images allows us to see each of the images that were pulled down by Minikube during the provisioning process. Additionally, we can also run the docker ps command to see all of the underlying Docker containers that are up and running and which collectively make up the Kubernetes cluster.
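The three verification commands mentioned above, in order:

```shell
# Confirm the cluster components (kubelet, apiserver, etc.) are running
sudo minikube status

# List the container images pulled down by Minikube during provisioning
docker images

# List the running containers that collectively make up the cluster
docker ps
```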
Okay, let's now test that the kubectl command is installed and available. We do so by executing kubectl version to see its version details. Right, next we'll attempt a connection to the Kubernetes cluster. We'll attempt this by running the command kubectl get nodes. This is expected to fail, since we've yet to configure the credentials used to authenticate to the cluster. We can grant access to ourselves by first configuring a kube config file. A kube config file is used to organize information about clusters, users, namespaces, and authentication mechanisms. We need to create a .kube directory within the current user's home directory (in our case, ec2-user), like so. Here we'll use pre-generated certificates to authenticate ourselves to the cluster. These were created by Minikube during the earlier cluster provisioning stage. We'll use the sudo command to copy the CA certificate, the client certificate, and the client key into the /home/ec2-user/.kube directory. Next, we change the ownership back to ec2-user for all files and folders recursively within the .kube directory.
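The directory setup, certificate copy, and ownership change described above can be sketched as follows. The source paths under /root/.minikube are an assumption based on where Minikube typically writes its certificates when run as root with the none driver; they may differ across Minikube versions:

```shell
# Create the kube config directory in the ec2-user home directory
mkdir -p /home/ec2-user/.kube

# Copy the CA certificate, client certificate, and client key generated
# by Minikube during provisioning (source paths assumed; verify on your host)
sudo cp /root/.minikube/ca.crt \
        /root/.minikube/client.crt \
        /root/.minikube/client.key \
        /home/ec2-user/.kube/

# Recursively return ownership of everything under .kube to ec2-user
sudo chown -R ec2-user:ec2-user /home/ec2-user/.kube
```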
Finally, we need to query the cluster IP address for the cluster that Minikube created for us. As you can see, we have captured the IP address and stored it within a variable named IP, which we then echoed back out to the console for verification. Okay, now that we have all our details ready, we can go ahead and generate a kube config file. The approach we'll take is to use the cat command and have it write the contents that we paste into the shell, up to but not including the EOF character sequence, directly out to the kube config file. Notice how we inject the Kubernetes cluster IP address.
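A sketch of the IP capture and kube config generation described above. The file layout is a standard kubeconfig; the certificate file names match those copied into /home/ec2-user/.kube earlier, and port 8443 is assumed as the Minikube API server default, so verify these against your own cluster:

```shell
# Capture the cluster IP address reported by Minikube
IP=$(sudo minikube ip)
echo $IP

# Generate the kube config file via a heredoc, injecting the cluster IP
cat <<EOF > /home/ec2-user/.kube/config
apiVersion: v1
kind: Config
clusters:
- name: minikube
  cluster:
    certificate-authority: /home/ec2-user/.kube/ca.crt
    server: https://${IP}:8443
users:
- name: minikube
  user:
    client-certificate: /home/ec2-user/.kube/client.crt
    client-key: /home/ec2-user/.kube/client.key
contexts:
- name: minikube
  context:
    cluster: minikube
    user: minikube
current-context: minikube
EOF
```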
Okay, so we have now successfully generated the kube config file, and it resides in the expected place. At this stage, we should now be able to successfully connect and authenticate. Let's give it a try. Again, we'll run kubectl get nodes. And, excellent. We can see that we've connected and returned information about the running nodes within the Kubernetes cluster.
Okay, that completes this lecture. Go ahead and close it and we'll see you in the next one, where we introduce you to Cilium, which we'll use for network policy purposes.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).