If you work with Kubernetes, GitOps will make your world a better place by enabling automated, zero-effort deployments into Kubernetes - as many times per day as you require!
This introductory-level training course is designed to bring you quickly up to speed with the key features, theory, and processes involved in a GitOps workflow.
GitOps defines a better approach to performing Continuous Delivery in the context of a Kubernetes cluster. It does so by promoting Git as the single source of truth for declarative infrastructure and workloads.
We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at firstname.lastname@example.org.
By completing this course, you will:
- Learn about the principles, practices, and processes that drive a GitOps workflow
- Learn how to establish GitOps to automate and synchronize cluster state with Git repos
- Learn how GitOps uses Git as its single source of truth
- Learn and understand how to configure and enable a GitOps workflow using tools such as Helm, Tiller, and Flux
This course is intended for:
- Anyone interested in learning GitOps
- Software Developers interested in the GitOps workflow
- DevOps practitioners looking to learn how to set up, manage, and maintain applications using a GitOps workflow
To get the most from this course, you should have at least:
- A basic understanding of containers and containerisation
- A basic understanding of Kubernetes, including container orchestration and scheduling
- A basic understanding of software development and the software development life cycle
- A basic understanding of Git and Git repositories
The sample GitOps project code as used within the demonstrations is located here:
If you intend to repeat the same instructions as presented within this course in your own environment, then you must FORK this repository into your own GitHub account. The reason for this is that you need to be the owner of the repo to upload and configure a new Deploy Key within its Settings area, and the Settings area of a repo is only available to its owner.
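The deploy-key preparation described above can be sketched as follows. The key file name is an assumption for illustration only; any name will do, and the public half is what you paste into GitHub:

```shell
# Generate an SSH key pair to use as a GitHub Deploy Key
# (the file name "flux-deploy-key" is just an example).
ssh-keygen -t ed25519 -N "" -f flux-deploy-key -C "flux-deploy-key"

# Print the public key; paste this into your forked repo under
# Settings -> Deploy keys, enabling write access so Flux can push.
cat flux-deploy-key.pub
```

Only the `.pub` file goes into GitHub; the private half stays with whatever performs the Git operations.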
- [Jeremy] Okay, to test the Nginx Flaskapp deployment that now exists within the Kubernetes cluster, which the Flux operator automatically performed for us earlier, we first need to expose the deployment to our local host. Remember that the deployment lives inside the Kubernetes cluster, which has its own internal container IP address space and can't be connected to directly from localhost by default. To change this and expose the Nginx web service so we can test it from localhost, we have a couple of quick options.
The first option uses the kubectl port-forward command to set up port forwarding from localhost to a named pod using a particular port mapping. Let's try this out. I'll run the command kubectl get pods -n cloudacademy to first display the pod names. Next, I'll copy a pod name like so, and then run the command kubectl port-forward with the pod name, mapping local port 8080 to container port 80, like so. I can now perform a curl request in a new terminal session to test out our deployment. I'll run the command curl -s, for silent, with -i to include the response headers, against http://localhost:8080.
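The port-forwarding steps just described look like this when run against a live cluster; the pod name below is a placeholder, as the real name is generated per deployment:

```shell
# List the pods in the cloudacademy namespace to find a pod name
kubectl get pods -n cloudacademy

# Forward localhost:8080 to port 80 on the chosen pod
# (pod name is a placeholder -- substitute one from the command above)
kubectl port-forward nginx-flaskapp-frontend-abc123 8080:80 -n cloudacademy

# In a second terminal: -s silences progress output, -i includes response headers
curl -s -i http://localhost:8080
```

Note that the port-forward runs in the foreground and is bound to that single pod, which is why the curl goes in a second terminal.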
Excellent, this has worked, as shown by the HTTP 200 response returned in the headers and the message in the body. We can also tell from the HTTP Server header that the response has been served by the Nginx proxy. This proves that everything was wired up correctly. A great result, considering Flux performed the deployment for us. Now, using the kubectl port-forward command is convenient and helpful, but less so when the pods are being terminated and recreated.
Let's look at a better option. The second option involves exposing the deployment using a service configured with a NodePort. Let's see this in action. I'll run the command kubectl expose deployment frontend --type=NodePort --name=frontend -n cloudacademy.
The service abstraction allows us to change the pods behind it; we just send traffic to the service and it distributes that traffic across the pods that sit behind it. Okay, our new frontend service has been created for us. We can see this by running the command kubectl get services -n cloudacademy. Here we can see the NodePort mapping. Let's now copy the port and then perform a curl request, like so, in the terminal window. And again, this has worked successfully.
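The NodePort option sketched above looks like this on the command line; the node IP and allocated port in the final request are placeholders, since Kubernetes picks the NodePort and the node address depends on your cluster:

```shell
# Expose the frontend deployment via a NodePort service
kubectl expose deployment frontend --type=NodePort --name=frontend -n cloudacademy

# Find the NodePort that Kubernetes allocated (shown as e.g. 80:3xxxx/TCP)
kubectl get services -n cloudacademy

# Test via a node's IP and the allocated NodePort
# (IP and port are placeholders -- substitute your own values)
curl -s -i http://192.168.64.2:31234
```

Unlike port-forwarding to a single pod, this service survives pod restarts, since the service selector simply picks up the replacement pods.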
Okay, I'll leave the service in place, as it will come in handy in the remaining demonstrations.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, and Kubernetes (CKA, CKAD, CKS).