If you work with Kubernetes, then GitOps is going to make your world a better place by enabling you to perform automated, zero-effort deployments into Kubernetes - as many times as you require per day!
This introductory-level training course is designed to bring you quickly up to speed with the key features, theory, and processes involved in a GitOps workflow.
GitOps defines a better approach to performing Continuous Delivery in the context of a Kubernetes cluster. It does so by promoting Git as the single source of truth for declarative infrastructure and workloads.
We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at email@example.com.
By completing this course, you will:
- Learn about the principles, practices, and processes that drive a GitOps workflow
- Learn how to establish GitOps to automate and synchronize cluster state with Git repos
- Learn how GitOps uses Git as its single source of truth
- Learn and understand how to configure and enable a GitOps workflow using tools such as Helm, Tiller, and Flux
This course is intended for:
- Anyone interested in learning GitOps
- Software Developers interested in the GitOps workflow
- DevOps practitioners looking to learn how to set up, manage, and maintain applications using a GitOps workflow
To get the most from this course, you should have at least:
- A basic understanding of containers and containerisation
- A basic understanding of Kubernetes - and container orchestration and scheduling
- A basic understanding of software development and the software development life cycle
- A basic understanding of Git and Git repositories
The sample GitOps project code as used within the demonstrations is located here:
- [Instructor] Okay, in the previous demo, we saw how Flux reacted and synced the cluster state back to the change made manually within the deployment.yaml file. This time, I'm going to make a change to the FlaskApp docker image, rebuild it, and push a newer version of it into the docker registry. Again, Flux should detect this change and react accordingly. Let's try it out. Back within Visual Studio Code, I'll close the deployment.yaml file, and then open up the FlaskApp main.py file.
Here, I'll simply uppercase the HelloWorld string like so, and then save the file. Next, I'll jump back into the terminal and run the following command to see a list of the docker builds previously performed for the FlaskApp container. Here, we can see what the last used tag was, and what the next tag needs to be. I'll now run the following docker build command to create an updated FlaskApp docker image: docker build -t (for tag) cloudacademydevops/flaskapp:develop-v1.7.0, with a dot at the end for the current directory as the build context. Unfortunately, this just failed because I wasn't in the directory containing the FlaskApp Dockerfile.
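In the demo, the next tag is chosen by eye from the previous build history. A small helper like the one below makes that step repeatable - it's purely hypothetical (not part of the course code) and assumes tags follow the develop-vMAJOR.MINOR.0 scheme seen in the demo:

```shell
# Hypothetical helper (not from the course repo): given the last tag used,
# compute the next one by bumping the minor version, matching the demo's
# progression from develop-v1.7.0 to develop-v1.8.0.
next_tag() {
  local ver="${1#develop-v}"     # strip prefix -> 1.7.0
  local major="${ver%%.*}"       # 1
  local rest="${ver#*.}"         # 7.0
  local minor="${rest%%.*}"      # 7
  echo "develop-v${major}.$((minor + 1)).0"
}

next_tag develop-v1.7.0          # prints develop-v1.8.0
```

The computed tag can then be passed straight to `docker build -t`.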
Let me list the contents of the current directory. From here, I'll navigate into the correct FlaskApp directory and perform another directory listing to confirm that this one does, indeed, contain the Dockerfile, which it does. And now, I'll re-run the previous docker build command like so. Okay, this looks better. The FlaskApp docker image is now being rebuilt. Next, I'll push the updated FlaskApp docker image into the cloudacademydevops docker repo by running the command: docker push cloudacademydevops/flaskapp:develop-v1.7.0. Okay, that will take a few seconds to complete. Once it has, I'll restart the previous watch command to curl again for updates, like so.
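Collected together, the rebuild, push, and watch steps narrated above look like the following. The build and push commands are taken from the demo; the service endpoint in the final command is a placeholder for whichever URL your cluster exposes:

```shell
# Run from the directory containing the FlaskApp Dockerfile;
# the trailing dot is the build context (the current directory)
docker build -t cloudacademydevops/flaskapp:develop-v1.7.0 .

# Push the rebuilt image into the cloudacademydevops docker repo
docker push cloudacademydevops/flaskapp:develop-v1.7.0

# Restart the watch to poll the app for the updated response
# (<service-endpoint> is a placeholder - substitute your service URL)
watch curl -s http://<service-endpoint>/
```

These commands require a running docker daemon, registry credentials, and a reachable cluster service, so treat them as a sketch of the demo's workflow rather than a ready-made script.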
And excellent, we can see that Flux has already detected the newer docker image, and has redeployed it into the cluster, keeping the cluster in sync with the resources declared and managed within the source of truth Git repo. In terms of the timings, we were just lucky that the change was applied immediately before Flux re-scanned the registry for updates. Let's repeat this entire workflow again. This time, I'll update the message from Hello World to Blah, like so, perform a second image rebuild, push the resulting image using the version tag develop-v1.8.0, and then restart the watch command again.
Eventually, we should see the message in the curl response body automatically change from containing Hello World to containing Blah. When this occurs, it validates that Flux is indeed triggering redeployments within the cluster, ensuring that the cluster is in sync with the resources currently declared within the source of truth Git repo. This is very cool. Now, what this must mean is that Flux is automatically updating the respective deployment.yaml file held within the CloudAcademy GitOps Demo repo.
Let's go and take a look, and confirm that this is, indeed, the case. If we click on the commits link, we can see the last two commits, and who performed them. In this case, they were performed by Weave Flux, the flux operator. Clicking on the first one, we can see within the diff view that the k8s deployment.yaml file has been updated. Here, the FlaskApp image tag has been synchronized to use the latest detected tag at the time for the FlaskApp image.
This corresponds to when the FlaskApp image was rebuilt and pushed with the Hello World message capitalized. Clicking on the second commit, again we can see within the diff view that the k8s deployment.yaml file has been updated. Again, the FlaskApp image tag has been synchronized to use the latest detected tag at the time for the FlaskApp image. This corresponds to when the capitalized Hello World message was replaced with the Blah message, and the image rebuilt and pushed. Finally, we can see that both commits were performed automatically by the Flux operator, as per the committer shown here.
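You can reproduce the shape of what Flux writes by simulating its two automated commits in a scratch repo. Everything below is a local illustration only - the temp directory, file content, commit messages, and committer identity are stand-ins, not the actual CloudAcademy GitOps Demo repo:

```shell
# Simulate the two automated commits Flux makes when it bumps the image tag
tmp=$(mktemp -d)
git -C "$tmp" init -q
git -C "$tmp" config user.name "Weave Flux"         # stand-in committer identity
git -C "$tmp" config user.email "flux@example.com"  # stand-in email

# First automated commit: deployment pinned to the v1.7.0 image tag
printf 'image: cloudacademydevops/flaskapp:develop-v1.7.0\n' > "$tmp/deployment.yaml"
git -C "$tmp" add deployment.yaml
git -C "$tmp" commit -qm "Auto-release flaskapp:develop-v1.7.0"

# Second automated commit: Flux detects the newer image and bumps the tag
printf 'image: cloudacademydevops/flaskapp:develop-v1.8.0\n' > "$tmp/deployment.yaml"
git -C "$tmp" add deployment.yaml
git -C "$tmp" commit -qm "Auto-release flaskapp:develop-v1.8.0"

# Both entries show the Flux operator as the author, mirroring the demo
git -C "$tmp" log --format='%an %s'
```

Inspecting the real repo's commit list and diffs in the Git web UI, as done in the demo, shows exactly this pattern: one automated commit per detected image tag.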
Okay, that completes this demonstration. In the next demonstration, I'll show you how making a manual change directly within the cluster itself gets reset when the Flux operator resyncs with the Git repo.
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.