If you work with Kubernetes, GitOps will make your world a better place by enabling automated, zero-effort deployments into Kubernetes, as many times per day as you require!
This introductory-level training course is designed to bring you quickly up to speed with the key features, processes, and theory involved in a GitOps workflow.
GitOps defines a better approach to performing Continuous Delivery in the context of a Kubernetes cluster. It does so by promoting Git as the single source of truth for declarative infrastructure and workloads.
We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at firstname.lastname@example.org.
By completing this course, you will:
- Learn about the principles, practices, and processes that drive a GitOps workflow
- Learn how to establish GitOps to automate and synchronize cluster state with Git repos
- Learn how GitOps uses Git as its single source of truth
- Learn and understand how to configure and enable a GitOps workflow using tools such as Helm, Tiller, and Flux
This course is intended for:
- Anyone interested in learning GitOps
- Software Developers interested in the GitOps workflow
- DevOps practitioners looking to learn how to setup, manage and maintain applications using a GitOps workflow
To get the most from this course, you should have at least:
- A basic understanding of containers and containerisation
- A basic understanding of Kubernetes - and container orchestration and scheduling
- A basic understanding of software development and the software development life cycle
- A basic understanding of Git and Git repositories
The sample GitOps project code as used within the demonstrations is located here:
- OK, welcome back. In this demonstration, I'm going to begin installing the Flux Operator into the Kubernetes cluster. For starters, let's view the Helm charts online. I'll browse to hub.helm.sh/charts. Here I'll search on Flux like so. The chart that we want to install is this one here.
Clicking on it, we can see the details, including how to install it. I'll copy this command here and then jump back into the terminal and execute it like so. Next, I'll run the command "helm repo update" to retrieve the latest chart information. OK, we're almost ready to perform the Flux install. But before I do, I need to create two new cluster namespaces. The first one will be for Flux itself, and the second one will be for the Cloud Academy demo project that Flux will auto-install for us once it is up and running. Now, for one last piece of housekeeping before I use Helm to install Flux, I'll do a quick check on the pods within the kube-system namespace to ensure that the Tiller pod is up and running.
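The preparation steps above can be sketched as shell commands. The chart repository URL and the Tiller label selector are assumptions, as neither is shown verbatim in the demo:

```shell
# Add the Flux chart repository and refresh the local chart cache
# (hub.helm.sh lists this repo URL for the fluxcd charts)
helm repo add fluxcd https://charts.fluxcd.io
helm repo update

# Create the two namespaces: one for Flux itself,
# one for the Cloud Academy demo project
kubectl create namespace flux
kubectl create namespace cloudacademy

# Confirm the Tiller pod (Helm 2's server-side component) is running
kubectl get pods -n kube-system -l name=tiller
```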
And it is, so we are good to go. I'll now run the command "helm install --name flux --set git.url" with the cloudacademy GitOps demo URL "--namespace flux fluxcd/flux". This will install Flux into the Kubernetes cluster, into the flux namespace, and then configure it with the cloudacademy gitops demo Git repo as per the set git.url parameter. OK, kicking this off, we can see that it has launched the setup within the cluster.
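Written out in full, the install command looks like the following sketch. The Git URL is a placeholder, since the exact repository address is not spelled out in the narration:

```shell
# Install the Flux chart (Helm 2 syntax) into the flux namespace,
# pointing it at the Git repo it should watch and sync from.
# Replace the git.url value with the SSH URL of your own repo.
helm install --name flux \
  --set git.url=git@github.com:<org>/<gitops-demo-repo>.git \
  --namespace flux \
  fluxcd/flux
```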
We can review the current Flux deployments by running the command: "kubectl rollout status deployment flux -n" for namespace "flux". The Flux deployment takes a little bit of time to complete. And when it does, it will inform us, which it just has. Next, we can examine the pods that Flux uses by running the command: "kubectl get pods -n flux". Next, I'll take a quick look inside the cloudacademy namespace and see if any pods have been launched.
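The verification commands from this step, collected as a sketch:

```shell
# Block until the Flux deployment has finished rolling out
kubectl rollout status deployment flux -n flux

# List the pods running in the flux namespace
kubectl get pods -n flux

# Check the demo namespace; at this point it should still be empty
kubectl get pods -n cloudacademy
```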
The expectation here is that this should still be empty, as it is. The reason it remains empty is that we have yet to permit the Flux operator to read and write to our cloudacademy gitops demo Git repo, which was seeded into Flux earlier when we set up the install. To do this, we need to extract the SSH public key generated by Flux at install time. We can do this by executing the following command like so. We then copy this portion here. And then jump over into the Cloud Academy GitOps Demo Git repo on GitHub.
I then need to click on Settings and then on "Deploy keys". Clicking on the "Add deploy key" button, I'll give it the title "GitOps" and then paste the key into the "Key" field. Importantly, I need to tick the "Allow write access" checkbox, and then click the "Add key" button like so. OK, quickly jumping back into the terminal, I'll start up a tail on the Flux deployment like so: "kubectl -n flux logs deployment/flux --follow". This will allow me to observe the Flux activity.
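The exact key-extraction command isn't shown on screen; one common way with the fluxctl CLI is sketched below (fluxctl being an assumption here), followed by the log tail from the demo:

```shell
# Print the SSH public key Flux generated at install time
# (assumes the fluxctl CLI is installed; it port-forwards into the
# flux namespace to query the running operator)
fluxctl identity --k8s-fwd-ns flux

# After adding that key as a GitHub deploy key with write access,
# tail the Flux operator's logs to watch its sync activity
kubectl -n flux logs deployment/flux --follow
```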
Now, by default, the Flux operator is configured to periodically check its configured Git repo every five minutes. I'll wait until the next check is performed. Here we can see that the Flux operator is kicking in and performing its checks. This looks good, as we can now see it has discovered the Kubernetes declarative resources within our GitHub repo. It appears to be rolling them out into the cluster. Let's now exit this tail and use the following command: "kubectl rollout status deployment frontend -n" for namespace "cloudacademy" to examine the deployment taking place within the cloudacademy namespace.
OK. The automatic deployment kicked off by the Flux operator has completed successfully. This is very cool! Let's take a closer look at the pods in the cloudacademy namespace by running the command: "kubectl get pods -n cloudacademy". Here, indeed, we can see that the Cloud Academy frontend deployment has successfully launched a pod.
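The two commands used to confirm the automatic rollout:

```shell
# Watch the frontend deployment that Flux applied from the Git repo
kubectl rollout status deployment frontend -n cloudacademy

# Confirm the frontend pod has launched
kubectl get pods -n cloudacademy
```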
OK, that completes this demo. In the next demo, I'll take a closer look at what we have just deployed by examining the resources declared within the cloudacademy gitops demo repo.
About the Author
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.