If you work with Kubernetes, then GitOps is going to make your world a better place by enabling you to perform automated, zero-effort deployments into Kubernetes - as many times as you require per day!
This introductory-level training course is designed to bring you quickly up to speed with the basic features and processes involved in a GitOps workflow, covering both the key features and the underlying workflow theory.
GitOps defines a better approach to performing Continuous Delivery in the context of a Kubernetes cluster. It does so by promoting Git as the single source of truth for declarative infrastructure and workloads.
We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at email@example.com.
By completing this course, you will:
- Learn about the principles, practices, and processes that drive a GitOps workflow
- Learn how to establish GitOps to automate and synchronize cluster state with Git repos
- Learn how GitOps uses Git as its single source of truth
- Learn and understand how to configure and enable a GitOps workflow using tools such as Helm, Tiller, and Flux
This course is intended for:
- Anyone interested in learning GitOps
- Software Developers interested in the GitOps workflow
- DevOps practitioners looking to learn how to set up, manage, and maintain applications using a GitOps workflow
To get the most from this course, you should have at least:
- A basic understanding of containers and containerisation
- A basic understanding of Kubernetes, container orchestration, and scheduling
- A basic understanding of software development and the software development life cycle
- A basic understanding of Git and Git repositories
The sample GitOps project code as used within the demonstrations is located here:
If you intend to repeat the instructions presented within this course in your own environment, then you must FORK this repository into your own GitHub account. This is because you need to be the owner of the repo to upload and configure a new Deploy Key within its Settings area - and the Settings area of a repo is only available to its owner.
- [Instructor] Okay, welcome back. In the previous demo I completed the installation of the Flux operator and configured it to use the CloudAcademy GitOps Demo GitHub repo. The Flux operator, once given the permissions to read from this repo, went ahead and automatically deployed the resources within it into the Kubernetes cluster for us. It's now worth pausing for a moment to review the design and structure of the resources declared within the CloudAcademy GitOps Demo GitHub repo.
This will allow us to understand and appreciate what has just been deployed into the cloudacademy namespace within the cluster. Okay, navigating into the GitOps repo, I'll jump into the Flask app directory first. Here we can see it consists of the following two files. Opening up the main Python file, we can see that it implements a simple Python Flask app. The Flask app reads in two environment variables, app_name and app_version, and then creates a response that concatenates "Hello world" with the respective values assigned to those two environment variables - nice and simple.
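The response-building logic described above can be sketched roughly as follows. This is a hypothetical reconstruction for illustration, not the actual repo code - the exact environment variable names (shown here as APP_NAME and APP_VERSION), defaults, and response wording are assumptions based on the narration:

```python
import os

def build_response():
    """Builds the greeting the transcript describes: "Hello world"
    concatenated with the two environment variable values."""
    # Env var names and default values here are assumptions for illustration.
    app_name = os.environ.get("APP_NAME", "gitops-demo")
    app_version = os.environ.get("APP_VERSION", "1.0")
    return f"Hello world! I am {app_name}, version {app_version}"

# In the real repo this logic sits inside a Flask route, and the app is
# started bound to 127.0.0.1 on port 5000, as covered next.
```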
Navigating into the Dockerfile, we can see that this provides the build instructions that Docker will use to build and package a Docker image based on the Python 3 base image. The Dockerfile simply copies in the previously displayed main Python file. The Dockerfile sets default values for the app_name and app_version environment variables. Finally, the Dockerfile provides a default startup command, to start up the Flask app and bind it to listen for traffic originating from 127.0.0.1 only.
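A minimal sketch of a Dockerfile matching that description might look like the following - the file name, dependency install step, and default values are assumptions, not the repo's actual contents:

```dockerfile
# Hypothetical reconstruction of the Dockerfile described in the transcript.
FROM python:3

WORKDIR /app
COPY main.py .
RUN pip install flask

# Default values; the deployment manifest can override these.
ENV APP_NAME=gitops-demo \
    APP_VERSION=1.0

# main.py is assumed to call app.run(host="127.0.0.1", port=5000),
# binding only to loopback - Nginx in the same pod proxies to it.
CMD ["python", "main.py"]
```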
The reason for using 127.0.0.1 will become clear soon. Okay, jumping back into the Kubernetes directory, we can now see that it has the following two files, a deployment.yaml file and an nginx.config.yaml file. Let's first jump into the nginx.config.yaml file. This file is used to declare a ConfigMap resource named nginx-conf. This ConfigMap resource contains a basic Nginx configuration setup. You'll see that it is configured to proxy incoming requests downstream to localhost port 5000. This will be a container running the Flask app that we just reviewed.
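The nginx-conf ConfigMap just described could be sketched like this - the specific Nginx directives are assumptions; only the name and the proxy to localhost:5000 come from the transcript:

```yaml
# Hypothetical sketch of the nginx-conf ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;
        location / {
          # Proxy downstream to the Flask container in the same pod
          proxy_pass http://localhost:5000;
        }
      }
    }
```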
Jumping now into the deployment.yaml file, we can see that it is used to create a deployment resource. The deployment is configured to roll out one replica consisting of a single pod containing two containers. The first container is an Nginx web server, which listens on port 80, and mounts the previously shown configmap, meaning that it will be configured to proxy incoming HTTP port 80 traffic to? Yes, you guessed it, the second Flask app-based container, which is configured to listen on port 5000.
The Flask app-based container also explicitly reconfigures the app_name and app_version environment variables. Now one final but important piece of configuration to draw your attention to is configured on lines seven, eight, and nine. Here you'll see two Flux annotations are provided. The annotation on line nine tells Flux to automatically edit, commit, and push this file anytime it detects a newer version of any Docker image used elsewhere within this file whose tag matches the tag-matching scheme declared in the previous annotation.
Therefore, in this scenario if you look at line 35, you'll see that the Flask app is configured to use a Docker image that has a tag matching the glob pattern declared in the annotation on line eight. This means that if Flux discovers a newer Docker image in the Docker registry for the tag specified, it will edit, commit, and push an update to the deployment.yaml file, and then resync the cluster to reflect the change.
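Putting the pieces together, a deployment.yaml matching this description might be sketched as below. This is illustrative only: image names, the glob pattern, and values are assumptions, and the annotation keys shown are the Flux v1 `flux.weave.works` forms (later Flux releases renamed these under `fluxcd.io`):

```yaml
# Hypothetical sketch of the deployment.yaml described in the transcript.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
  annotations:
    # Tag-matching scheme: only images with tags matching this glob qualify.
    flux.weave.works/tag.flask-app: glob:1.0.*
    # Tells Flux to automatically edit, commit, and push this file
    # when a newer matching image appears in the registry.
    flux.weave.works/automated: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      - name: flask-app
        image: cloudacademy/flaskapp:1.0.0   # tag matched by the glob above
        ports:
        - containerPort: 5000
        env:
        - name: APP_NAME
          value: gitops-demo
        - name: APP_VERSION
          value: "1.0"
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
```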
Okay, so now that you know what is configured and declared within the repo, let's now move on and perform some quick HTTP curl requests against the deployment.
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.