Prepare Local Kubernetes Cluster

If you work with Kubernetes, then GitOps is going to make your world a better place by enabling you to perform automated, zero-effort deployments into Kubernetes - as many times a day as you require!

This introductory-level training course is designed to bring you quickly up to speed with the basic features and processes involved in a GitOps workflow, and presents the key features and theory behind it.

GitOps defines a better approach to performing Continuous Delivery in the context of a Kubernetes cluster. It does so by promoting Git as the single source of truth for declarative infrastructure and workloads.

We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at

Learning Objectives

By completing this course, you will:

  • Learn about the principles, practices, and processes that drive a GitOps workflow
  • Learn how to establish GitOps to automate and synchronize cluster state with Git repos
  • Learn how GitOps uses Git as its single source of truth
  • Learn and understand how to configure and enable a GitOps workflow using tools such as Helm, Tiller, and Flux

Intended Audience

This course is intended for:

  • Anyone interested in learning GitOps
  • Software Developers interested in the GitOps workflow
  • DevOps practitioners looking to learn how to set up, manage, and maintain applications using a GitOps workflow


To get the most from this course, you should have at least:

  • A basic understanding of containers and containerisation
  • A basic understanding of Kubernetes - and container orchestration and scheduling
  • A basic understanding of software development and the software development life cycle
  • A basic understanding of Git and Git repositories

Source Code

The sample GitOps project code as used within the demonstrations is located here:

If you intend to repeat the instructions presented within this course in your own environment, then you must FORK this repository into your own GitHub account. This is because you need to be the owner of the repo to upload and configure a new Deploy Key within the repo's Settings area, and the Settings area is only available to the repo's owner.
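As a sketch of the deploy-key step mentioned above: once the Flux operator is running (its installation is covered later in the course), its SSH public key can be printed and registered on your fork. The `fluxctl` CLI and the `flux` namespace used here are assumptions - adjust them to match your own installation.

```shell
# Print the SSH public key generated by the Flux operator
# (assumes Flux is installed in the "flux" namespace and the
# fluxctl CLI is available - both are set up later in the course)
fluxctl identity --k8s-fwd-ns flux

# Copy the printed key into your forked repo under:
# Settings -> Deploy keys -> Add deploy key (tick "Allow write access")
```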



- [Jeremy] Okay, welcome back. From here on in, we're in demonstration mode. The end-to-end demonstration that I'm about to perform will show you how to set up and install a GitOps workflow in a Kubernetes cluster. The idea here is that by observing the installation firsthand, you'll be able to repeat it within your own environment.

The demonstration will mostly involve using the tools kubectl, Helm, and Docker. Helm will be used to install the Flux operator into a Kubernetes cluster using a pre-built and publicly available Helm chart. kubectl will then be used to manage and manipulate resources within the Kubernetes cluster, which should in turn trigger the GitOps process to perform automatic updates and deployments. Docker will be used for two reasons.
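For orientation, the chart-based Flux install described here might look something like the following. This is a sketch using Helm 2 syntax (matching the Tiller-based workflow this course covers); the chart repository URL and `git.url` value are assumptions - point `git.url` at your own fork.

```shell
# Add the Flux chart repository (URL assumed; confirm against the Flux docs)
helm repo add fluxcd https://charts.fluxcd.io

# Install the Flux operator, pointing it at your forked config repo
# (Helm 2 syntax, consistent with the Tiller-era workflow in this course)
helm install --name flux --namespace flux \
  --set git.url=git@github.com:<your-account>/<your-fork>.git \
  fluxcd/flux
```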

The first is that we'll use it to actually host a single-node Kubernetes cluster. And secondly, Docker will be used to rebuild and push an updated Docker image to the Docker registry, which, again, should trigger the GitOps process to resync, et cetera. Okay, to start with, I'll need access to a Kubernetes cluster. I like to do a lot of local testing and prototyping with Kubernetes, so I tend to use Docker Desktop and its inbuilt certified Kubernetes cluster.

I already have Docker Desktop installed and running, as seen here. I'm using it on macOS. Docker Desktop is also available for Windows, so this demo should be repeatable on both macOS and Windows. Clicking on About Docker Desktop, you can see that I'm running the stable version, which ships with Kubernetes version 1.15.5. Now, by default, the Kubernetes cluster is deactivated, so you need to start it up by going to Preferences, Kubernetes, and then ticking the Enable Kubernetes option.

The first time you do this, the overall startup process will take a little bit of extra time, since Docker Desktop's single-node Kubernetes implementation depends on several Docker images which need to be downloaded from the internet. You can see the specific Kubernetes Docker images that are pulled down by jumping into the terminal and running docker images piped through grep, filtering on 1.15.5 - the version number matching the Kubernetes version presented in the About Docker Desktop pane. Now, I already have Kubernetes enabled, and since I've already been using the cluster for testing and prototyping other projects, I'll simply reset it back to its original empty state. Resetting can be performed by clicking on the Reset Kubernetes Cluster button, like so.
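The spoken command above reads as follows (the version string will differ if your Docker Desktop ships a different Kubernetes release):

```shell
# List the Kubernetes control-plane images pulled down by Docker Desktop,
# filtering on the version shown in the About Docker Desktop pane
docker images | grep 1.15.5
```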

You can watch the current status of the cluster at the bottom of this pane. Eventually the Kubernetes status light will return to green, indicating that the single-node cluster has been restarted successfully, which we can see now. Okay, I now have a fresh, empty single-node Kubernetes cluster running on my Mac. This is very cool. I can now jump over into the terminal and attempt connecting to this cluster.

I'll run the command, kubectl config use-context docker-desktop, to ensure that I'm connecting and operating against the Docker Desktop-provisioned Kubernetes cluster. And now I'll run the command, kubectl get nodes, to hopefully view the single-node cluster, which we can see consists of a single node running version 1.15.5. 
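The two commands just demonstrated are:

```shell
# Target the Docker Desktop-provisioned Kubernetes cluster
kubectl config use-context docker-desktop

# Confirm the cluster consists of a single node running v1.15.5
kubectl get nodes
```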

Okay, that completes the initial Kubernetes cluster setup. In the next demo, I'll show you how to install Helm.

About the Author

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).