In this tech talk, you'll follow along as one of our lab developers, Andy Burchill, walks you through the ArgoCD service, a GitOps continuous delivery tool for Kubernetes. We'll start off by discussing what ArgoCD is and how it works, before showing you a demo of the tool in action. We'll round off by looking at the pros and cons of ArgoCD and things that you should take into consideration before using it.
If you have any feedback relating to this course, please contact us at firstname.lastname@example.org.
- Understand the fundamentals of ArgoCD
- Learn what it is and when to use it
- Get a practical understanding of the service
- Understand its pros and cons
This course is intended for IT professionals who are interested in using ArgoCD for their deployments.
To get the most out of this course, you should already have a good understanding of GitOps and Kubernetes.
Right, so this is today's agenda. We're gonna go through what ArgoCD is and how it works, then we'll give you the demo and finish with some conclusions. And this is the mascot for ArgoCD; it's an octopus. Okay, so who created ArgoCD and why? It was created by a company called Intuit, a well-known company that specializes in financial software. I realized when I was reading up on this that they used to make a piece of software called Quicken, which I remember from the nineties. It was a very popular personal finance application. I think they've actually retired it now; they no longer make it. The main software they push for that use case is Mint, which is a SaaS product. I just thought that was interesting, that Quicken is a name I know quite well and it no longer exists.
So ArgoCD is part of a suite of tools all focused on managing workloads in Kubernetes, and Intuit were an early adopter of Kubernetes. Prior to ArgoCD they were using Spinnaker and weren't happy with it. They have quite strong opinions about configuration data and deployment data: that it should be declarative. They're also big on automation and auditing, but they still want everything to be easy to understand and easy to reason about. That's where ArgoCD came from. This slide is just a selection (there are loads more), but I tried to capture some of the bigger names that are using it.
So you've got Google, and GitHub, and Alibaba, Datadog, HSBC, IBM. There are lots and lots of people apparently using it. Now, this diagram here is from the documentation. I've always thought it's a little bit difficult to understand for someone who's completely new to ArgoCD and GitOps, but essentially there are just four main components to any GitOps system: you've got a Kubernetes cluster, which you're deploying to; you've got the ArgoCD service and application; you've got the Git repository; and you've got webhooks to tie it all together.
I'm gonna talk a little bit more about where these all run and how this is structured. So this is the lifecycle: I've done a very simple description of just deploying an application, and we'll talk more about other lifecycle stuff in a minute. One of the key things to understand is that the output of your CI system should be deployable Kubernetes config, stored in a Git repo that's network accessible. You can essentially think of that as the interface to ArgoCD: if your CI system produces that, then you can deploy it with ArgoCD.
This Kubernetes config must define at least one ArgoCD Application custom resource. For those that aren't familiar with Kubernetes, a custom resource definition is basically a mechanism that lets you define your own resource types in Kubernetes. If you've ever used Kubernetes and created a deployment or a pod, those are resources that are built in. A custom resource definition, usually referred to as a CRD, is a resource definition that you create yourself, not one that's built into Kubernetes. ArgoCD provides one called Application (it actually provides some others as well), but Application is the main one that you're going to use if you use ArgoCD.
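As a sketch, a minimal Application resource looks something like this; the repo URL, path, and names here are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook            # hypothetical application name
  namespace: argocd          # the namespace ArgoCD is installed in
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-config.git  # hypothetical repo
    targetRevision: HEAD
    path: guestbook          # directory in the repo containing the manifests
  destination:
    server: https://kubernetes.default.svc   # the cluster ArgoCD itself runs in
    namespace: guestbook
```

ArgoCD watches this resource and keeps whatever manifests live under `path` in that repo in sync with the target cluster and namespace.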
When the config is added to the Git repo, ArgoCD receives a webhook request. We've obviously used webhooks with GitHub before, so it's essentially that same thing. I should also say at this point that you don't have to use webhooks; you can configure ArgoCD to just periodically poll a remote Git repository. The reason you might not want to do that is because ArgoCD is all about speeding up your deployments and development time, so developers can get their software into a working environment, and potentially all the way through dev, test, and production, as quickly as possible.
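To give a rough idea of the wiring, the ArgoCD server exposes a webhook endpoint that the Git host posts to, and a shared secret can be stored in the `argocd-secret` Secret; the hostname and secret value below are hypothetical:

```yaml
# On the GitHub side, the webhook would be configured roughly as:
#   Payload URL:  https://argocd.example.com/api/webhook   # hypothetical hostname
#   Content type: application/json
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argocd
stringData:
  webhook.github.secret: change-me   # hypothetical; must match the GitHub webhook secret
```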
So the point of the webhook is just to speed that process up. When ArgoCD gets the webhook, it synchronizes with the Git repo; at a high level, this is just a git pull. When it has the latest config, it will compare what's currently running with what should be running, and then it will try to make that happen, so it'll update any applications that have changed in the Kubernetes cluster. ArgoCD also supports pre- and post-sync hooks.
This is important when you're using something like Helm. Helm, if you haven't used it, has some stateful features, and the typical way to use it is via the Helm CLI: you run helm install with some config, and at that point Helm will run any Helm-specific hooks that are defined. That won't work with ArgoCD, because ArgoCD is designed to take plain config and deploy it.
So when you're deploying something with Helm through ArgoCD, what it's actually doing is just a helm template and then applying the result to the cluster, so you wouldn't be able to use any Helm hooks at that point. In order to support hook functionality, ArgoCD has its own pre- and post-sync hooks. Things you might want to do in hooks are setting up config maps, or sending notifications: about to start deployment, deployment finished, that kind of thing.
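Hooks are just ordinary Kubernetes resources carrying ArgoCD annotations. As a sketch, a hypothetical PreSync notification Job might look like this (the Job name and endpoint are made up):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: notify-deploy-start                    # hypothetical hook name
  annotations:
    argocd.argoproj.io/hook: PreSync           # run this Job before the sync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded  # clean it up once it succeeds
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: notify
        image: curlimages/curl
        # hypothetical endpoint; replace with your real notification service
        args: ["-X", "POST", "https://hooks.example.com/deploy-started"]
```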
Helm chart hooks are very popular; if you look at the stable Helm charts, they're used quite a lot. Now, this next point is kind of obvious, but it's worth pointing out: any change in the Kubernetes config will cause an application to redeploy. Obviously that's kind of the point.
There are certain things you want to cause a redeploy, in particular the Docker image tag: if you've built a new image, the tag is going to change, and you want that to be redeployed. Likewise, if you've changed the version number of the Helm chart or some other config, you obviously want that to be redeployed. But there may be things you don't want. For example, if you have any dates, timestamps, or random strings in your config, those will cause a redeploy. This sounds kind of obvious, but in my experience it's harder than it sounds. The ideal is to have very clean, declarative config that doesn't include anything random, basically. So yeah, just something to be aware of.
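As a hypothetical illustration of that pitfall, a templated timestamp like the one below makes the rendered manifest differ on every CI run, so ArgoCD always sees a diff and redeploys even when nothing meaningful changed:

```yaml
metadata:
  annotations:
    # BAD: this value is regenerated on every CI run, so the config never converges
    example.com/deployed-at: "2021-03-04T10:15:00Z"   # hypothetical annotation
    # GOOD: tie redeploys to things that actually changed, e.g. the image tag
    example.com/image-tag: "v1.4.2"                   # hypothetical annotation
```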
All right, so let's move on to the next slide. Just continuing on from that: if you want to undeploy, you can just remove the application from the Git repository. This can be configured; there are different ways to handle the removal of an application, so there are definitely some options there. But if you were doing GitOps in the purest way, then you undeploy by removing the application from the Git repo. The Git repo is the source of truth for how things should be deployed.
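Two of the relevant knobs, sketched on a hypothetical Application: automated pruning tells ArgoCD to delete cluster resources that disappear from Git, and the cascade-delete finalizer controls what happens when the Application itself is removed:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook              # hypothetical application name
  namespace: argocd
  finalizers:
  # cascade: deleting this Application also deletes the resources it manages
  - resources-finalizer.argocd.argoproj.io
spec:
  # ...source and destination as usual...
  syncPolicy:
    automated:
      prune: true              # delete cluster resources that were removed from Git
```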
ArgoCD exposes Prometheus metrics, so you get fairly standard Kubernetes monitoring. The metrics and the UI are the source of truth for the actual deployment state. Obviously it can take time for an application to deploy, so just because you've added something to the Git repo doesn't mean it's going to be reflected instantaneously; that's worth pointing out. Also, an Application custom resource can point to another Application custom resource, and that one can point to another one.
I don't know if anyone remembers the "yo dawg" meme from ten or twenty years ago, which was always referenced whenever people were talking about recursion. But basically what this means is that you can represent your system's applications in a hierarchy, and I think this is really useful. It made me think of stories I've heard about people running hundreds, up to nearly a thousand, microservices in a Kubernetes cluster. In that kind of system you really need to be able to group applications logically, so that you can reason about them more easily, and one way to do that is to use the hierarchy.
I'll talk a little bit more about this when we get to the demo, and I'll try to show it to you. So, and this is quite important as well, ArgoCD can be configured to access Kubernetes clusters other than the one it's running on. It has a project abstraction: you can have multiple projects set up in an ArgoCD server, and each project can target a different external Kubernetes cluster.
Obviously, there are security considerations there, so each project has to have the credentials to access the target cluster. And there's a question here about how many Kubernetes clusters you should run. Should you run ArgoCD in the same cluster that you are deploying to? Or should you have a separate cluster for non-production workloads?
So things like CI, things like CD. It's the sort of thing people can have strong opinions about, and I can see it both ways, but my preference is to have internal tools running in a separate system. So I'd probably favor having a separate Kubernetes cluster to run ArgoCD and other tooling.
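The project abstraction is itself a CRD. As a rough sketch, an AppProject restricting deployments to one external cluster might look like this (the names and URLs are hypothetical); the cluster credentials themselves are registered separately, for example with the `argocd cluster add` command:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a                  # hypothetical project name
  namespace: argocd
spec:
  sourceRepos:
  - https://github.com/example/team-a-config.git   # hypothetical allowed repo
  destinations:
  - server: https://prod-cluster.example.com       # hypothetical external cluster
    namespace: team-a                              # namespace apps may deploy into
```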
I should say I've used ArgoCD before at a previous job, and it's actually changed a little bit since I last looked at it. It used to have some custom deployment methods built in, but they've since moved these out into a separate tool called Argo Rollouts, which is a progressive delivery controller for Kubernetes. It supports Blue/Green and Canary deployments, and you can configure them in a really detailed way.
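To give a flavor, an Argo Rollouts canary strategy is configured on a Rollout resource, a Deployment-like CRD. This is a minimal sketch with hypothetical names and a made-up traffic schedule:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo                   # hypothetical name
spec:
  replicas: 5
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      containers:
      - name: demo
        image: nginx:1.25      # hypothetical image
  strategy:
    canary:
      steps:
      - setWeight: 20          # shift 20% of replicas/traffic to the new version
      - pause: {duration: 1m}  # wait a minute before continuing
      - setWeight: 60
      - pause: {}              # pause indefinitely until manually promoted
```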
Andrew is a Labs Developer with previous experience in the internet service provider, audio streaming, and cryptocurrency industries. He has also been a DevOps Engineer and enjoys working with CI/CD and Kubernetes.
He holds the AWS Certified Developer - Associate and AWS Certified Sysops Administrator - Associate certifications.