In this course, we'll run through a quick demo of the Prometheus exporter for an Apache web server. This course specifically covers the application as it relates to Kubernetes, ranging from deployments to services, and how to inject a Prometheus configuration into a Prometheus container on startup. For this course, you should be comfortable running kubectl commands and interacting with Kubernetes services and pods. All of the required YAML configurations, Kubernetes commands, and instructions are provided in the official repo below.
If you have any feedback relating to this course, please let us know at email@example.com.
- Configure an Apache Webserver exporter with Prometheus in Kubernetes
- DevOps, SRE, or Cloud Engineers
- Kubernetes developers
- Anyone looking to implement an official Prometheus exporter into a Prometheus environment
- Kubernetes experience including kubectl
- Familiarity with Prometheus
- Comfortable poking around in a web GUI
- Terminal experience
The GitHub repository for this course is available here: https://github.com/cloudacademy/Apache-Prometheus-Exporter
Before we get started on this course, I do wanna say that there's an entire repo I've created for you, which you can clone and follow along with, should you choose to. It has all of the YAML files, Dockerfiles, and a list of instructions to follow along and create your own environment for this course.
Okay, before we jump into the terminal, we should talk about how the Apache Exporter works. What are its defaults? What is it written in? Are there flags? So let's jump into that now. And let's start off with this: it exports metrics from Apache's mod_status module.
So you have to have mod_status enabled for your Apache server via HTTP. It exposes those metrics on port 9117 with a URI of /metrics as its default. It's written in Go, and it accepts several flags.
It defaults to localhost for metrics gathering, and it's tailored best for a one-to-one Apache instance mapping. So, a one-to-one parity. You don't want one Apache Exporter scraping multiple Apache servers; it's just not ideal.
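As a quick reference, enabling mod_status in Apache typically looks something like this. This is a minimal sketch; your distribution's config layout and module paths may differ:

```apache
# httpd.conf (or a conf.d/ snippet) -- enable mod_status so the
# exporter has a status page to scrape
LoadModule status_module modules/mod_status.so

# ExtendedStatus adds per-request detail to the status output
ExtendedStatus On

<Location "/server-status">
    SetHandler server-status
    # Restrict access as appropriate for your environment
    Require local
</Location>
```

The exporter reads the machine-readable form of this page (the `?auto` variant), so the location just needs to be reachable from wherever the exporter runs.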
Talking about flags, we have our first one, which is insecure. This ignores the server certificate if we're using HTTPS. From there, we have the log level for our Apache Exporter. It defaults to info, with the usual Go log levels available.
Next, we have our scrape URI string. This URI is for our Apache sub-status page. Where are those Apache metrics going to be collected from, from the Apache server? After that we have our port string. So where are we going to expose those metrics to be consumed from?
Following that, we have our endpoint string. Which URI is Prometheus going to be scraping those metrics from? And lastly, we have our Apache Exporter version. We're going to be using all the defaults in this demo, but feel free to play around after we're all done if you want to test things out on your own. Let's jump into the terminal.
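Put together, an invocation with those flags spelled out might look something like this. The flag names below match the widely used apache_exporter project, but check `--help` on your version to confirm:

```shell
# A typical invocation, with the defaults described above made explicit.
# --insecure            skip TLS certificate verification (HTTPS targets only)
# --log.level           Go-style log levels: debug, info, warn, error
# --scrape_uri          where Apache's mod_status page lives
# --telemetry.address   address/port the exporter listens on
# --telemetry.endpoint  URI that Prometheus will scrape
apache_exporter \
  --log.level=info \
  --scrape_uri="http://localhost/server-status?auto" \
  --telemetry.address=":9117" \
  --telemetry.endpoint="/metrics"
```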
Okay, we're in the terminal. The first thing we're going to do is create the namespace that supports our Prometheus environment, as well as the deployment. So let's run those two commands now.
You can see several services have been created, as well as a namespace and the deployment. This is the namespace where we're going to be doing all of our work. This is great. But what if we wanted to navigate to the minikube IP that's associated with this service? Well, luckily we can simply type minikube service, the name of the service, and then the namespace with -n prometheus. This will open up our default web browser with our Prometheus server instance. Let's do that now.
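Those commands look something like this. The file and service names here are assumptions; use the ones from the course repo:

```shell
# Create the namespace, services, and the Prometheus + Apache Exporter
# deployment. File names are illustrative -- use the repo's YAML files.
kubectl apply -f prometheus-namespace.yaml
kubectl apply -f prometheus-deployment.yaml

# Open the Prometheus service in the default web browser.
# "prometheus-service" is a placeholder for the service name in the repo.
minikube service prometheus-service -n prometheus
```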
Voila, we have the new Prometheus UI. I'm actually a fan of the old Prometheus UI, so I'm gonna jump over there now and see if we have some metrics available. Great, it looks like we have the default Prometheus configuration set up. And as a matter of fact, we do.
So Prometheus is scraping itself right now, but what if we wanted to be scraping that Apache Exporter that's also included in the deployment? Well, we'll have to jump back over to the terminal and edit a config file that we're going to inject into our new deployment, called deployment version two. So let's go ahead and do that now.
Included in the repo is a config file that we're gonna be using to populate the config for Prometheus. But first, we need to grab the pod's IP from the deployment, so that we know where to direct Prometheus to scrape metrics from. So let's grab that IP now with a kubectl get pod -o wide command.
Here you see we have 172.17.0.6. That's the IP for this deployment within the cluster. So we're going to be telling Prometheus that it needs to scrape that IP on port 9117. This is where our Apache Exporter is exposing its metrics. Now we can delete the old deployment and create the new deployment.
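The scrape config we're injecting looks roughly like this. The ConfigMap and job names are placeholders; the target is the pod IP we just looked up, on the exporter's port:

```yaml
# ConfigMap carrying the prometheus.yml that deployment v2 will mount.
# Names are illustrative -- match them to the repo's YAML.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: prometheus
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: apache
        static_configs:
          # Pod IP from "kubectl get pod -o wide", exporter port 9117
          - targets: ['172.17.0.6:9117']
```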
Keep in mind that the old deployment is still occupying that pod's IP so we need to give it some time to delete before we create the new one, so that the IP address that we've provided in the config is the same IP address that the new deployment is going to take. Awesome.
Since our deployment is now deleted, we can kick off our new one that has the correct volumes and volume mounts. And let's verify that it has the exact same IP address that we had earlier. Awesome. So now that we have the correct IP address with the correct pod and deployment, we should be able to see our Apache metrics on our Prometheus server. So let's jump into our service and check that out now.
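The part of deployment version two that makes this work is the volume and volume mount wiring. A sketch of the relevant fragment, with illustrative names:

```yaml
# Deployment v2 fragment: mount the ConfigMap into the Prometheus
# container so it starts up with our scrape config.
spec:
  containers:
    - name: prometheus
      image: prom/prometheus
      args:
        # Point Prometheus at the mounted config file
        - --config.file=/etc/prometheus/prometheus.yml
      volumeMounts:
        - name: prometheus-config
          mountPath: /etc/prometheus/
  volumes:
    - name: prometheus-config
      configMap:
        name: prometheus-config
```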
If you're using minikube, remember you can just type minikube service followed by the service name, and then -n prometheus for the namespace. This will automatically open it up for us in our default web browser. Awesome.
So if I just simply type in apache, I should start seeing some metrics. Yep, there they are. And if I just do a quick query, I should get that IP address that we saw earlier, followed by the job specified in the YAML configuration in our ConfigMap.
Navigating around the web view is great, but what if we wanted to curl the API to see the job that we've created, so that we know it's scraping correctly? Well, it's as simple as running a curl against the URL provided above, followed by the NodePort number, then /api/v1/label/job/values. And we should see apache. Awesome. That's the only job we have, so that's it.
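That curl looks something like this. The NodePort shown is a placeholder; substitute whatever port your Prometheus service exposes:

```shell
# Query the Prometheus HTTP API for all configured job names.
# 30090 is a placeholder -- use the NodePort of your Prometheus service.
curl http://$(minikube ip):30090/api/v1/label/job/values
```

A successful response is a small JSON object whose data array should contain our apache job.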
Curling exposed services is easy, but what if we wanted to get the active Apache metrics that are on the Apache Exporter? Well, we'd have to run a pod inside the cluster. So here, this command is running a pod named curl with the curlimages/curl image, with an interactive terminal, removing the pod when it's done, never restarting, and passing in the URL of the Apache Exporter with the /metrics URI at the end.
Finally, I piped it to more because we're going to be getting a lot of metrics. So let's run this command now and see what we get.
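The steps just described can be sketched as a single command. The pod IP is the one from earlier; the image name reflects the commonly used curl image:

```shell
# Run a one-off curl pod inside the cluster to hit the exporter directly.
# --rm removes the pod when it exits; --restart=Never makes it a bare pod.
kubectl run curl -it --rm --restart=Never \
  --image=curlimages/curl -- \
  curl -s http://172.17.0.6:9117/metrics | more
```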
Here we can see the active metrics for the Apache Exporter. You'll notice Apache metrics as well as Go metrics, because the Apache Exporter also exposes Go runtime metrics for the exporter process itself. So we've curled our Prometheus API as well as our Apache Exporter's /metrics URI. But what if we wanted to see some actual metrics for our Apache web server?
Well, that's what this command here is going to do. We're gonna use an Apache benchmark tester image, and we're going to run a thousand requests at a concurrency of 20 against the URL from our minikube service and its NodePort. So let's run that now and see what we get.
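A sketch of that benchmark run, with the image name and NodePort as assumptions; use the exact command from the course repo:

```shell
# Fire 1000 requests at a concurrency of 20 against the Apache service.
# "jordi/ab" and port 30080 are illustrative -- use the repo's values.
kubectl run ab -it --rm --restart=Never \
  --image=jordi/ab -- \
  ab -n 1000 -c 20 http://$(minikube ip):30080/
```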
Now, I can't go too deep into the metrics provided by the Apache benchmark image, but what we can do is check out the Prometheus web server's GUI and see those metrics in real time. So let's jump over there now and see what it did to our metrics.
If we start typing in apache_cpuload, we should see the metrics for the CPU load. I'm going to jump over to the Graph tab and scale this down to within the last five minutes. And just like that, we have an increase of well over one and a half times on our CPU. If you wanna experiment and increase the number of requests, or the concurrency of those requests, feel free to modify the previous command's numbers to do just that.
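For reference, the kinds of queries I'm running here look like this. The metric names come from apache_exporter's output; adjust the range to taste:

```promql
# Raw CPU load gauge reported by Apache's mod_status
apache_cpuload

# Per-second request rate over the last five minutes
rate(apache_accesses_total[5m])
```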
We're gonna jump over and clean up our environment but you're more than welcome to keep it up and running and play around. So that does it for this course. I hope you've enjoyed it. And as always, don't forget to clean up your environment when you're done. It may take some time for the services to completely delete but rest assured they will be deleted within a minute or so.
Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in the networking and operations side of the traditional Information Technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan holds a number of specialties, including Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, AWS Solutions Architect, and a certification in Project Management.