Monitor Like a DevOps Pro: Build A Log Aggregation System in AWS
698 students completed the lab in ~1h:20m
Total available time: 2h:0m
330+ students rated this lab!
Modern cloud environments are increasingly complex distributed systems with numerous software components. The challenge of maintaining all these moving parts and tracking changes in your AWS systems keeps growing, but there are solutions.
- How can we understand, at a high level, what is happening in our cloud?
- Do we have a way to track usage trends over time?
- Can we debug any issues that might arise?
- Can we search through our logs without combing through files on many disks?
Yes, we can! We use a sophisticated tool called a Log Aggregation System, which gathers operational information and logs from across our whole cloud. Log aggregation is an advanced DevOps technique that enables us to quickly search our logs and graph any trends that arise in our structured log data.
In this Lab, we will create a distributed, scalable Log Aggregation System within AWS, running on the AWS Elasticsearch Service. This Log Aggregation System will be able to ingest as many CloudWatch Logs stream events as we want, generated by AWS EC2 instances, Lambda functions, databases, and anything else we want to submit log events from.
Follow these steps to learn by building helpful cloud resources
Log In to the Amazon Web Services Console
Your first step to start the lab experience
Navigate to Your Cloud's Lambda
Before you launch your Log Aggregation System, you will be using a simple AWS Lambda function to generate logs that you want to aggregate. In this step, we navigate to that Lambda.
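The lab provides the Lambda for you, but a log-generating handler can be as small as the sketch below. Everything here is an assumption for illustration (the function name, the `requestType` field, and the log shape are hypothetical, not the lab's actual code); the one real behavior it relies on is that anything a Lambda prints to stdout is captured as a CloudWatch Logs event.

```python
import json
import random

def handler(event, context=None):
    """Hypothetical Lambda handler that emits one structured JSON log line
    per invocation. In AWS, each print() becomes a CloudWatch Logs event."""
    request_type = event.get("requestType", random.choice(["GET", "PUT", "DELETE"]))
    print(json.dumps({"level": "INFO", "requestType": request_type}))
    return {"statusCode": 200, "requestType": request_type}

# Local smoke test; in AWS, the Lambda console's Test button drives this.
result = handler({"requestType": "GET"})
print(result["statusCode"])  # 200
```

Structured (JSON) log lines like these are what make the later Kibana visualizations possible, since each field can be indexed and charted on its own.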
Make Some Logs
Now that we have navigated to the Lambda we will use to generate CloudWatch Logs, we need to run some test invocations to generate said logs.
See Logs Manually
After generating some CloudWatch Logs events, we will take a moment in this step to review the data and the interfaces available to us when using CloudWatch Logs without an ELK Stack performing aggregation.
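Conceptually, browsing logs by hand means scanning each event's message for terms of interest. The sketch below imitates that idea in plain Python; note that CloudWatch's real filter-pattern syntax is much richer (quoted phrases, JSON field selectors, numeric comparisons), and the sample events here are invented for illustration.

```python
def matches(pattern_terms, message):
    """Naive stand-in for a CloudWatch Logs filter pattern:
    an event matches when every term appears in its message."""
    return all(term in message for term in pattern_terms)

events = [
    {"timestamp": 1, "message": "INFO requestType=GET handled request"},
    {"timestamp": 2, "message": "ERROR requestType=PUT timeout"},
]
errors = [e for e in events if matches(["ERROR"], e["message"])]
print(len(errors))  # 1
```

This works for one log group at a time, which is exactly the limitation that motivates aggregating everything into one searchable store in the next steps.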
Launch the Elasticsearch Domain
Now that we have spent time reviewing how to use CloudWatch Logs in the most basic manner, and the model that CloudWatch Logs uses, we can move forward by launching the AWS Elasticsearch Service Domain, which is the main component of our ELK Stack.
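The lab creates the domain through the console, but for orientation, a minimal domain definition looks roughly like the JSON below, which could be fed to `aws es create-elasticsearch-domain --cli-input-json file://domain.json`. The domain name, version, instance type, and volume size are assumptions chosen as plausible lab-scale values, not the lab's required settings.

```json
{
  "DomainName": "log-aggregation",
  "ElasticsearchVersion": "7.10",
  "ElasticsearchClusterConfig": {
    "InstanceType": "t3.small.elasticsearch",
    "InstanceCount": 1
  },
  "EBSOptions": {
    "EBSEnabled": true,
    "VolumeType": "gp2",
    "VolumeSize": 10
  }
}
```

A single small instance is fine for a lab; a production log cluster would use multiple data nodes across availability zones.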
Send CloudWatch Logs to Elasticsearch
After creating a running AWS Elasticsearch Service Domain for the Log Aggregation System, we need to publish the log events from the CloudWatch Logs group that records our Lambda's output into the new domain.
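Under the hood, CloudWatch Logs delivers subscribed events to a forwarder as a base64-encoded, gzip-compressed JSON payload under the `awslogs.data` key; the forwarder decodes it before indexing into Elasticsearch. The decode step can be sketched as below (the sample payload is hand-built for the demo; real ones arrive from the subscription filter):

```python
import base64
import gzip
import json

def decode_cwl_event(event):
    """Decode the gzip+base64 payload that CloudWatch Logs
    delivers to a subscription-filter target."""
    raw = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(raw))

# Build a sample payload in the same shape CloudWatch Logs uses.
sample = {
    "logGroup": "/aws/lambda/demo",
    "logEvents": [{"id": "1", "timestamp": 0, "message": "hello"}],
}
event = {"awslogs": {"data": base64.b64encode(
    gzip.compress(json.dumps(sample).encode())).decode()}}

decoded = decode_cwl_event(event)
print(decoded["logEvents"][0]["message"])  # hello
```

Each decoded payload carries the log group, log stream, and a batch of log events, which is what ends up as searchable documents in the ELK Stack.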
Event Discovery and Search
In this step, we will produce some more test events, which will now be aggregated into the ELK Stack we just launched. Then, we will try out the Discover and search functionality in the system.
Visualize Aggregated Events
After trying the Discovery functionality of the ELK Stack, we will build a stacked area chart of request types to our Lambda API over time, using Kibana's Visualization functionality.
Add the Visualization to a Dashboard
Since we now know how to create Visualizations in Kibana on our ELK Stack, in this step we add the previous time-series visualization to a reusable Kibana Dashboard, so we can return to it any time we need to without reconfiguring anything.