
Centralized Log Management with AWS CloudWatch: Part 3 of 3

Explore the power of centralized AWS CloudWatch logs

This is the third and final installment of our coverage on AWS CloudWatch Logs. In the first two parts, we saw how different sources of logs can be redirected to CloudWatch. Here, we will see what we can do with those logs once they are centralized.

Configuring Log Retention

Many businesses and government agencies need to keep their application logs available for a specified period of time, irrespective of their business value. This is commonly necessary for compliance with government legislation or industry regulations and practices.
CloudWatch logs can be retained either indefinitely or for a fixed period of up to ten years, which is a safe retention window for most purposes. Retention is set at the log group level, meaning all log streams under a log group share the same retention setting.
By default, log groups are created with a “Never Expire” retention setting. Clicking that link opens a dialog box where you can set the retention period. As you can see below, the granularity starts at one day and goes up to ten years:
[Screenshot: Creating an AWS CloudWatch log retention policy]
Once set, any log data older than the retention period will be deleted automatically.
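If you manage many log groups, setting retention one at a time in the console gets tedious. Below is a minimal sketch of the same operation using the AWS SDK for Python (boto3); the log group name is a hypothetical placeholder:

    import boto3

    logs = boto3.client("logs")

    # Retain matching log data for 30 days; accepted values range
    # from 1 day up to 3653 days (ten years).
    logs.put_retention_policy(
        logGroupName="/sql-server/logs",  # hypothetical log group name
        retentionInDays=30,
    )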

Creating Metric Filters, Alerts, and Dashboards

CloudWatch Logs would not be of much use if they were only a place for capturing and storing data. The real power of log management comes when you find critical clues, visualize trends, or receive proactive alerts from thousands of lines of logged events.
Searching for a particular piece of information in a large log file can be a daunting task. This is especially true for applications that generate logs in JSON or other custom formats. Although there are both free and commercial log management solutions on the market, AWS CloudWatch offers comparable benefits in this area. These include:

  • Ability to search within a date and time range
  • Ability to search through log files with space-separated fields or JSON-formatted events
  • Ability to search through multiple fields within a JSON-formatted event
  • Simplified search technique with text patterns, comparison, and logical operators
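
To illustrate, filter patterns range from plain text matches to field-level conditions. The field names and values below are made up, but the forms follow CloudWatch’s documented filter pattern syntax:

    Error
        matches the literal text "Error" anywhere in an event
    { $.level = "ERROR" }
        matches JSON events whose "level" field equals "ERROR"
    { $.latency > 1000 && $.level = "WARN" }
        combines comparisons with logical operators
    [ip, user, username, timestamp, request, status_code = 5*, bytes]
        matches space-separated events whose status code starts with 5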

We will start with metric filters and see how they can be used to search within log data. A metric filter is essentially a search criterion whose matching events are published to a custom metric.
If you have worked with CloudWatch before, you may remember there are out-of-the-box metrics for services like EC2, EBS or RDS. With metric filters, we can create our own metric for a log group.
In our test case, we decided to search for the text “Error” in our SQL Server log files, which are being sent to CloudWatch. For more information on how this was done, refer to the second part of this series. Metric filter search strings are case sensitive, so “error” and “ERROR” would return two different search results. Also, the searched string can appear anywhere within a logged event; we can’t tell CloudWatch to match only at the beginning, middle, or end of a line.
To generate errors, we created a SQL Server Agent job that would simulate an error condition every 15 seconds, and log a message.
The following steps show how to create a metric filter.
Step 1. Select the SQL Server log group in the CloudWatch Logs console and click the “Create Metric Filter” button:
[Screenshot: Creating a metric filter on AWS CloudWatch Logs]
Step 2. In the next screen:

  • Provide the search string (in our case, it was “Error”) in the “Filter Pattern” field
  • Select the log stream name from the “Select Log Data to Test” dropdown list. In our case, the log stream name was “SQL_Server_Logs”
  • Click on the “Assign Metric” button

[Screenshot: Creating a filter pattern for an AWS CloudWatch metric filter]
Step 3. In the following screen:

  • Provide a name for the filter (we chose the default name provided)
  • Provide a namespace for the metric
  • Provide a name for the metric
  • Provide a value for the metric value field. In our case, it was 1, which means the metric is incremented by 1 every time there is a match for the searched string. The metric then shows a running count of occurrences of that particular text.

[Screenshot: Populating AWS CloudWatch metric filter properties]
Once you click on the “Create Filter” button, the metric will be created. This is visible from the log group’s properties:
[Screenshot: Metric filter link in an AWS CloudWatch log group]
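The same filter can also be created programmatically. Here is a minimal boto3 sketch that reuses the metric name and namespace from our example; the log group and filter names are hypothetical placeholders:

    import boto3

    logs = boto3.client("logs")

    # Publish a data point of 1 to the custom metric every time an
    # event in the log group matches the case-sensitive pattern "Error".
    logs.put_metric_filter(
        logGroupName="/sql-server/logs",       # hypothetical log group name
        filterName="SQLServerErrorFilter",     # hypothetical filter name
        filterPattern="Error",
        metricTransformations=[
            {
                "metricName": "SQL_Server_Errors",
                "metricNamespace": "LogMetrics",
                "metricValue": "1",
            }
        ],
    )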
Clicking on the “1 filter” link will take you to the metric’s page:
[Screenshot: Metric filter page in AWS CloudWatch]
From here, clicking on the metric’s name (“SQL_Server_Errors” in our example) takes you to the custom metric’s page. When you select the metric, its graph is shown in the lower half of the screen just like any other CloudWatch metric:
[Screenshot: CloudWatch custom metric filter and graph]
From the graph in this particular image, you can see there have been some occurrences of the phrase “Error” in the SQL Server logs. You can create an alarm for this metric directly from the graph; the alarm can send you an e-mail when the number of occurrences exceeds a specified threshold.
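The alarm, too, can be scripted. A sketch, assuming an existing SNS topic with e-mail subscribers (the topic ARN, alarm name, and threshold are placeholders):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Fire when more than 10 "Error" matches are recorded
    # within a single 5-minute period.
    cloudwatch.put_metric_alarm(
        AlarmName="SQLServerErrorAlarm",       # hypothetical alarm name
        Namespace="LogMetrics",
        MetricName="SQL_Server_Errors",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=10,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",       # no matches should not alarm
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder ARN
    )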
You can also create dashboards from metrics you create on your log groups. These dashboards essentially show the same type of graph we just saw. If you think about it, CloudWatch log management now offers a whole new way of systems monitoring where you can have:

  • Separate dashboards for each of your application streams (databases, web servers, app servers, middleware and so on)
  • Alerts configured for critical errors. Whenever there is an error trapped in the original log file, it will be streamed to CloudWatch and an alert will fire. The alert can then send you a text message or an e-mail.

The following 5 steps show how we created a dashboard for plotting the trend of errors in our SQL Server log.
Step 1. Click on the “Dashboards” link from the CloudWatch console. This will open up an empty page. From here, click on the “Create Dashboard” button, then provide a name for the dashboard.
[Screenshot: AWS CloudWatch dashboard screen]
[Screenshot: Providing a name for the CloudWatch dashboard]
Step 2. Once the empty dashboard is created, click on its name from the CloudWatch Dashboards console. In the next screen, click on the “Add widget” button with the dashboard selected:
[Screenshot: Adding a widget to an AWS CloudWatch dashboard]
Step 3. A dialog box appears which gives you two options: one to add a metric graph, the other to add a text widget. With the metric graph selected, click on the “Configure” button:
[Screenshot: Choosing a widget type for an AWS CloudWatch dashboard]
Step 4. From the “Add Graph” screen, choose the namespace from the custom metrics drop-down list. In our case, it was “LogMetrics.”
[Screenshot: Adding a graph to an AWS CloudWatch dashboard]
Step 5. In the next screen, select the metric name from the drop-down list and click on the “Add widget” button. This will create the dashboard as shown below. It can be resized and its time axis can be scaled for different intervals. Once you are happy with how it looks, save the dashboard.
[Screenshot: The finished AWS CloudWatch dashboard]
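Dashboards can also be defined as code, which makes them reproducible across accounts. Below is a minimal boto3 sketch that plots our custom metric; the dashboard name, region, and widget geometry are illustrative:

    import json
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # A single metric-graph widget charting the custom error metric.
    dashboard_body = {
        "widgets": [
            {
                "type": "metric",
                "x": 0, "y": 0, "width": 12, "height": 6,
                "properties": {
                    "metrics": [["LogMetrics", "SQL_Server_Errors"]],
                    "stat": "Sum",
                    "period": 300,
                    "region": "us-east-1",     # placeholder region
                    "title": "SQL Server Errors",
                },
            }
        ]
    }

    cloudwatch.put_dashboard(
        DashboardName="SQLServerErrors",       # hypothetical dashboard name
        DashboardBody=json.dumps(dashboard_body),
    )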

Other Use Cases

It’s also possible to use CloudWatch Logs as a data source for other AWS services like S3, Lambda, or Elasticsearch. Sometimes you may want to export data to S3 for further analysis with a separate tool, or load it into a Big Data workflow. The export process is fairly simple: select the log group from the CloudWatch Logs console and select the “Export data to Amazon S3” option from the “Action” menu:
[Screenshot: Exporting AWS CloudWatch logs]
[Screenshot: Exporting AWS CloudWatch logs to S3]
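The export can be started programmatically as well. A sketch, assuming the destination bucket already has a bucket policy that lets CloudWatch Logs write to it (all names are placeholders):

    import time
    import boto3

    logs = boto3.client("logs")

    # Export the last 24 hours of the log group to S3.
    # CloudWatch Logs expects timestamps in milliseconds since the epoch.
    now_ms = int(time.time() * 1000)

    logs.create_export_task(
        taskName="sql-server-log-export",      # hypothetical task name
        logGroupName="/sql-server/logs",       # hypothetical log group name
        fromTime=now_ms - 24 * 60 * 60 * 1000,
        to=now_ms,
        destination="my-log-archive-bucket",   # placeholder bucket name
        destinationPrefix="cloudwatch/sql-server",
    )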
You can use the same menu to stream logs to AWS Lambda or Elasticsearch. These services can further analyze or process the data, or pass it on to other downstream systems. For example, a Lambda function can subscribe to an AWS CloudWatch log group and send its data to a third-party log manager whenever a new record is added.
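Under the hood, streaming to Lambda is a subscription filter on the log group. A minimal sketch; the function ARN is a placeholder, and the function must separately grant invoke permission to the CloudWatch Logs service principal:

    import boto3

    logs = boto3.client("logs")

    # Deliver every new log event (an empty pattern matches everything)
    # to the Lambda function for processing or forwarding.
    logs.put_subscription_filter(
        logGroupName="/sql-server/logs",       # hypothetical log group name
        filterName="forward-to-log-manager",   # hypothetical filter name
        filterPattern="",                      # empty pattern = match all events
        destinationArn="arn:aws:lambda:us-east-1:123456789012:function:forward-logs",  # placeholder
    )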

Conclusion

AWS CloudWatch has come a long way since its inception, and CloudWatch Logs will probably only improve in future releases. Despite the obvious benefits to AWS customers, some limitations remain:

  • CloudWatch Logs doesn’t offer rich search functionality; for example, you can’t drill down on one or more searched fields.
  • There is no way to correlate events from multiple log streams. Searching an Nginx log stream may show that a web server became unresponsive at a certain date and time, but it won’t show that this happened because the database server went offline at the same moment. A system administrator still has to investigate manually when troubleshooting a situation like this.
  • There is no integration with some key AWS services like RDS, Redshift, or CloudFormation. Logs from these services remain accessible only from their own consoles. Logs from recently added services like AWS CodeCommit or EC2 Container Service are also not sent to CloudWatch.
  • It’s time-consuming to create trend analysis graphs from raw log data. You must manually create metric filters for each important phrase or term, and then create individual graphs from those metrics.
  • There is no way to break down the types of messages in a log stream, or to arrange log entries by fields such as severity, client IP, or hostname. A pie chart showing such a breakdown is standard in most commercial solutions.

Many organizations serious about log management will probably build their own syslog server, or use a third-party solution, like Loggly or Splunk. However, it’s still possible to continue using AWS CloudWatch Logs by integrating it with a third-party tool. Most of these solutions already have a CloudWatch connector that can be used to ingest logs from it. For companies or engineering teams still not using a centralized log manager, CloudWatch offers a viable, low-cost alternative.
Cloud Academy offers a hands-on lab, Introduction to CloudWatch, that may be useful for applying many of the concepts covered here. Cloud Academy also offers a 7-day free trial so you can explore its courses, labs, quizzes, and learning paths; the resources expand weekly, so it is worth checking back regularly. If you have not already, we encourage you to review the previous two posts in this series, and we hope you have gained valuable insight from this multi-part coverage. Your comments and feedback are welcome.

Written by

Sadequl Hussain is an IT pro based in Sydney, Australia. He comes from a strong database administration background and has more than 15 years of experience in development, database management, training, and technical writing. Sadequl also holds a number of vendor certifications, including one from AWS. He loves working with cloud technologies, NoSQL / Big Data databases, automation toolsets, open source technologies, and Windows / Linux system administration. When he is not doing any of these, Sadequl loves to spend time with his young family.
