AWS CloudWatch is a monitoring and alerting service that integrates with most AWS services like EC2 or RDS. It can monitor system performance in near real time and generate alerts based on thresholds you set.

The number of performance counters is fixed for any particular AWS service, but their thresholds are configurable. The alerts can be sent to system administrators through a number of channels.

Although most people think of CloudWatch as a bare-bones monitoring tool with a handful of counters, it’s actually more than that. CloudWatch can work as a good log management solution for companies running their workloads in AWS.

By “log management”, we mean CloudWatch:

  • Stores log data from multiple sources in a central location
  • Enforces a retention policy on those logs so they are available for a specific period
  • Offers a search facility to look inside the logs for important information
  • Generates alerts based on metrics you define on the logs

CloudWatch logs can come from a number of sources. For example:

  • Logs generated by applications like Nginx, IIS or MongoDB 
  • Operating system logs like syslog from EC2 instances
  • Logs generated by CloudTrail events
  • Logs generated by Lambda functions

Some Basic Terms

Before going any further, let’s talk about two important concepts.

CloudWatch Logs are arranged in what’s known as Log Groups and Log Streams. Basically, a log stream represents the source of your log data. For example, Nginx error logs streaming to CloudWatch will be part of one log stream. Java logs coming from app servers will be part of another log stream, database logs will form yet another stream, and so on. In other words, each log stream is like a channel for log data coming from a particular source.

Log groups are used to classify log streams together. A log group can have one or multiple log streams in it. Each of these streams will share the same retention policy, monitoring settings, and access control permissions. For example, your “Web App” log group can have one log stream for web servers, one stream for app servers and another for database servers. You can set a retention policy of, say, two weeks for this log group, and this setting will be applied to each of the log streams.
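The same structure can be created from the command line. Here is a minimal sketch with the AWS CLI; the group and stream names are illustrative, and the commands assume the CLI is configured with credentials that have CloudWatch Logs permissions:

```shell
# Create a log group, add a log stream to it, and apply a two-week
# retention policy to the group (names here are illustrative).
aws logs create-log-group  --log-group-name "Web-App"
aws logs create-log-stream --log-group-name "Web-App" --log-stream-name "web-servers"
aws logs put-retention-policy --log-group-name "Web-App" --retention-in-days 14
```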

The image below shows a log group and its log streams:

AWS CloudWatch

Amazon EC2 and AWS CloudWatch Logs

We will start our discussion with Amazon EC2 instances. There are three ways Amazon EC2-hosted applications can send their logs to CloudWatch:

  1. A script file can call AWS CLI commands to push the logs. The script file can be scheduled through an operating system job like cron
  2. A custom-written application can push the logs using the AWS CloudWatch Logs SDK or API
  3. The AWS CloudWatch Logs Agent or EC2Config service running on the machine can push the logs
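As a sketch of the first method, a cron-scheduled script could push new log lines with the AWS CLI. The log group, stream, and file names below are illustrative, and a real script would also have to track the sequence token each `put-log-events` call returns:

```shell
# Push the latest line of an application log to CloudWatch Logs.
# CloudWatch expects the timestamp in milliseconds since the epoch.
TIMESTAMP=$(date +%s%3N)   # GNU date: seconds + milliseconds
MESSAGE=$(tail -n 1 /var/log/myapp/app.log)
aws logs put-log-events \
    --log-group-name "MyApp" \
    --log-stream-name "app-server-1" \
    --log-events timestamp="$TIMESTAMP",message="$MESSAGE"
```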

Of these three methods, the third one is the simplest. This is a typical setup for many log monitoring systems. In this case, a software agent runs as a background service in the target EC2 instance, and automatically sends logs to CloudWatch. There are two prerequisites for this to work:

  1. The EC2 instance needs to be able to access the AWS CloudWatch service to create log groups and log streams in it and write to the log streams
  2. The EC2 instance needs to know what application it should monitor and how to handle the events logged by the application (for example, the EC2 instance needs to know the name and path to the log file and the corresponding log group / log stream names)

The first prerequisite is handled when an EC2 instance is either:

  • Launched with an IAM role that has these privileges or
  • Configured with the credentials of an AWS account that has these privileges (the account credentials are set in the agent’s configuration file)

Given that you can’t attach an IAM role to an existing EC2 instance, and it’s not a good idea to leave AWS account credentials exposed in plain-text configuration files, we strongly recommend launching EC2 instances with at least a “dummy” IAM role. This role can be modified later to include CloudWatch Logs privileges. In the image below, we have created one such role and assigned permissions to its policy:

AWS CloudWatch

AWS CloudWatch

Any EC2 instance assuming this role (EC2-CloudWatch_Logs) will now be able to send data to CloudWatch Logs.
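For reference, a minimal policy document granting those privileges might look like the following. This is a sketch based on the permissions the agent needs; it is written to /tmp here only so the example is easy to inspect:

```shell
# Minimal IAM policy sketch for the CloudWatch Logs agent: it allows the
# instance to create log groups/streams and write log events to them.
cat > /tmp/cloudwatch-logs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
EOF
```

The document could then be attached to the instance role with `aws iam put-role-policy`, passing the file via `--policy-document file:///tmp/cloudwatch-logs-policy.json` (the policy name you choose is up to you).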

The second prerequisite is handled by the EC2Config service or the CloudWatch Logs Agent’s configuration file. The configuration details can be modified later.

In the next two sections, we will see how Linux and Windows EC2 instances can send their logs to CloudWatch. To keep things simple, we will assume both instances were launched with the IAM role we just created.

The Linux machine will have a MongoDB instance running and the Windows box will have a SQL Server instance running. We will see how both MongoDB and SQL Server can send their logs to CloudWatch.

Sending Logs from EC2 Linux Instances

Sending application logs from Linux EC2 instances to CloudWatch requires the CloudWatch Logs Agent to be installed on the machine. The process is fairly straightforward for systems running Amazon Linux where you need to run the following command:
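On Amazon Linux, that command is:

```shell
sudo yum install -y awslogs
```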

This will install the agent through yum. Once installed, you need to modify two files:

  • /etc/awslogs/awscli.conf: modify this file to provide necessary AWS credentials (unless the instance was launched with an appropriate IAM role) and the region name where you want to send the log data
  • /etc/awslogs/awslogs.conf: edit this file to specify which log files you want to be streamed to CloudWatch
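Here is a sketch of what an awslogs.conf entry for an application log might look like. The log group name is illustrative, and the file is written to /tmp here only so the sketch can run without root:

```shell
# Illustrative awslogs.conf: [general] points at the agent's state file,
# and each additional section describes one log file to stream.
cat > /tmp/awslogs.conf <<'EOF'
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/nginx/error.log]
file = /var/log/nginx/error.log
log_group_name = Nginx_Error_Log_Group
log_stream_name = {instance_id}
datetime_format = %Y/%m/%d %H:%M:%S
EOF
```

The `{instance_id}` substitution makes each instance write to its own log stream within the group.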

Once the files have been modified, you can start the service:
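On Amazon Linux:

```shell
sudo service awslogs start
```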

Installing the CloudWatch Logs Agent on mainstream Linux distros like CentOS/RHEL or Ubuntu is somewhat different. Let’s consider an EC2 instance that is running RHEL 7.2 and was launched with our IAM role. Let’s assume the machine also has a vanilla installation of MongoDB 3.2. Looking at the MongoDB config file on the machine shows the default location of the log file:

If you tail the log file:

The log data will look something like this:
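A sketch of that, assuming a default installation (the exact entries will differ on your machine):

```shell
# Follow the MongoDB log as it grows. Each MongoDB 3.2 log line has the
# form: <iso8601-local timestamp> <severity> <component> [context] <message>
sudo tail -f /var/log/mongodb/mongod.log
```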

It’s the content of this file you want to send to CloudWatch.

As mentioned before, installing the agent in RHEL/CentOS or Ubuntu is slightly different than Amazon Linux. Here, you will have to download a Python script from AWS and run that as the installer.

Step 1. Run the following command as the root or sudo user:
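At the time of writing, AWS hosts the setup script at the following location:

```shell
curl -O https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py
```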

This will download the script in the current directory.

Step 2. Next, change the script’s file mode for execution:
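For example:

```shell
chmod +x ./awslogs-agent-setup.py
```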

Step 3. Finally, run the Python script:
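Passing the region on the command line saves one wizard prompt later; us-east-1 below is just an example:

```shell
sudo python ./awslogs-agent-setup.py --region us-east-1
```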

This will start the installer in a wizard-like fashion. It will install pip, then download the latest CloudWatch Logs agent and prompt you for different field values:

Press Enter to skip this prompt if you launched the instance with an IAM role with sufficient permissions.

Press Enter again to skip this prompt if your instance was launched with an IAM role.

You can skip this prompt as well if you specified the region name when you called the Python script.

Press Enter again to skip this prompt.

Specify the path and filename of the MongoDB log. For a default installation, it would be /var/log/mongodb/mongod.log

Instead of accepting the default log group name suggested, you can choose to enter a meaningful name. We chose “MongoDB_Log_Group” as the log group name.

Next, enter 3 in the prompt to choose a custom name for the log stream.

In the following prompt specify the log stream name. We used “MongoDB_Log_Stream” as the stream name.

In the next prompt, enter 4 to choose a custom timestamp format.

For MongoDB logs, the timestamp format is ISO8601-local.

Choose the first option (1) because you want the whole log file to be loaded first.

Finally, the wizard asks if you want to configure more log files.

By entering “y”, you can choose to send multiple log files from one server to different log groups and log streams. For this particular exercise we entered “N”. The wizard would then finish with a message like this:

– Configuration file successfully saved at: /var/awslogs/etc/awslogs.conf

– You can begin accessing new log events after a few moments at:<region-name>#logs:

– You can use ‘sudo service awslogs start|stop|status|restart’ to control the daemon.

– To see diagnostic information for the CloudWatch Logs Agent, see /var/log/awslogs.log

– You can rerun interactive setup using ‘sudo python ./ --region <region-name> --only-generate-config’

From the final messages in the wizard, you know where the config file is created (/var/awslogs/etc/awslogs.conf). If you look into this file, you will find the options chosen in the wizard have been added at the end of the file. If you think about automating the installation process, you can first create this file with appropriate details and then call the Python script. The command will be like this:
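A sketch of that non-interactive invocation; the region and config file path are examples, and the flag names are taken from the setup script’s documented options:

```shell
sudo python ./awslogs-agent-setup.py --region us-east-1 \
    --non-interactive --configfile /var/awslogs/etc/awslogs.conf
```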

As a final step, restart the Agent service:
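On RHEL/CentOS:

```shell
sudo service awslogs restart
```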

Looking in the CloudWatch Logs console will now show the log group and log stream created:

AWS CloudWatch

Browsing the log stream will show the log file has been copied:
AWS CloudWatch

To test, you can connect to the MongoDB instance and run some commands to create a database and add a collection. The commands and their output are shown below.
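A sketch of such a test session; the database and collection names are illustrative, and it assumes the mongo shell is on the path:

```shell
# Create a database and insert a document into a new collection; the
# resulting connection is what shows up in the MongoDB log.
mongo --eval 'db.getSiblingDB("testdb").people.insert({name: "cloudwatch test"})'
```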

The connection would be recorded in the MongoDB log file and flow on to CloudWatch log stream:

MongoDB log entries in CloudWatch log stream


This completes our basic introduction to AWS CloudWatch Logs. As you just saw, it’s really simple to make EC2 Linux instances send their logs to CloudWatch. In the next part of this three-part series, we will see how some other sources can also send their log data to CloudWatch. Feel free to send us your comments or questions on the post. By sharing our experience, we will all continue learning. If you want to try using some of what you just learned, you can work on one of the Cloud Academy hands-on labs: Introduction to CloudWatch. There is a 7-day free trial.

If you want to know more about performance monitoring with AWS CloudWatch, you can read this article from Nitheesh Poojary, also published on the Cloud Academy blog.

  • Azhar

    Excellent post buddy! A very important topic that was missing for a while, and you made it a lot simpler. Cheers buddy!

  • Jason

    Not to be critical of this extremely awesomely useful article :) but if you could include the Policy Document as code that can be copied/pasted that would be helpful so the reader doesn’t have to re-type it.

    Yeah, I know — First World Problems :)

  • Nick Smart

    Does the above approach send logs to CloudWatch in real time? If not, how can we have real-time monitoring of our application hosted on an EC2 instance?

    • Sadequl Hussain

      Hi Nick, yes, the logs are sent to CloudWatch automatically by the awslogs service as they are generated by the source application. If your application creates a log file, you need to provide its location when you configure the service, and it will monitor the log file. Whenever a new entry is added, it will be sent to CloudWatch Logs. Also, bear in mind the EC2 instance running your application needs to have access to CloudWatch to create log groups, log streams, etc. Hope this helps!

  • Hutger Hauer

    Hello Sadequl, I’m looking for a solution that allows me to centralize NGINX logs (in JSON format) in such a way that an external application can collect and analyze the data. Using this approach, can I send the logs to CloudWatch and have CloudWatch store them in an S3 bucket, so that an external application can access the logs in the bucket and analyze them?

    • Sadequl Hussain

      Hi Hutger,

      You don’t need to store CloudWatch logs in S3 to access them programmatically. Amazon CloudWatch exposes API endpoints which you can call from a programming language to extract the log stream data.

      See these two links here:

      1. “”
      2. “”

      Using this approach, you can also write a Lambda function (in Java, Node.js or Python) which will fire off in response to CloudWatch log events (as soon as a log event is added to the stream), send the log event somewhere else, and perform analysis on it.

      Hope this answers your question.

      • Hutger Hauer

        Hi Sadequi, thanks for your reply. I’ll check the documentation.