DVA-C02 Introduction
Amazon CloudWatch
AWS CloudTrail
AWS CloudFormation
AWS Logging
Options for Operating Programmatically with AWS
Using the AWS Command Line Interface
AWS Systems Manager
AWS Secrets Manager
AWS AppConfig
AWS Cloud Development Kit (CDK)
This course provides detail on the AWS Management & Governance services relevant to the AWS Certified Developer - Associate exam.
Learning Objectives
- Learn how AWS AppConfig can reduce errors in configuration changes and prevent application downtime
- Understand how the AWS Cloud Development Kit (CDK) can be used to model and provision application resources using common programming languages
- Get a high-level understanding of Amazon CloudWatch
- Learn about the features and use cases of the service
- Create your own CloudWatch dashboard to monitor the items that are important to you
- Understand how CloudWatch dashboards can be shared across accounts
- Understand the cost structure of CloudWatch dashboards and the limitations of the service
- Review how monitored metrics go into an ALARM state
- Learn about the challenges of creating CloudWatch Alarms and the benefits of using machine learning in alarm management
- Know how to create a CloudWatch Alarm using Anomaly Detection
- Learn what types of metrics are suitable for use with Anomaly Detection
- Create your own CloudWatch log subscription
- Learn how AWS CloudTrail enables auditing and governance of your AWS account
- Understand how Amazon CloudWatch Logs enables you to monitor and store your system, application, and custom log files
- Explain what AWS CloudFormation is and what it’s used for
- Determine the benefits of AWS CloudFormation
- Understand what each of the core components is and what it is used for
- Create a CloudFormation Stack using an existing AWS template
- Learn what VPC flow logs are and what they are used for
- Determine options for operating programmatically with AWS, including the AWS CLI, APIs, and SDKs
- Learn about the capabilities of AWS Systems Manager for managing applications and infrastructure
- Understand how AWS Secrets Manager can be used to securely store and manage application secrets
In this quick lesson, I want to go over a powerful feature of Amazon CloudWatch: CloudWatch subscriptions. Subscriptions give you access to a real-time feed of log events from CloudWatch Logs and can deliver those events to other services such as Kinesis streams, Kinesis Data Firehose, or Lambda for custom processing and analysis.
At its base level, Amazon CloudWatch Logs provides a way to monitor, store, and access the log files from your various Amazon EC2 instances, AWS CloudTrail, and other sources. It allows you to centralize the logs from all of your disparate systems and applications in a single place. This by itself is already very handy, because from there we can easily view our logs, search through them, or filter them based on specific criteria.
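For example, here is a minimal boto3 sketch of searching a log group for errors; the log group name is a hypothetical placeholder:

```python
import boto3

logs = boto3.client("logs")

# Search a hypothetical log group for events containing the word ERROR.
response = logs.filter_log_events(
    logGroupName="/my-app/production",
    filterPattern="ERROR",
)

for event in response["events"]:
    print(event["timestamp"], event["message"])
```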
But with Amazon CloudWatch subscriptions, we can start to do all of that in real time.
Getting Started With Amazon CloudWatch Subscriptions
The first step in using Amazon CloudWatch subscriptions is to create the receiving resource that will take in the logs produced by your other systems. This resource could be a Kinesis stream, for example, which is able to handle high volumes of data arriving quickly.
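As a rough sketch, creating that receiving stream with boto3 might look like this; the stream name is a hypothetical placeholder:

```python
import boto3

kinesis = boto3.client("kinesis")

# "cloudwatch-log-stream" is a hypothetical name for the receiving resource.
kinesis.create_stream(StreamName="cloudwatch-log-stream", ShardCount=1)

# Wait until the stream is ACTIVE before pointing a subscription filter at it.
kinesis.get_waiter("stream_exists").wait(StreamName="cloudwatch-log-stream")
```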
To direct the right information into the receiving resource (the Kinesis stream in this example), we need to set up a subscription filter. The subscription filter defines a pattern that CloudWatch Logs uses to search the logs for matching event data and deliver that data to the receiving resource.
Each subscription filter contains a few key elements that you need to set up and be aware of.
The first element is the Log Group Name: the log group you are associating the subscription filter with. Any log events created within this log group will be subject to filtering, and any matching events will be sent on to the destination service.
The second element is the Filter Pattern itself. This describes how CloudWatch Logs should interpret the data in each log event and how it should filter that data, including restricting what data makes it through to the destination resource.
Here are a few examples of these filter patterns (a quick way to test patterns like these follows the list):
- Matching
- You can match everything, meaning all log events get passed on to the destination resource
- You can do single-term matching, only passing on log events that contain a term like "ERROR"
- You can include a term AND exclude another by prefixing the excluded term with '-'. With the previous example, you could exclude any errors that exited by using ERROR -Exiting
- You can require matching multiple terms. This could look like "ERROR Exception", which would match any message containing both of those words, like [ERROR] Terrible Exception has Occurred!
- Or Match
- Allows you to match any of multiple terms with the '?' operator. For example, you could use "?ERROR ?WARN" to find any messages that were errors or warnings and send them onwards to the destination resource
- You can also OR-match on a specific word positioned within a space-delimited string. The format is a little different from the above, though: requiring a specific word at a specific spot in a string looks like [w1=ERROR, w2]. You can set this up with an OR operator as well, which takes the form [w1=ERROR || w1=WARN, w2], the double pipe denoting OR here instead of the '?' operator; you can thank AWS for that inconsistent choice
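Because metric filters and subscription filters share the same pattern syntax, the TestMetricFilter API is a convenient way to sanity-check a pattern before wiring anything up. A minimal boto3 sketch, with hypothetical sample messages:

```python
import boto3

logs = boto3.client("logs")

# Dry-run a pattern against sample messages; subscription filters use
# the same pattern syntax as metric filters.
result = logs.test_metric_filter(
    filterPattern="ERROR -Exiting",
    logEventMessages=[
        "[ERROR] Terrible Exception has Occurred!",
        "[ERROR] Exiting after a fatal error",
        "[INFO] All good here",
    ],
)

# Only the first message matches: it contains ERROR but not Exiting.
for match in result["matches"]:
    print(match["eventMessage"])
```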
Alright, back to the subscription filter elements.
The third element of the subscription filter is the Destination ARN. This is the ARN of the destination resource we are going to send all matched data to; again, that is probably a Kinesis stream, or maybe even a Lambda function.
The fourth element is the Role ARN. This is the IAM role that grants CloudWatch Logs the permissions required to put the filtered data into the destination resource. Everything in AWS needs permission to access everything else, and this is part of that system.
And the final element is the Distribution method. This element applies when you use Kinesis as the destination resource. By default, the data will be grouped by log stream, but you can have it distribute log data randomly instead for a more even distribution.
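Putting those five elements together, here is a hedged boto3 sketch of creating a subscription filter; every name, account ID, and ARN below is a hypothetical placeholder:

```python
import boto3

logs = boto3.client("logs")

# All names, account IDs, and ARNs here are hypothetical placeholders.
logs.put_subscription_filter(
    logGroupName="/my-app/production",  # 1. the log group to filter
    filterName="errors-to-kinesis",     # a name for the filter itself
    filterPattern="ERROR",              # 2. the filter pattern
    destinationArn="arn:aws:kinesis:us-east-1:111111111111:stream/cloudwatch-log-stream",  # 3. destination ARN
    roleArn="arn:aws:iam::111111111111:role/CWLtoKinesisRole",  # 4. role CloudWatch Logs assumes
    distribution="ByLogStream",         # 5. or "Random" for a more even spread
)
```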
Logs that are sent to a receiving service through a subscription filter are Base64 encoded and compressed with the gzip format.
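A consumer therefore has to reverse both steps before it can read the events. A minimal Python sketch, assuming the payload is the base64-encoded awslogs data field that a Lambda function receives from a subscription:

```python
import base64
import gzip
import json

def decode_log_payload(raw: str) -> dict:
    """Reverse the Base64 encoding and gzip compression applied by CloudWatch Logs."""
    return json.loads(gzip.decompress(base64.b64decode(raw)))

# For a Lambda destination, the payload arrives as event["awslogs"]["data"];
# the decoded dict contains "logGroup", "logStream", and the matched "logEvents".
```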
Cross-account log data sharing
Amazon CloudWatch subscription log data can also be shared across accounts. Being able to collaborate with the owners of different AWS accounts is vital for multi-account structures and organizations.
Unlike the earlier destinations we talked about, where you could send your log data to Lambda, Kinesis Data Firehose, or Kinesis streams, cross-account sharing is limited solely to Kinesis streams, so bear that in mind when designing your architectures around this problem.
To share log data across accounts you will need to establish a log data sender and recipient:
The log data sender gets the data from the data source and lets CloudWatch Logs know when that data is ready to be sent to a specified destination.
The log data recipient sets up a destination and lets CloudWatch Logs know that it wants to receive log data.
Both of these elements require you to create a CloudWatch Logs destination. A destination is made up of some familiar elements:
- A Destination Name
- A Target ARN
- A Role ARN
- An Access Policy
They are very similar to the elements of the subscription filter we spoke of earlier.
There is an important caveat to be aware of: the log group and the destination must be in the same AWS region. However, the Kinesis stream that the destination points to can be in a different region.
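To make the recipient side concrete, here is a hedged boto3 sketch of creating the destination and its access policy; the account IDs, names, and ARNs are hypothetical placeholders. The sender then creates an ordinary subscription filter whose destination ARN is the ARN returned below:

```python
import json
import boto3

logs = boto3.client("logs")  # run these calls in the RECIPIENT account

# Create a destination pointing at the recipient's Kinesis stream.
destination = logs.put_destination(
    destinationName="shared-log-destination",
    targetArn="arn:aws:kinesis:us-east-1:999999999999:stream/cloudwatch-log-stream",
    roleArn="arn:aws:iam::999999999999:role/CWLtoKinesisRole",
)

# Grant the sender account permission to subscribe to this destination.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "111111111111"},  # hypothetical sender account ID
        "Action": "logs:PutSubscriptionFilter",
        "Resource": destination["destination"]["arn"],
    }],
}
logs.put_destination_policy(
    destinationName="shared-log-destination",
    accessPolicy=json.dumps(policy),
)
```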
Analyzing and visualizing your data
Once you have your data coming in the way you expect it to, be that from another account or straight from the source, you can manipulate or visualize that information any way you see fit.
Many people use Kibana dashboards to get a handle on data visualization, as Kibana is pretty simple to use and there are a ton of examples out there. Kibana runs on top of Elasticsearch in order to provide its services, yet you may have noticed there has been no mention of Elasticsearch until now.
CloudWatch Logs subscriptions do not natively support pushing data into Elasticsearch, but we can move it into Kinesis as we have already shown. From Kinesis, we can export the data over to Elasticsearch using a CloudWatch Logs Subscription Consumer.
AWS says the following about this - “The CloudWatch Logs Subscription Consumer is a specialized Amazon Kinesis stream reader (based on the Amazon Kinesis Connector Library) that can help you deliver data from Amazon CloudWatch Logs to any other system in near real-time using a CloudWatch Logs Subscription Filter.”
Here is a link to the GitHub repository where you can download a sample project containing this specialized stream reader. I wish this were a native option within the service itself, but for the moment you will have to get it from the link here.
But with this addition, we have the power to really see into our data and the ability to make key decisions based on it.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.