Course Introduction
Amazon CloudWatch
Audit Logs
AWS CloudTrail
AWS Config
AWS Logging
Cost Management
AWS Systems Manager
What is the AWS Data Provider for SAP?
In this section of the AWS Certified: SAP on AWS Specialty learning path, we introduce you to strategies for operating and monitoring SAP workloads on AWS.
Learning Objectives
- Understand how to use Amazon CloudWatch, AWS CloudTrail, and AWS Config to manage and monitor SAP infrastructure on AWS
- Describe various AWS cost management tools including Cost Explorer, AWS Cost and Usage Reports, and AWS Budgets
- Understand how to automate patch and state operations for your SAP instances using AWS Systems Manager
- Explain how the AWS Data Provider for SAP is used to help gather performance-related data across AWS services
Prerequisites
The AWS Certified: SAP on AWS Specialty certification has been designed for anyone who has experience managing and operating SAP workloads. Ideally you’ll also have some exposure to the design and implementation of SAP workloads on AWS, including migrating these workloads from on-premises environments. Many of the exam questions require a solutions-architect level of knowledge across a range of AWS services. All of the AWS Cloud concepts introduced in this course will be explained and reinforced from the ground up.
Hello and welcome to this lecture, where we will look at how AWS CloudTrail interacts with Amazon CloudWatch and Amazon SNS to create a monitoring solution. In addition to S3, the logs from CloudTrail can be sent to CloudWatch Logs, which allows metrics and thresholds to be configured, which in turn can utilize SNS notifications for specific events relating to API activity. CloudWatch allows any event created by CloudTrail to be monitored, and this enables a whole host of security monitoring checks. A great example is being notified when certain API calls request significant changes to your security groups or network access control lists within your VPC.

Other examples of checks that are common within organizations include the following.

API calls relating to starting, stopping, rebooting, and terminating EC2 instances. If instances are being created that shouldn't be, your AWS costs could rise dramatically and quickly. Also, if instances are being rebooted or stopped, this could have a severe impact on your services if they are not configured as a highly available and resilient solution.

Changes to security policies within IAM and S3. If changes are being made to your policies that shouldn't be, access can be inadvertently removed for authorized users and granted to unauthorized users, having a massive impact on operational services. Even a minor change to a policy can pave the way for an untrusted user to exploit the error.

Failed login attempts to the AWS Management Console. Monitoring failed attempts here can help to prevent unauthorized access at your environment's front door.

API calls that result in failed authorization. CloudTrail not only tracks successful API calls where the correct authorization was met by the authenticated identity, it also tracks unsuccessful API requests, which would most likely be due to the permissions applied. Special attention should be given to these unsuccessful attempts, as they could indicate a malicious user trying to gain access. However, they could also come from a legitimate user trying to access a resource they should have access to for their role, but with the incorrect permissions applied in their associated IAM policy.
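As a sketch of how such checks can be expressed, the following CloudWatch Logs filter patterns match some of the scenarios above. The `#` lines are annotations for readability only (they are not part of the pattern syntax), and the specific field values, such as the error message, are assumptions based on the standard CloudTrail event format:

```
# Failed AWS Management Console sign-ins:
{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }

# Security group changes within a VPC:
{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }

# EC2 instance lifecycle events:
{ ($.eventName = RunInstances) || ($.eventName = StartInstances) || ($.eventName = StopInstances) || ($.eventName = RebootInstances) || ($.eventName = TerminateInstances) }

# API calls that failed authorization:
{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }
```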
To configure CloudTrail to use CloudWatch, you must first create a trail. Once your trail has been created, you can then configure it to use an existing CloudWatch Logs log group or have CloudTrail create a new one. Having CloudTrail create a new one for you is recommended if it is your first time doing this, as CloudTrail will take care of all of the necessary roles, permissions, and policies required. You may be wondering why a role and policy are required, so let me give you a high-level overview of the simple process that takes place when sending CloudTrail logs to CloudWatch. When a log file is created by CloudTrail, it is sent to your selected S3 bucket and your chosen CloudWatch Logs log group, assuming your trail has been configured for this feature. To allow CloudTrail to deliver these logs to CloudWatch, CloudTrail must have the correct permissions, and these are gained by assuming a role with the relevant permissions needed to run two CloudWatch Logs APIs: CreateLogStream, which enables CloudTrail to create a CloudWatch Logs log stream in the log group, and PutLogEvents, which allows CloudTrail to deliver CloudTrail events to that log stream. CloudTrail then delivers the log events to the CloudWatch Logs log stream.
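For reference, here is a minimal sketch of how the same association could be made programmatically with boto3. The trail name, account ID, region, and log group name are placeholders, and the role is assumed to already exist with the two permissions just described:

```python
import boto3

# Attach a CloudWatch Logs log group to an existing trail. CloudTrail
# assumes the given role to call CreateLogStream and PutLogEvents.
cloudtrail = boto3.client("cloudtrail")

cloudtrail.update_trail(
    Name="my-trail",  # placeholder trail name
    CloudWatchLogsLogGroupArn=(
        "arn:aws:logs:us-east-1:111122223333:log-group:CloudTrail/Demo:*"
    ),
    CloudWatchLogsRoleArn=(
        "arn:aws:iam::111122223333:role/CloudTrail_CloudWatchLogs_Role"
    ),
)
```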
When using the AWS Management Console, you can have CloudTrail create this role for you, along with the correct policy. By default, the role is called CloudTrail_CloudWatchLogs_Role. For those that are curious, the policy for this role looks as shown below. It's important to point out that CloudWatch Logs has a size limitation of 256 kilobytes on the events that it can process. Therefore, any events larger than 256 kilobytes will not be sent to CloudWatch by CloudTrail.
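A representative version of this policy is shown here; the region, account ID, and log group name are placeholders, and the exact statement IDs and resource scoping may differ slightly from what the console generates:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailCreateLogStream",
      "Effect": "Allow",
      "Action": ["logs:CreateLogStream"],
      "Resource": [
        "arn:aws:logs:us-east-1:111122223333:log-group:CloudTrail/Demo:log-stream:*"
      ]
    },
    {
      "Sid": "AWSCloudTrailPutLogEvents",
      "Effect": "Allow",
      "Action": ["logs:PutLogEvents"],
      "Resource": [
        "arn:aws:logs:us-east-1:111122223333:log-group:CloudTrail/Demo:log-stream:*"
      ]
    }
  ]
}
```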
Now that you have your logs with the associated events being sent to CloudWatch, you must then configure CloudWatch to perform analysis of the CloudTrail events within those log files. This is done by configuring and adding metric filters to the log group within CloudWatch. These metric filters allow you to search for and count a specific value or term within the events in your log file, which then allows customizable thresholds to be applied against them. When creating these metric filters, you must create a filter pattern, which determines exactly what you want CloudWatch to monitor and extract from your files. These filter patterns are fully customizable strings, but as a result a very specific pattern syntax is required. So, if you're creating these for the first time, make sure you understand the correct syntax.
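As a sketch, the metric filter created in the demonstration that follows could be defined programmatically like this; the log group, filter, namespace, and metric names mirror the demonstration but are otherwise arbitrary:

```python
import boto3

# Create a metric filter that counts CloudTrail events whose source IP
# address matches the given value, publishing the count as a custom metric.
logs = boto3.client("logs")

logs.put_metric_filter(
    logGroupName="CloudTrail/Demo",
    filterName="SourceIPAddressFilter",
    filterPattern='{ $.sourceIPAddress = "2.218.11.188" }',
    metricTransformations=[
        {
            "metricNamespace": "Demo",
            "metricName": "IPAddress",
            "metricValue": "1",  # count each matching event once
        }
    ],
)
```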
Just to reiterate what we have spoken about so far, I want to provide a demonstration on how to edit an existing trail to configure it to send logs to CloudWatch Logs. I will then configure a metric filter with the associated metric pattern, and finally, I will set up an SNS alert to notify me when a particular threshold is met. So, let's take a look.
Start of demonstration
Okay, so what I need to start with is going into CloudTrail to edit an existing trail to enable CloudWatch Logs. So, if I go down to Management Tools and click on CloudTrail and then across to Trails, as you can currently see, under CloudWatch Logs log group, there's no log group selected. So, if we go into the trail and then scroll down to CloudWatch Logs, click on Configure, and there we can get CloudTrail to automatically set up this group and it'll create the necessary roles and permissions, etc. So, let's call this CloudTrail/Demo and then click on Continue. So, we've given it a name and here it just gives a message to say that for CloudTrail to deliver events and logs to CloudWatch Logs, it needs to assume a role with permissions to run two API calls, which are these two here. And if we go down into the details, you can see that the IAM role that it's going to use is the CloudTrail_CloudWatchLogs_Role and we'll ask it to create a new policy. And here's the policy document.
So, go down to Allow, and then if we scroll down to our CloudWatch Logs section, you can now see that we have a log group created in CloudWatch called CloudTrail/Demo. If we now go across to CloudWatch and click on Logs on the left-hand side, we can see the log group that was just created by CloudTrail: CloudTrail/Demo. Now, if we go into our log group and select it, you'll see this log stream, which is the incoming stream of events being sent from CloudTrail. As we've only just started, there are only a few events coming in here, so you might want to wait a few minutes before setting up your metric filters, to give you more of a test pattern to search on. So, what I'll do is leave it a couple of minutes for some more events to start streaming in before we set up our metric filters, just so we have something to search on.
Okay, so I've left it a few minutes, so let's go back into the log group, and you can see we've now got a couple of streams, and if we go into these, we can see there's a lot more events. So, if we go back a couple of pages, back to our log group, we now need to create our metric filters to allow us to define what we want to search on within our logs. So, if we select the tick next to our log group and then go up to Create Metric Filter, here within the metric filter we need to define a filter pattern. Now, as explained earlier, the filter pattern defines what we're actually searching for within our logs. For this example, I'll keep it fairly simple: I'm going to search for any API call that's been made from my machine, so from my IP address. For that, I need to enter the following filter pattern, { $.sourceIPAddress = "2.218.11.188" }, where 2.218.11.188 is my IP address. Now we can check that the filter pattern is okay using the Test Pattern box here, which runs the filter against the sample log data shown in this box, with the output appearing below. So, all we need to do is click on Test Pattern, and we can see at the bottom here that it found 47 matches out of 50 events in the sample log. So, we know that the syntax is okay for this filter pattern, so I'm going to go ahead and assign this metric.
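Incidentally, the same validation can be done programmatically via the CloudWatch Logs TestMetricFilter API. Here is a minimal sketch using boto3, with a trimmed, hypothetical CloudTrail event as the sample data:

```python
import boto3
import json

# Validate a filter pattern against sample log events before creating
# the metric filter. The event below is a simplified, hypothetical
# CloudTrail record containing only the fields the pattern inspects.
logs = boto3.client("logs")

sample_event = json.dumps({
    "eventName": "DescribeInstances",
    "sourceIPAddress": "2.218.11.188",
})

response = logs.test_metric_filter(
    filterPattern='{ $.sourceIPAddress = "2.218.11.188" }',
    logEventMessages=[sample_event],
)
print(response["matches"])  # the sample events that matched the pattern
```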
And we can see up here that we've got our filter name and our filter pattern. I'm going to create a new namespace for this metric, which I'll call Demo, and the metric name will be IPAddress. Then what we need to do is click on Create Filter. Now, as you can see, our filter has been created and we have the details on this screen. What we can do at this point is create a CloudWatch alarm with an SNS notification, so we can be notified if a certain threshold is met. So, let's go ahead and do that.
So, the first thing that we need to do is add a name, so I'm going to call this SourceIPAddress, and the description will be 'Too many calls from my IP'. Now, I'm going to set the threshold to 30. So, whenever my IP address appears as the source IP address 30 or more times within one consecutive five-minute period, I want the alarm to move into the ALARM state. And I want to be notified, so I'm going to enter a new notification list and give it a new topic, SourceIPAddressAlarm, with the notification sent to myself. As we can already see, with the current data the metric has already breached the threshold once, but it has since dropped back below it, so we'll see how this goes, and we'll create the alarm. And this is a message just to say that I need to subscribe to that AWS notification, which I can do in just a few moments. So, if we go across to our Alarms, we can see that we have our SourceIPAddress alarm in the state of OK.
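For reference, a minimal sketch of the same alarm and notification created programmatically might look like this; the topic name and alarm settings mirror the demonstration, and the email address is a placeholder:

```python
import boto3

# Create the SNS topic, subscribe an email endpoint (the recipient must
# confirm the subscription via email), and wire a CloudWatch alarm to it.
sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

topic = sns.create_topic(Name="SourceIPAddressAlarm")
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="email",
    Endpoint="me@example.com",  # placeholder email address
)

cloudwatch.put_metric_alarm(
    AlarmName="SourceIPAddress",
    AlarmDescription="Too many calls from my IP",
    Namespace="Demo",
    MetricName="IPAddress",
    Statistic="Sum",
    Period=300,                # one five-minute period
    EvaluationPeriods=1,       # one consecutive period
    Threshold=30,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[topic["TopicArn"]],
)
```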
So, at the minute, it's currently below the threshold of 30. As soon as it goes above that, it will alarm and I will get a notification. Now, over the past few minutes I've been generating some activity within the Management Console, and as we can now see, the alarm has been triggered. We can see that the metric just crossed the threshold, and I've received an email notification to say that the alarm is now in the ALARM state. If we take a quick look at that email, we can see that the threshold of 30 was crossed with a data point of 33. So, that is how you set up CloudTrail to use CloudWatch, with the inclusion of SNS, to create alarms against API activity.
End of demonstration
Stuart has been working within the IT industry for two decades, covering a huge range of topic areas and technologies, from data center and network infrastructure design to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.