Any information that helps to secure your cloud infrastructure is of significant use to security engineers and architects. With AWS CloudTrail, you have the ability to capture all AWS API calls made by users and services.
Whenever an API request is made within your environment, AWS CloudTrail can track that request with a host of metadata and record it in a log, which is then sent to Amazon S3 for storage, allowing you to view historical data of your API calls.
Having this information has a number of uses from both a security and a day-to-day operational perspective, and it also supports compliance. Having an audit trail of requests that can be traced back to a user or service, and even to the IP address used, helps you maintain your required compliance levels.
This course provides a full explanation of the CloudTrail service, looking at what it does, how it does it, and what components and services it uses. It breaks down each of the configurable components, allowing you to see exactly how it works and to what degree it can be configured.
It dives into the permissions required to run and implement CloudTrail, covering roles and policies, along with an overview of the S3 bucket permissions required for log storage. There are also a number of demonstrations within the course showing first-hand how to configure Trails and set up various controls and permissions, giving you clear guidance on what to do.
CloudTrail logs are examined to show you exactly how API calls are recorded, and how this sensitive information can be encrypted using KMS and shared between AWS accounts.
If you have any feedback on this course, please let us know at support@cloudacademy.com.
Learning Objectives
- Understand what AWS CloudTrail is and how it works
- Understand permissions, trails, and logs in CloudTrail and how they are used
- Learn how to perform monitoring activities with the service
Intended Audience
- IT professionals responsible for cloud security: security consultants, security architects, security auditors, etc.
- Those studying for an AWS certification that requires knowledge of AWS CloudTrail
- Anyone with a general interest in AWS security
Prerequisites
To get the most out of this course, you should have a basic understanding of the following AWS services: Simple Storage Service (S3), Identity and Access Management (IAM), AWS CloudWatch, Simple Notification Service (SNS), and the Key Management Service (KMS).
Hello and welcome to this lecture on CloudTrail logs.
The logs are the output of the CloudTrail service, and they hold all of the information relating to the API calls that have been captured. As a result, it's important to know what you can do with these logs in order to maximize the benefit of the data they contain.
So what is a log file and what does it look like? Log files are written in JSON (JavaScript Object Notation) format, much like access policies within IAM and S3. This is a small section of a log file. Every time an API call is captured by the Trail it's associated with, an event is written to the log. Remember, a new event is written for each API call. New log files are created approximately every five minutes, but they are not delivered to the nominated S3 bucket until around 15 minutes after the API was called. So if you are expecting to see a log file for an API you called seven minutes ago, you might not see it for potentially another eight minutes. The log files are held by the CloudTrail service until final processing has been completed. Only then are they delivered to the S3 bucket and, optionally, to CloudWatch Logs.
When an event reflecting an API call is written to a log, a number of attributes are also written to the same event, capturing key data about that call, as you can see from this example. Without going through every attribute here, I just want to point out some of the more interesting ones. eventName refers to the name of the actual API that was called. eventSource refers to the service against which the API call was made. eventTime is the time that the API call was made. sourceIPAddress shows the source IP address of the requester who made the API call; this is a great piece of information when trying to isolate an attacker from a security perspective. userAgent is the agent method through which the request was made. Example values include signin.amazonaws.com, which is what we have in our example, and it simply means that a user made this request from within the AWS Management Console. You also have console.amazonaws.com, which is much the same; however, if this was displayed, it would mean that the request was made by the root user of the account. And there is also lambda.amazonaws.com, which would reflect that the request was made by AWS Lambda. Finally, userIdentity contains a larger set of attributes that provide information on the identity that made the API request.
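To illustrate these attributes, an event might look something like the following. This is a representative sketch only; the account ID, ARN, IP address, and other values here are placeholders rather than real data.

```json
{
  "eventVersion": "1.08",
  "userIdentity": {
    "type": "IAMUser",
    "principalId": "AIDAEXAMPLEPRINCIPAL",
    "arn": "arn:aws:iam::123456789012:user/Alice",
    "accountId": "123456789012",
    "userName": "Alice"
  },
  "eventTime": "2023-06-01T12:03:45Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "CreateBucket",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "203.0.113.10",
  "userAgent": "signin.amazonaws.com",
  "requestParameters": { "bucketName": "my-example-bucket" },
  "responseElements": null
}
```

In this sketch, a CreateBucket call (eventName) was made against the S3 service (eventSource), from the AWS Management Console (userAgent of signin.amazonaws.com), by the IAM user described in userIdentity.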
Once events have been written to the logs and then delivered and saved to S3, the log files are given a standard naming format, as follows. The first three elements of this naming structure are self-explanatory: the account ID, the name of the service delivering the log, CloudTrail, and the region that it came from. The next part relates to the date: the year, month and day. The T indicates that the next part is the time, reflecting the hour and minutes, and the Z simply means the time is in UTC. The unique string value is a random 16-character alphanumeric string that is simply used by CloudTrail as a unique file identifier, to ensure that the file doesn't get overwritten by another file with the same name. The file name format currently defaults to json.gz, which is a compressed gzip version of a JSON text file. Here is an example of a file name of an existing log file.
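For illustration only, a log file name follows this pattern; the account ID and the 16-character unique string below are placeholders:

```
123456789012_CloudTrail_us-east-1_20230601T1205Z_AbCdEfGhIjKlMnOp.json.gz
```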
Whilst we are looking at structures, let me also talk about the bucket structure where your logs are stored. You may think the logs are all stored in one folder within your S3 bucket; however, there is a lengthy but very useful folder structure, as follows. Firstly, you have your dedicated S3 bucket name that you selected during the creation of your Trail. Next is the prefix, which is also configured during Trail creation and is used to help you organize a folder structure for your logs corresponding to different Trails. Following this is a fixed folder name of AWSLogs, followed by the originating AWS account ID, then another fixed folder name of CloudTrail, indicating which service has delivered the logs, and after that the region name where the log file originated from, which is useful when you have Trails that apply to multiple regions. The last three folders show the year, month and day that the log file was delivered. As you can see, although there are multiple folders underneath your nominated S3 bucket, it does provide an easy navigation method when looking for a specific log file. This folder structure becomes even more useful if you have multiple AWS accounts delivering logs to the same S3 bucket.
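Putting that together, the full object key of a delivered log file might look something like this; the bucket name, prefix, account ID, and file name here are placeholders:

```
my-trail-bucket/my-prefix/AWSLogs/123456789012/CloudTrail/us-east-1/2023/06/01/123456789012_CloudTrail_us-east-1_20230601T1205Z_AbCdEfGhIjKlMnOp.json.gz
```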
Some organizations may be using more than one AWS account, and having CloudTrail logs stored in different S3 buckets across multiple accounts can be inconvenient in certain circumstances and requires additional administration to manage. Thankfully, AWS offers the ability to aggregate CloudTrail logs from multiple accounts into a single S3 bucket belonging to one of those accounts. This is why there is an account ID folder within your S3 bucket. Please note, however, that you cannot aggregate CloudTrail logs from multiple AWS accounts into CloudWatch Logs, which belongs to a single AWS account.
Having all your logs from all your accounts delivered to just one S3 bucket is a fairly simple process, with the end result allowing you to centrally manage all your CloudTrail logs. Let's take a look at how this solution is configured. Firstly, you need to enable CloudTrail by creating a Trail in the AWS account that you want all log files to be delivered to. Permissions then need to be applied to the destination S3 bucket, allowing cross-account access for CloudTrail; follow the instructions from lecture four on how to set bucket policy permissions. Once permissions have been applied, you need to edit the bucket policy and add an additional line for each AWS account requiring access under the resource attribute, in the section shown here.
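As an illustration of what that section looks like, a typical CloudTrail bucket policy contains a write statement whose Resource attribute lists one entry per AWS account. This is a sketch only; my-trail-bucket, my-prefix, and the account IDs 111111111111 and 222222222222 are placeholders for your own values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-trail-bucket"
    },
    {
      "Sid": "AWSCloudTrailWrite",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::my-trail-bucket/my-prefix/AWSLogs/111111111111/*",
        "arn:aws:s3:::my-trail-bucket/my-prefix/AWSLogs/222222222222/*"
      ],
      "Condition": {
        "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
      }
    }
  ]
}
```

Each additional AWS account that needs to deliver its logs to this bucket gets its own line in the Resource list of the write statement.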
Next, create a new Trail in your other AWS accounts and select to use an existing S3 bucket for the log files. When prompted, add the bucket name used in step one, and when alerted, accept the warning that you want to use a bucket from a different AWS account. An important point to make here when configuring the bucket selection is to ensure that you use the same prefix as the one you used when you configured the bucket in step one, unless you intend to edit the bucket policy to allow CloudTrail to write to the location of the new prefix you wish to use. When you have configured your Trail, click create, and your new Trail will now deliver its log files to the S3 bucket in the AWS account used in step one of this process. Again, this is a great solution that allows you to manage all of your CloudTrail logs in a single S3 bucket in one account. However, there may be users, such as system administrators who manage the other AWS accounts where the logs have come from, who might need to access the data within these logs.
So how would they gain access to the S3 bucket in a way that only allows them to access the CloudTrail logs that originated from their own AWS account? It can be done quite easily by configuring a few elements within IAM. Firstly, in the master account, IAM roles will need to be created for each of the other AWS accounts requiring read access. Secondly, a policy will need to be assigned to those roles, allowing access to the relevant AWS account's logs only. Lastly, users within the requesting AWS accounts will need to be able to assume their role to gain read access to their CloudTrail logs.
The easiest way to show you how to configure the permissions required is by a demonstration, whereby I shall perform the following steps. I'll create a new role and apply a policy to it that only allows access to AWS account B's folder in S3. I'll show the trust relationship between account A and account B. I'll then create a new user in account B, and then I'll create a policy and apply the AssumeRole permissions to this user, allowing them to assume the new role we created in account A. So let's take a look at how and where we apply these permissions.
Okay so as I just said the first thing we need to do is create a new role in our primary account. So if we go across to IAM which is under Security, Identity & Compliance and then once that's loaded we need to go across to roles and then create new role. So let's give this role a name. We'll call it Cross-account-cloudtrail. Click on next step.
We then need to select a role type and what we want to do is select the role for cross-account access because we'll be allowing users in another AWS account to access the log files in this primary AWS account. And this will set up the trust relationship between this account and then my secondary account. So for that we will select this top option of providing access between AWS accounts you own. Then next I'll need to enter the secondary account ID that I want to create the trust relationship with. So I'll just enter that number. Okay and then after you have entered your account ID, click on next step.
And now we need to attach a policy to this role. Prior to this demo I set up my own policy, which allows cross-account, read-only access from my secondary account to the bucket in this primary account. I'll explain this policy in a few moments and show exactly what it contains. And from here click on next step.
And this is just a review of the role. So we have the role name, the ARN, the Amazon resource name, the trusted entities, so this is the secondary account ID that I entered and then the actual policy and then that link that we can give to users in the secondary account to allow them to switch roles. So create role. And there we go. The Cross-account-cloudtrail role that we just created.
So let's take a look at this. Firstly, I'll show you the trust relationship. Because we added cross-account role access and then entered the secondary AWS account ID, we can see that this account is trusted by our primary account, which allows entities in that account to assume this role. Now, I mentioned earlier that I previously set up a policy with permissions in it, so let's take a look at that policy. I named it Cross Account Read Only for CloudTrail. If I show the policy, you can see it's only a very small, simple policy. We have an effect of allow, which will allow any S3 get and any S3 list command, so essentially read-only access, on the resource specified by this line. Now, this resource links to the bucket and folder where CloudTrail logs are delivered for our secondary account, as you can see here. So essentially what this policy does is allow read-only access to any folders within the secondary account's CloudTrail log folders. This account won't be able to access any other account's CloudTrail logs, which is important. So if we come out of this.
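As a sketch of the kind of policy used here, with my-trail-bucket, my-prefix, and the secondary account ID 222222222222 as placeholders, it looks something like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "arn:aws:s3:::my-trail-bucket/my-prefix/AWSLogs/222222222222/*"
    }
  ]
}
```

Because the Resource is scoped to the secondary account's AWSLogs folder, the role can read only that account's CloudTrail logs and nothing else in the bucket.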
So let's just have a quick recap of what we've achieved so far. We've created a role in our primary account for secondary-account access, and we've also assigned an access policy to this role in order for the secondary AWS account to access the relevant folder in S3. So now what we need to do is assign a user in the secondary account and then apply the permissions to that user to enable them to assume the new role in the primary account. So let's go ahead and do that.
Okay so I've now logged into the secondary account where I need to create a new user and assign the correct permissions. So to start with I'm going to set up a permission policy to assign to the user. So if I go down to Security, Identity & Compliance and select IAM. And then go across to policies and from here I want to create a new policy. And I am going to create my own policy, so I'm going to select the bottom option. I'm going to call this AssumeRoleforCloudTrail. My description will be assume role in primary AWS account. And for the policy document I'm just going to paste in a policy that I've already created. As you can see it's only a very small policy again and we have an allow effect that allows the AssumeRole action from the security token service against the following resource. And this resource links back to a role on our primary account where we created the role Cross-account-cloudtrail. So this policy will allow the user to assume this role in the primary account. So let's go ahead and create that policy. Let's validate it first and then create.
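As a sketch of that policy, with 111111111111 as a placeholder for the primary account ID, it looks something like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::111111111111:role/Cross-account-cloudtrail"
    }
  ]
}
```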
Now what we need to do is assign a user to that policy. I created a new user prior to this demo, so let's just find the new policy that we just created, and here it is at the bottom: AssumeRoleforCloudTrail. I'm going to attach a user, and I've called our user CloudTrailuser1, and then attach policy. And there we go. We now have one user attached to this policy.
So those are all the actions and steps necessary to allow a user in a secondary account to access CloudTrail log files that have been delivered to an S3 bucket in a primary account. The user does this by using the permission policy we just applied to them to assume the role in the primary account, and that role has a policy attached that allows S3 read access to their own account's CloudTrail logs.
We won't go through it again, but recall that you can use KMS to encrypt your log files to provide an additional layer of security. Remaining with the security aspect of your log files, CloudTrail allows you to enable a feature called log file integrity validation, which simply allows you to verify that your log files have remained unchanged since CloudTrail delivered them to your chosen S3 bucket. This is typically used for security and forensic investigations, where the integrity of the log files is critical to confirm that they have not been tampered with in any way. Log file validation is configured during the Trail creation process, as shown in the previous demonstration.
When a log file is delivered to your S3 bucket, a hash is created for it by CloudTrail. A hash value is a unique string of characters created from a data source, in this case the log file, and the hashing algorithm used by CloudTrail is SHA-256. In addition to hashing every log file created, CloudTrail creates a new file every hour called a digest file, which is used to help verify that your log files have not changed. This digest file contains details of all the logs delivered within the last hour, along with a hash for each of them. These files are stored within the same bucket as the log files; however, they are within their own folder for easier management. These digest files are then signed by the private key of a public and private key pair. When it comes to verifying the integrity of your log files, the public key of the same key pair is used to programmatically check that the logs have not been tampered with in any way.
Verification of the log files can only be achieved programmatically and not via the console. The folder structure for the digest files is very similar to that of the CloudTrail logs, as you can see, and the digest files are clearly distinguishable by their own CloudTrail-Digest folder. Using the AWS CLI, the validation itself can be performed by issuing the validate-logs command, which also accepts a number of optional parameters.
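As a guide, the command takes the following form; the trail ARN, start time, bucket, and prefix here are placeholders for your own values:

```
aws cloudtrail validate-logs \
    --trail-arn arn:aws:cloudtrail:us-east-1:111111111111:trail/example-trail \
    --start-time 2023-06-01T00:00:00Z

# Optional parameters: --end-time, --s3-bucket, --s3-prefix, --verbose
```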
This has now taken us to the end of this lecture. Coming up next I'll explain how you can use CloudTrail and CloudWatch together as a monitoring solution.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to Cloud reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.