SAA-C02 Exam Prep
This section of the Solution Architect Associate learning path introduces you to the core storage concepts and services relevant to the SAA-C02 exam. We start with an introduction to AWS storage services, examine the options available, and learn how to select and apply AWS storage services to meet specific requirements.
- Obtain an in-depth understanding of Amazon S3 - Simple Storage Service
- Get both a theoretical and practical understanding of EFS
- Learn how to create an EFS file system, manage EFS security, and import data in EFS
- Learn about EC2 storage and Elastic Block Store
- Learn about the services available in AWS to optimize your storage
Hello and welcome to this lecture, where I am going to look at what server-access logging is. In a nutshell, when server-access logging is enabled on a bucket, it captures details of requests that are made to that bucket and its objects. Logging is important when it comes to security and root-cause analysis following incidents, and it can also be required to conform to specific audit and governance certifications.
Server-access logging, however, is not guaranteed and is conducted on a best-effort basis by S3. The logs themselves are collated and sent every few hours, or potentially sooner. There is no hard and fast rule that dictates that every request will be captured and that you will receive a log for a specific request within a set time frame.
Enabling logging
Enabling access logging on your buckets is a very simple process using the S3 Management Console.
Enable server-access logging on an existing bucket
Firstly, select your bucket, and from the Properties tab you will see the Server-access logging tile. By default, this setting is disabled, as you can see. To enable it, simply select the tile and you will be presented with the following screen. Select Enable logging, and this gives you two options to complete the configuration. Firstly, you need to select a target bucket. This target bucket will be used to store any logs created by enabling server-access logging on your source bucket, and it must be in the same region as the source bucket. For management and organization, you can additionally add a target prefix that S3 will add to the logs from your source bucket. When you have selected your target bucket and added an optional prefix, select Save.
You can also enable logging on your bucket during its creation. Again, you must select a target bucket and an optional prefix.
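The same configuration can also be applied programmatically. Below is a minimal sketch using boto3's put_bucket_logging call; the prefix and function name are illustrative, the target bucket name is taken from the example later in this lecture, and the client would typically be created with boto3.client('s3').

```python
# Sketch: enabling server-access logging programmatically.
# This is the configuration structure that boto3's put_bucket_logging expects;
# the prefix value here is an illustrative placeholder.
logging_config = {
    "LoggingEnabled": {
        "TargetBucket": "s3deepdivelogging",  # must be in the same region as the source
        "TargetPrefix": "logs/",              # optional prefix, for organization
    }
}

def enable_logging(s3_client, source_bucket, config):
    """Apply the server-access logging configuration to the source bucket."""
    s3_client.put_bucket_logging(
        Bucket=source_bucket,
        BucketLoggingStatus=config,
    )
```

Passing the client in keeps the sketch testable without credentials; in practice you would call enable_logging(boto3.client("s3"), "my-source-bucket", logging_config).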
To allow S3 to write access logs to a target bucket, it will, of course, require specific permissions. These permissions will require write access for a group known as the Log Delivery group, which is a pre-defined Amazon S3 group used to deliver log files to your target buckets. If your access logging is configured using the management console, then enabling logging automatically adds the Log Delivery group to the ACL (Access Control List) of the target bucket, granting the relevant access. However, if you were to configure the access logging using the S3 API or AWS SDKs, then you would need to configure these permissions manually; more information on this process can be found here.
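When configuring this manually via an SDK, the grant the console would otherwise add can be sketched as follows. The group URI is the documented Log Delivery group identifier; the API-level permission names shown (WRITE plus READ_ACP) are my assumption about how the console's "Write and Read" grants map to ACL permission values, so treat this as a sketch rather than a definitive policy.

```python
# The pre-defined Amazon S3 Log Delivery group, identified by URI in ACLs.
LOG_DELIVERY_GROUP = "http://acs.amazonaws.com/groups/s3/LogDelivery"

# Assumed mapping of the console's grants to ACL permission values:
# WRITE lets S3 deliver log objects; READ_ACP allows reading the bucket's ACL.
log_delivery_grants = [
    {"Grantee": {"Type": "Group", "URI": LOG_DELIVERY_GROUP}, "Permission": "WRITE"},
    {"Grantee": {"Type": "Group", "URI": LOG_DELIVERY_GROUP}, "Permission": "READ_ACP"},
]

def grant_log_delivery(s3_client, target_bucket, owner):
    """Apply the Log Delivery grants to the target bucket's ACL.

    `owner` is the {"ID": ...} of the bucket owner, which an ACL update
    must always include.
    """
    s3_client.put_bucket_acl(
        Bucket=target_bucket,
        AccessControlPolicy={"Grants": log_delivery_grants, "Owner": owner},
    )
```

Note that put_bucket_acl replaces the whole ACL, so a real implementation would first fetch the existing grants with get_bucket_acl and append to them.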
Following the example above, if I look at the Access Control List under the Permissions tab of the target bucket ‘s3deepdivelogging’ I can see that the Log Delivery Group has both Write and Read access to the bucket.
Before I continue, there are some points regarding the configuration of server-access logging that you need to be aware of. Firstly, and as I’ve already mentioned, both the source and target buckets should be in the same region, and it’s a best practice to use different buckets for each. Also, the permissions of the Log Delivery group can only be assigned via Access Control Lists and not through bucket policies, so when setting these permissions manually via an SDK, you must update the ACL. Finally, if you have encryption enabled on your target bucket, access logs will only be delivered if this is set to SSE-S3 (server-side encryption managed by S3), as encryption with KMS (Key Management Service) is not supported.
When the logs start arriving in the target bucket, their names will follow a standard naming pattern. In this example of S3 access logs, I entered a target prefix of ‘logs’, and so all of the logs will start with that prefix. Following any prefix that has been set, the naming convention is as follows: YYYY-mm-DD-HH-MM-SS-UniqueString This defines the year, followed by the month, followed by the day, then the time in hours, minutes, and seconds, and finally a unique string to avoid duplication of log names.
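Assuming the convention above, a delivered log key can be broken into its parts with a short regex. This is an illustrative sketch; the example key and its unique string are made up to match the pattern, not taken from a real bucket.

```python
import re

# Matches TargetPrefix + YYYY-mm-DD-HH-MM-SS-UniqueString, as described above.
KEY_PATTERN = re.compile(
    r"^(?P<prefix>.*?)"              # any target prefix set on the source bucket
    r"(?P<date>\d{4}-\d{2}-\d{2})-"  # YYYY-mm-DD
    r"(?P<time>\d{2}-\d{2}-\d{2})-"  # HH-MM-SS
    r"(?P<unique>[0-9A-F]+)$"        # unique string avoiding name collisions
)

def parse_log_key(key):
    """Split a log object key into prefix, date, time, and unique string."""
    m = KEY_PATTERN.match(key)
    return m.groupdict() if m else None
```

For example, parse_log_key("logs2021-03-15-21-04-55-A1B2C3D4E5F6789A") yields the prefix "logs", the date "2021-03-15", the time "21-04-55", and the unique string.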
Let me now take a look at the contents of one of these log files to understand the information that they contain.
This is an example of a single entry in one of the logs, which is made up of space-delimited fields.
Let me break this down into each section so you can see how it’s constructed and what each element represents:
Bucket owner - Represents the canonical user ID of the owner of the Source bucket. The canonical user ID is used for cross-account access via bucket policies.
Bucket - This shows the name of the bucket related to the request.
Time - This is a timestamp of the request in UTC (Coordinated Universal Time).
Remote IP Address - Represents the internet address of the identity carrying out the request.
Requester - For authenticated users, this field will show the IAM identity. For any unauthenticated users a hyphen (-) would be displayed instead.
Request ID - A random string to identify each request.
Operation - This will display the operation of the request that was carried out.
Key - The "key" part of the request, URL encoded, or if no key parameter is used then a hyphen will be displayed as in this example. A hyphen in any field of the request indicates that the available data was not known or was not applicable for the request.
Request URI - This represents the Request-URI element of the HTTP request.
HTTP Status - This displays the HTTP status returned from the request as a numeric value.
Error Code - If an error was experienced, then S3 will return the error code received.
Bytes Sent - The number of bytes sent as a response.
Object Size - The size of the object in question in the request.
Total Time - Measured in milliseconds, this represents how long the request took, from receipt of the request to the last byte of the response being sent.
Turn-Around Time - This shows how long it took S3 to process the request.
Referer - The value is taken from the HTTP Referer header; in this case, none was present, and so a hyphen is shown as the value.
User-Agent - This shows the value taken from the HTTP user-agent header.
Version ID - If present, this will show the Version ID of the request.
Host Id - The x-amz-id-2 or Amazon S3 extended request ID. The x-amz-id-2 header is a token that is used together with the x-amz-request-id header to help AWS troubleshoot problems.
Signature Version - This will show which signature version was used to authenticate the request.
Cipher Suite - If SSL was used it will show which cipher suite was used. If HTTP was used, then a hyphen would be shown instead.
Authentication Type - This shows the type of authentication used for the request.
Host Header - Represents the endpoints used to connect to Amazon S3 in the request.
TLS Version - This shows which version of TLS was used by the client.
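Putting the fields above together, a log entry can be split mechanically. Below is a minimal sketch in Python: the regex treats a bracketed timestamp or a quoted string as a single field, and the sample entry is fabricated to illustrate the format (none of the values are real).

```python
import re

# The 24 fields described above, in delivery order.
FIELD_NAMES = [
    "bucket_owner", "bucket", "time", "remote_ip", "requester", "request_id",
    "operation", "key", "request_uri", "http_status", "error_code",
    "bytes_sent", "object_size", "total_time", "turnaround_time", "referer",
    "user_agent", "version_id", "host_id", "signature_version", "cipher_suite",
    "auth_type", "host_header", "tls_version",
]

# A token is a [bracketed timestamp], a "quoted string", or a bare value.
TOKEN = re.compile(r'\[[^\]]*\]|"[^"]*"|\S+')

def parse_log_entry(line):
    """Map one space-delimited log entry to named fields (raw tokens kept)."""
    return dict(zip(FIELD_NAMES, TOKEN.findall(line)))

# Illustrative entry; values are made up to match the format, not real data.
sample = (
    '79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be '
    'mysourcebucket [15/Mar/2021:21:04:55 +0000] 192.0.2.3 '
    'arn:aws:iam::123456789012:user/alice 3E57427F33A59F07 REST.GET.OBJECT '
    'photos/cat.jpg "GET /mysourcebucket/photos/cat.jpg HTTP/1.1" 200 - 2662 '
    '2662 70 10 "-" "curl/7.68.0" - HostIdExample= SigV4 '
    'ECDHE-RSA-AES128-GCM-SHA256 AuthHeader mysourcebucket.s3.amazonaws.com TLSv1.2'
)
```

A production parser would also strip the surrounding quotes and brackets and convert the numeric fields; this sketch keeps the raw tokens so each maps directly onto the field descriptions above.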
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 90+ cloud-related courses reaching over 140,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.