Course Introduction
Amazon CloudWatch
Audit Logs
AWS CloudTrail
AWS Config
AWS Logging
Cost Management
AWS Systems Manager
What is the AWS Data Provider for SAP?
This course is part of the following learning path
In this section of the AWS Certified: SAP on AWS Specialty learning path, we introduce you to strategies for operating and monitoring SAP workloads on AWS.
Learning Objectives
- Understand how to use Amazon CloudWatch, AWS CloudTrail, and AWS Config to manage and monitor SAP infrastructure on AWS
- Describe various AWS cost management tools including Cost Explorer, AWS Cost and Usage Reports, and AWS Budgets
- Understand how to automate patch and state operations for your SAP instances using AWS Systems Manager
- Explain how the AWS Data Provider for SAP is used to help gather performance-related data across AWS services
Prerequisites
The AWS Certified: SAP on AWS Specialty certification has been designed for anyone who has experience managing and operating SAP workloads. Ideally you’ll also have some exposure to the design and implementation of SAP workloads on AWS, including migrating these workloads from on-premises environments. Many exam questions require a solutions-architect level of knowledge across many AWS services. All of the AWS Cloud concepts introduced in this course will be explained and reinforced from the ground up.
Resources Referenced
Web distribution log file format
RTMP distribution log file format
Transcript
Hello, and welcome to this lecture focusing on the access logs generated by Amazon CloudFront. Amazon CloudFront is AWS's content delivery network that speeds up distribution of your static and dynamic content through its worldwide network of edge locations. When a user requests content that you're hosting through Amazon CloudFront, the request is routed to the closest edge location, which provides the lowest latency and delivers the best performance. When CloudFront access logs are enabled, you can record the request from each user requesting access to your website and distribution. As with S3 access logs, these logs are also stored on Amazon S3 for durable and persistent storage. There are no charges for enabling logging itself; however, as the logs are stored in S3, you will be charged for the storage used in S3.
The logging process takes place at the edge location and on a per-distribution basis, meaning that no data will be written to a log that belongs to more than one distribution. For example, logs for distribution ABC will be saved in a different log file from those of distribution DEF. When multiple edge locations are used for the same distribution, a single log file is generated for that distribution and all edge locations write to that single file.
The log files capture data over a period of time, and the number of log files generated depends on the number of requests received by Amazon CloudFront for that distribution. It's important to know that these log files are not created or written to on S3. S3 is simply where they are delivered once the log file is full. Amazon CloudFront retains these logs until they are ready to be delivered to S3. Depending on the size of these log files, this delivery can take between one and 24 hours.
When these log files are delivered they use a standard naming convention as follows. So let's say, for example, you had the following settings: the bucket name was access-logs, the prefix was web-app-a, and you had the following distribution ID. Then the naming convention for the log would look something like this.
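As a rough illustration of that convention, the snippet below builds an example log object key, assuming the standard CloudFront format of <prefix><distribution ID>.<YYYY-MM-DD-HH>.<unique ID>.gz; the distribution ID and unique ID shown are hypothetical placeholders, not values from the lecture.

```python
# A minimal sketch of the access log naming convention, assuming the standard
# CloudFront format <prefix><distribution ID>.<YYYY-MM-DD-HH>.<unique ID>.gz.
bucket = "access-logs"
prefix = "web-app-a/"
distribution_id = "E2EXAMPLE123"  # hypothetical distribution ID
unique_id = "a1b2c3d4"            # random string CloudFront appends to avoid collisions

log_key = f"{prefix}{distribution_id}.2024-01-01-12.{unique_id}.gz"
print(f"s3://{bucket}/{log_key}")
# s3://access-logs/web-app-a/E2EXAMPLE123.2024-01-01-12.a1b2c3d4.gz
```

Let me now show you a very simple demonstration on how to enable logging for your CloudFront distribution.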
Start of demonstration
So setting up access logs for your CloudFront distributions is very simple and easy to do. So let's go into CloudFront. I'll just select an existing distribution here, and then if you click on distribution settings and under the general tab you select edit, and then if we scroll down these settings here you'll see a section where it starts referring to logging. And at the moment I have logging off. So to enable logging I simply click on On, and then I select the bucket in S3 where I want the access logs to reside, so I'm going to select CloudFront Access Logs, which is an existing bucket I have set up for this. Now here I can add a log prefix if I want to, if I've got different distributions, etc. I'm just going to leave that blank for this demonstration. And here we can turn cookie logging on or off, which will log all cookie data within the request, and it's as simple as that. Then once you're happy with that, you just click on Yes to confirm your changes. And now any access requests that go via your CloudFront distribution will be logged to S3. And that's it.
End of demonstration
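If you prefer to script this change rather than use the console, the same result can be achieved with the AWS SDK. Below is a minimal boto3 sketch; the distribution ID is hypothetical, and the bucket name stands in for the "CloudFront Access Logs" bucket used in the demonstration.

```python
import boto3

cloudfront = boto3.client("cloudfront")

distribution_id = "E2EXAMPLE123"  # hypothetical distribution ID

# CloudFront updates require the current configuration and its ETag.
response = cloudfront.get_distribution_config(Id=distribution_id)
config = response["DistributionConfig"]
etag = response["ETag"]

# Turn logging on, pointing at the S3 bucket; IncludeCookies mirrors the
# cookie logging toggle shown in the console.
config["Logging"] = {
    "Enabled": True,
    "IncludeCookies": False,
    "Bucket": "cloudfront-access-logs.s3.amazonaws.com",
    "Prefix": "",  # optional prefix to separate multiple distributions
}

# Submit the updated configuration, passing the ETag as IfMatch.
cloudfront.update_distribution(
    Id=distribution_id,
    DistributionConfig=config,
    IfMatch=etag,
)
```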
To perform the demonstration that I just completed, and to access the logs once they are stored, you will need specific permissions to the S3 bucket designated for logging. To enable logging for your distribution, the user account activating that feature must have full control on the ACL for the S3 bucket, along with the s3:GetBucketAcl and s3:PutBucketAcl permissions. The reason for this is that during the configuration process, CloudFront will use your credentials to add the AWS data-feeds account to the ACL with full control access. This is an account used by AWS which will write the data to the log file and deliver it to your designated S3 logging bucket. Therefore, if you're trying to enable the logging feature for your distribution and it's failing, then you should check your access to ensure you have the required permissions.
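To sanity-check those ACL grants before enabling logging, you could inspect the bucket's current ACL. The sketch below assumes the same hypothetical logging bucket as before; reading the ACL is exactly what the s3:GetBucketAcl permission allows.

```python
import boto3

s3 = boto3.client("s3")
bucket = "cloudfront-access-logs"  # hypothetical logging bucket

# List who currently holds which permission on the bucket's ACL. After
# logging is enabled, you should see the AWS log delivery account listed
# with FULL_CONTROL.
acl = s3.get_bucket_acl(Bucket=bucket)
for grant in acl["Grants"]:
    grantee = grant["Grantee"].get("DisplayName") or grant["Grantee"].get("URI", "unknown")
    print(f"{grantee}: {grant['Permission']}")
```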
Depending on the delivery type of your CloudFront distribution, either web or RTMP, the log output will vary. The number of fields within the log files differs between the two types. Web distributions have a total of 26 different field types for each entry within the log, whereas RTMP distributions only have 13. I won't go through every single field explaining their purpose and use; however, I want to highlight a few points of interest, starting with the web delivery type. These logs contain information which allows you to identify the following: the date and timestamp of the user's request and which edge location received it, source metadata of the requester including IP address details, the HTTP access method of the request, such as PUT, DELETE, or GET, the HTTP status code of the request, such as 200, the distribution domain name relating to the request, and the encryption and protocol data used in the request, such as SSLv3 or AES256-SHA. For full information on each field and its options, please see the following link.
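To give a feel for working with these fields, here is a minimal sketch that reads a delivered web distribution log, assuming the standard tab-delimited format with "#Version" and "#Fields" header lines; the file name is the hypothetical example from earlier.

```python
import gzip

# Read a delivered web distribution access log. CloudFront delivers these as
# gzipped, tab-delimited files whose "#Fields:" header names each column.
with gzip.open("E2EXAMPLE123.2024-01-01-12.a1b2c3d4.gz", "rt") as f:
    fields = []
    for line in f:
        if line.startswith("#Fields:"):
            # Capture the field names, e.g. date, time, x-edge-location...
            fields = line.split()[1:]
        elif not line.startswith("#"):
            # Map each tab-separated value to its field name.
            record = dict(zip(fields, line.rstrip("\n").split("\t")))
            print(record.get("date"), record.get("x-edge-location"),
                  record.get("cs-method"), record.get("sc-status"))
```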
Now looking at the RTMP delivery type, the points of interest are as follows: again, a timestamp of the user's request and which edge location received it, the source IP address of the requester, the event being carried out by the requester, such as play, pause, or stop, and the URL of the page that your SWF file is linked from. Again, for full information on the field data captured within RTMP logs, you can view the following link here.
One final feature of logging with CloudFront is cookie logging. If you enable this within your distribution, then CloudFront will include all cookie information with your CloudFront access log data. This is only recommended if the origin of your distribution points to something other than S3, such as an EC2 instance, as S3 does not process cookie data.
That now brings me to the end of this lecture covering Amazon CloudFront logs. Coming up next, I shall be looking at the logs generated at the network level within your VPC with VPC Flow Logs.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.