AWS Logging Mechanisms
This course is part 1 of a 2-part course series which focuses on a number of key AWS services and how they perform logging and monitoring across your environment. Being able to monitor data provides a number of key benefits to your organization, such as compliance, incident detection and resolution, trend analysis, and much more. Collating data and statistics about the solutions running within AWS also gives you the ability to optimize their performance. This series looks at how to implement, configure, and deploy logging and monitoring mechanisms using the following AWS services and features:
- Amazon CloudWatch - CloudWatch Monitoring Agent
- AWS CloudTrail Logs
- Monitoring CloudTrail Logs with CloudWatch Metric Filters
- Amazon S3 Access Logs
- Amazon CloudFront Access Logs
- VPC Flow Logs
- AWS Config Configuration History
- Filtering and searching data using Amazon Athena
The course for Part 2 can be found here
By the end of this course series you will be able to:
- Understand why and when you should enable logging of key services
- Configure logging to enhance incident resolution and security analysis
- Understand how to extract specific data from logging data sets
The content of this course is centered around security and compliance. As a result, this course is beneficial to those in the following roles, or their equivalents:
- Cloud Security Engineers
- Cloud Security Architects
- Cloud Administrators
- Cloud Support & Operations
- Compliance Managers
This is an advanced-level course series, so you should be familiar with the following services and understand their individual use cases and feature sets:
- Amazon CloudWatch
- AWS CloudTrail
- Amazon EC2
- AWS Config
- Amazon S3
- EC2 Systems Manager (SSM)
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Hello, and welcome to this final lecture within Part One of this two-part series relating to logging in AWS. In this lecture, I want to summarize and highlight the key points from the previous lectures.
I started off by talking about the benefits of logging to your organization. Within this lecture, we learnt that logging allows you to rectify incidents more quickly and efficiently, or even prevent an incident from happening in the first place. Logs created by services and applications contain a huge amount of information, which is recorded and retained for later use. Some logs can be monitored in real time, allowing automatic responses to be carried out depending on the contents of the log, and logs are invaluable from an auditing perspective as they contain vast amounts of metadata. Logs can also be used to help achieve compliance. Using logs to ascertain the state of your environment before, during, and even after an incident enables you to detect where the incident occurred. And by combining the monitoring of logs with thresholds and alerts, you can configure automatic notifications of potential issues, threats, and incidents before they become production issues. Using logs, you can establish a baseline of performance, allowing you to determine anomalies more easily through the use of various third-party tools and management services. And finally, having more data about your environment and how it's running is far better than not having enough information.
Following this lecture, I explained how CloudWatch Logs are configured and used. During this lecture, the following points were made. Amazon CloudWatch is a powerful tool that allows you to collect logs from your applications and a number of different AWS services. You are able to monitor the log stream in real time and set up metric filters to search for specific events that you need to be alerted on or respond to, and CloudWatch Logs acts as a central repository for real-time monitoring of log data. The unified CloudWatch agent allows you to collect logs and additional metric data from your EC2 instances as well as from on-premises servers. To install the agent on your EC2 instances you need to create two roles: one role is used to install the agent and to send the additional metrics gathered to CloudWatch, and the other role is used to store the agent's configuration file in the Parameter Store within SSM. You then need to download and install the agent onto the EC2 instances using SSM and the Run Command, and finally, configure and start the CloudWatch agent using the wizard or the manual configuration.
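The install-and-start steps described above can be sketched as Run Command requests. This is a minimal sketch, not the course's own demo: the instance IDs and the SSM parameter name holding the agent configuration are illustrative assumptions, while `AWS-ConfigureAWSPackage` and `AmazonCloudWatch-ManageAgent` are the standard SSM documents for installing and managing the agent. The functions only build the request bodies, so they can be inspected without AWS credentials.

```python
def install_agent_command(instance_ids):
    """Build the SSM Run Command request that downloads and installs
    the unified CloudWatch agent on the given EC2 instances."""
    return {
        "DocumentName": "AWS-ConfigureAWSPackage",
        "InstanceIds": instance_ids,
        "Parameters": {
            "action": ["Install"],
            "name": ["AmazonCloudWatchAgent"],
        },
    }


def start_agent_command(instance_ids, config_parameter_name):
    """Build the Run Command request that configures and starts the
    agent using a configuration file previously stored in the SSM
    Parameter Store (config_parameter_name is an assumed name)."""
    return {
        "DocumentName": "AmazonCloudWatch-ManageAgent",
        "InstanceIds": instance_ids,
        "Parameters": {
            "action": ["configure"],
            "mode": ["ec2"],
            "optionalConfigurationSource": ["ssm"],
            "optionalConfigurationLocation": [config_parameter_name],
            "optionalRestart": ["yes"],
        },
    }


# With credentials and the two IAM roles in place, each request would
# be passed to boto3.client("ssm").send_command(**request).
```

Building the requests as plain dictionaries keeps the sketch testable; in practice you would unpack each one straight into `send_command`.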
At this point, we had looked at both CloudWatch and CloudTrail, so the following lecture looked at how you could use CloudWatch to monitor CloudTrail logs. Here we learnt that the logs from CloudTrail can be sent to CloudWatch Logs, allowing metrics and thresholds to be configured which, in turn, can utilize SNS notifications for specific events relating to API activity. CloudWatch allows any event created by CloudTrail to be monitored, and you can then configure your new and existing trails to use an existing CloudWatch log group, or have CloudTrail create a new one. To allow CloudTrail to deliver logs to CloudWatch, CloudTrail must be granted the following permissions via a role: CreateLogStream and PutLogEvents. CloudWatch Log Events have a size limitation of 256 KB on the events that they process, and adding CloudWatch metric filters allows you to perform analysis of your CloudTrail events within the log files. And finally, metric filters allow you to search for and count a specific value or term within the events in your log file.
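The two pieces above, the delivery role's permissions and a metric filter, can be sketched as follows. The policy statement uses exactly the two actions named in the lecture; the filter searches CloudTrail events for `ConsoleLogin` calls, and the filter, metric, and namespace names are illustrative assumptions, not values from the course. Again, only the request bodies are built, so no AWS credentials are needed.

```python
def cloudtrail_delivery_policy(log_group_arn):
    """IAM policy document granting CloudTrail the two permissions it
    needs to deliver events to a CloudWatch Logs log group."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
                "Resource": log_group_arn,
            }
        ],
    }


def console_login_filter(log_group_name):
    """Request body for logs.put_metric_filter: counts ConsoleLogin
    events in the trail's log group (names here are assumptions)."""
    return {
        "logGroupName": log_group_name,
        "filterName": "ConsoleLoginEvents",
        "filterPattern": '{ $.eventName = "ConsoleLogin" }',
        "metricTransformations": [
            {
                "metricName": "ConsoleLoginCount",
                "metricNamespace": "CloudTrailMetrics",
                "metricValue": "1",
            }
        ],
    }
```

A CloudWatch alarm on `ConsoleLoginCount` could then publish to an SNS topic, giving the automatic notifications described above.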
The final lecture in Part 1 of this course series looked at S3 access logs. Within this lecture, the key points were as follows. Amazon S3 access logs collate data about who has been accessing a particular S3 bucket, and by default, when you create a new bucket, access logging is not enabled. S3 access logs are based upon a source bucket and a target bucket: the source bucket is the bucket for which you want to log access requests, and the target bucket is the bucket to which the access logs will be delivered. It's best practice to use different buckets for the source and the target for ease of management, and remember, the source and target buckets need to be in the same region. During the configuration of access logs using the Management Console, write access is granted to the Log Delivery group, a predefined Amazon S3 group used to deliver log files to your target bucket. If you want to enable logging on the bucket programmatically, you can do so using the S3 API or the AWS SDKs. When doing so, you need to configure write access for the Log Delivery group on the target bucket as an additional action.
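A minimal sketch of the programmatic route mentioned above, assuming the boto3 SDK: the function builds the `put_bucket_logging` request body for the source bucket, with the bucket names and log prefix as illustrative assumptions. As the lecture notes, granting the Log Delivery group write access on the target bucket is a separate, additional step not shown here.

```python
def enable_access_logging(source_bucket, target_bucket, prefix="logs/"):
    """Request body for s3.put_bucket_logging: enables server access
    logging on source_bucket, delivering log files under the given
    prefix in target_bucket (both buckets must be in the same region)."""
    return {
        "Bucket": source_bucket,
        "BucketLoggingStatus": {
            "LoggingEnabled": {
                "TargetBucket": target_bucket,
                "TargetPrefix": prefix,
            }
        },
    }


# With credentials in place, the request would be passed to
# boto3.client("s3").put_bucket_logging(**request), after the Log
# Delivery group has been granted write access on the target bucket.
```

Using a prefix per source bucket keeps logs from multiple buckets separated within a single target bucket, which supports the ease-of-management point above.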
That now brings me to the end of this lecture and to the end of Part 1 of this course series. If you are ready to dive deeper into further AWS services relating to logging, including CloudFront Access Logs, VPC Flow Logs, AWS Config Logging, and how to filter data using Amazon Athena, then head over to Part Two, which can be found using the link on screen.
If you have any feedback on this course, positive or negative, please do contact us at firstname.lastname@example.org. Your feedback is greatly appreciated.
Thank you for your time, and good luck with your continued learning of cloud computing. Thank you.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 80+ courses relating to Cloud reaching over 100,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.