How to Implement & Enable Logging Across AWS Services (Part 1 of 2)

Course Summary

The course is part of these learning paths:

  • DevOps Engineer – Professional Certification Preparation for AWS
  • SysOps Administrator – Associate Certification Preparation for AWS
  • Security - Specialty Certification Preparation for AWS
  • AWS Services Monitoring & Auditing

Contents

  • Introduction (Preview, 3m 34s)
  • Logging
  • Summary

Overview

Difficulty: Advanced
Duration: 1h 4m
Students: 736
Rating: 4.8/5

Description

This course is part one of a two-part series that focuses on a number of key AWS services and how they perform logging and monitoring across your environment. Being able to monitor data provides a number of key benefits to your organization, such as compliance, incident detection and resolution, and trend analysis. Collating data and statistics about your solutions running within AWS also gives you the ability to optimize their performance. This series looks at how to implement, configure, and deploy logging and monitoring mechanisms using the following AWS services and features:

Part 1: 

  • Amazon CloudWatch - CloudWatch Monitoring Agent
  • AWS CloudTrail Logs
  • Monitoring CloudTrail Logs with CloudWatch Metric Filters
  • Amazon S3 Access Logs

Part 2:

  • Amazon CloudFront Access Logs
  • VPC Flow Logs
  • AWS Config Configuration History 
  • Filtering and searching data using Amazon Athena

The course for Part 2 can be found here.

Learning Objectives

By the end of this course series you will be able to:

  • Understand why and when you should enable logging of key services
  • Configure logging to enhance incident resolution and security analysis
  • Understand how to extract specific data from logging data sets

Intended Audience

The content of this course is centered around security and compliance. As a result, it is beneficial to those who are in the following roles, or their equivalents:

  • Cloud Security Engineers
  • Cloud Security Architects
  • Cloud Administrators
  • Cloud Support & Operations
  • Compliance Managers

Prerequisites

This is an advanced-level course series, so you should already be familiar with the following services and understand their individual use cases and feature sets:

  • Amazon CloudWatch
  • AWS CloudTrail
  • Amazon EC2
  • CloudFront
  • Lambda
  • AWS Config
  • Amazon S3
  • IAM
  • EC2 Systems Manager (SSM)

This course includes

7 lectures

6 demonstrations

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Resources Referenced

How to implement and enable logging across AWS services - Part 2 of 2

Transcript

Hello, and welcome to this final lecture within Part One of this two-part series relating to logging in AWS. In this lecture, I want to summarize and highlight the key points from the previous lectures. 

I started off by talking about the benefits of logging to your organization. Within this lecture, we learnt that logging allows you to rectify incidents more quickly and efficiently, or even prevent an incident from happening in the first place. Logs created by services and applications contain a huge amount of information which is recorded and retained for later use. Some logs can be monitored in real time, allowing automatic responses to be carried out depending on the contents of the log, and logs are invaluable from an auditing perspective as they contain vast amounts of metadata. Logs can also be used to help achieve compliance. Using logs to ascertain the state of your environment before, during, and even after an incident enables you to detect where the incident occurred. And by combining the monitoring of logs with thresholds and alerts, you can configure automatic notifications of potential issues, threats, and incidents before they become a production issue. Using logs, you can establish a performance baseline, making it easier to detect anomalies through the use of various third-party tools and management services. And finally, the advantages of having more data about your environment and how it's running far outweigh the disadvantages of not having enough information.

Following this lecture, I explained how CloudWatch Logs are configured and used. During this lecture, the following points were made. Amazon CloudWatch is a powerful tool that allows you to collect logs from your applications and a number of different AWS services. You are able to monitor the log stream in real time and set up metric filters to search for specific events that you need to be alerted on or respond to. CloudWatch Logs acts as a central repository for real-time monitoring of log data. The unified CloudWatch agent allows you to collect logs and additional metric data from your EC2 instances, as well as from on-premise servers. To install the agent on your EC2 instances, you need to create two roles. One role is used to install the agent and to send the additional metrics gathered to CloudWatch, and the other role is used to store a configuration information file in the Parameter Store within SSM. You then need to download and install the agent onto the EC2 instances using SSM and the Run Command, and finally, configure and start the CloudWatch agent using a wizard or manual configuration.
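As a rough sketch of the configuration information file mentioned above (the document the wizard produces and stores in the SSM Parameter Store), the log-collection portion can look like the following. The file path, log group name, and stream name here are illustrative, not taken from the course:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/messages",
            "log_group_name": "ec2-system-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

Each entry in `collect_list` tells the unified agent which local log file to ship and which CloudWatch log group and stream to deliver it to.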

I then looked at a different service, AWS CloudTrail, which records and tracks all API requests made within your AWS account. Within this lecture, I explained the following. When an API request is initiated, AWS CloudTrail captures the request as an event and records this event within a log file, which is then stored on S3. Each API call represents a new event within the log file, and the logs generated are the output of the CloudTrail service. Log files are written in JSON (JavaScript Object Notation) format, much like access policies within IAM and S3. New logs are created approximately every five minutes, but they are not delivered to the nominated S3 bucket for persistent storage until approximately 15 minutes after the API was called; the log files are held by the CloudTrail service until final processing has been completed. Log files are delivered to S3, and optionally to CloudWatch Logs as well, and any logs delivered to S3 are given a standard naming convention. AWS offers the ability to aggregate CloudTrail logs from multiple accounts into a single S3 bucket belonging to one of those accounts. But do be aware that you are unable to aggregate CloudTrail logs from multiple AWS accounts into CloudWatch Logs belonging to a single AWS account. CloudTrail also allows you to enable a feature called "log file integrity validation," which allows you to verify that your log files have remained unchanged since CloudTrail delivered them to your chosen S3 bucket. And finally, when a log file is delivered to your S3 bucket, a hash is created for it by CloudTrail.
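To make the JSON log format above concrete, the sketch below parses a hand-built, minimal CloudTrail-style record. The field names follow CloudTrail's documented record format (a top-level `Records` array of events); the values and the `summarize` helper are illustrative, not part of the course:

```python
import json

# A minimal, hand-built CloudTrail-style log file. Real log files contain
# many events in the "Records" array; field names match the CloudTrail
# record format, values here are made up for illustration.
sample_log = json.dumps({
    "Records": [
        {
            "eventVersion": "1.08",
            "eventTime": "2021-06-01T12:34:56Z",
            "eventSource": "s3.amazonaws.com",
            "eventName": "CreateBucket",
            "awsRegion": "eu-west-1",
            "sourceIPAddress": "203.0.113.10",
            "userIdentity": {"type": "IAMUser", "userName": "alice"},
        }
    ]
})

def summarize(log_text):
    """Return one (user, eventName, eventSource) tuple per recorded API call."""
    records = json.loads(log_text)["Records"]
    return [
        (r["userIdentity"].get("userName", "unknown"),
         r["eventName"],
         r["eventSource"])
        for r in records
    ]

print(summarize(sample_log))  # [('alice', 'CreateBucket', 's3.amazonaws.com')]
```

Because every event is plain JSON, each API call in the file can be inspected with any JSON tooling, which is exactly what makes downstream filtering and searching possible.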

At this point, we had looked at both CloudWatch and CloudTrail, so the following lecture looked at how you could use CloudWatch to monitor CloudTrail logs. Here we learnt that the logs from CloudTrail can be sent to CloudWatch Logs, allowing metrics and thresholds to be configured which, in turn, can utilize SNS notifications for specific events relating to API activity. CloudWatch allows any event created by CloudTrail to be monitored, and you can then configure your new and existing trails to use an existing CloudWatch log group, or have CloudTrail create a new one. To allow CloudTrail to deliver logs to CloudWatch, CloudTrail must be given the following permissions via a role: CreateLogStream and PutLogEvents. CloudWatch Log events have a size limitation of 256 KB on the events that they process. Adding CloudWatch metric filters allows you to perform analysis of your CloudTrail events within the log files. And finally, metric filters allow you to search for and count a specific value or term within the events in your log file.
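As a local illustration of what such a metric filter does, the sketch below reproduces a common CloudTrail filter condition in plain Python. In CloudWatch itself this would be expressed as a filter pattern such as `{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }`; the sample events and the counting helper here are illustrative only:

```python
# Sample CloudTrail-style events; only the fields relevant to the
# filter condition are included.
events = [
    {"eventName": "ConsoleLogin", "errorMessage": "Failed authentication"},
    {"eventName": "ConsoleLogin"},
    {"eventName": "DeleteBucket"},
]

def failed_console_logins(evts):
    """Count events matching the failed-console-login filter condition."""
    return sum(
        1 for e in evts
        if e.get("eventName") == "ConsoleLogin"
        and e.get("errorMessage") == "Failed authentication"
    )

print(failed_console_logins(events))  # 1
```

In CloudWatch, the count this filter produces becomes a metric, and an alarm on that metric can then publish an SNS notification when a threshold is crossed.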

The final lecture in Part 1 of this course series looked at S3 access logs. Within this lecture, the key points were as follows. Amazon S3 access logs collate data about who has been accessing a particular S3 bucket, and by default, when you create a new bucket, access logging is not enabled. S3 access logs are based upon a source bucket and a target bucket. The source bucket is the bucket for which you want to log access requests, and the target bucket is the bucket to which the access logs will be delivered. It's best practice to use different buckets for the source and the target for ease of management, and remember, the source and target buckets need to be in the same region. During the configuration of access logs using the Management Console, write access is granted to the Log Delivery group, a predefined Amazon S3 group used to deliver log files to your target bucket. If you want to enable logging on a bucket programmatically, you can do so using the S3 API or the AWS SDKs. When doing so, you need to configure write access for the Log Delivery group on the target bucket as an additional action.
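For the programmatic route, a minimal sketch of the request payload is shown below. The bucket names and prefix are illustrative; the structure built here is the `BucketLoggingStatus` document that the S3 PutBucketLogging API expects, which with boto3 you would send via `boto3.client("s3").put_bucket_logging(Bucket=source_bucket, BucketLoggingStatus=payload)` after granting the Log Delivery group write access on the target bucket:

```python
def build_logging_payload(target_bucket, prefix):
    """Build the BucketLoggingStatus structure for the S3 PutBucketLogging API.

    Note: the target bucket must be in the same region as the source bucket,
    and the Log Delivery group needs write access on it as a separate step.
    """
    return {
        "LoggingEnabled": {
            "TargetBucket": target_bucket,
            "TargetPrefix": prefix,
        }
    }

# Hypothetical bucket names, for illustration only.
payload = build_logging_payload("my-access-logs-bucket", "source-bucket-logs/")
print(payload)
```

Keeping the payload construction separate like this makes it easy to enable logging consistently across many source buckets while pointing them all at one dedicated target bucket.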

That now brings me to the end of this lecture and to the end of Part 1 of this course series. If you are ready to dive deeper into further AWS services relating to logging, including CloudFront Access Logs, VPC Flow Logs, AWS Config Logging, and how to filter data using Amazon Athena, then head over to Part Two, which can be found using the link on screen. 

If you have any feedback on this course, positive or negative, please do contact us at support@cloudacademy.com. Your feedback is greatly appreciated. 

Thank you for your time, and good luck with your continued learning of cloud computing. Thank you.

About the Author

Students: 55,517
Labs: 1
Courses: 55
Learning paths: 36

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data centre and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 50+ courses relating to Cloud, most within the AWS category with a heavy focus on security and compliance.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.