Contents
Course Introduction
Amazon CloudWatch
AWS CloudTrail
AWS Config
AWS Organizations
AWS Control Tower
AWS Resource Access Manager
AWS Management
AWS Systems Manager
AWS Trusted Advisor Best Practices
AWS Logging
AWS Health Dashboard
AWS Data Visualization
AWS Data Pipeline vs. AWS Glue
Finding Compliance Data with AWS Artifact
AWS CloudFormation
Understanding SLAs in AWS
Observability in AWS
This section of the AWS Certified Solutions Architect - Professional learning path introduces the AWS management and governance services relevant to the AWS Certified Solutions Architect - Professional exam. These services help you audit, monitor, and evaluate your AWS infrastructure and resources, and they form a core component of resilient and performant architectures.
Learning Objectives
- Understand the benefits of using Amazon CloudWatch and audit logs to manage your infrastructure
- Learn how to record and track API requests using AWS CloudTrail
- Learn what AWS Config is and its components
- Manage multi-account environments with AWS Organizations and Control Tower
- Learn how to carry out logging with CloudWatch, CloudTrail, CloudFront, and VPC Flow Logs
- Learn about AWS data transformation tools such as AWS Glue, and data analysis and visualization services like Amazon Athena and Amazon QuickSight
- Learn how AWS CloudFormation can be used to represent your infrastructure as code (IaC)
- Understand SLAs in AWS
Transcript
Hello and welcome to this lecture covering VPC Flow Logs. Within your VPC, you could potentially have hundreds or even thousands of resources, all communicating between different subnets, both public and private, and also between different VPCs through VPC peering connections. VPC Flow Logs allows you to capture information about the IP traffic flowing to and from the network interfaces of the resources within your VPC. This data is useful for a number of reasons, largely to help you resolve incidents with network communication and traffic flow, and also for security purposes, helping you spot traffic that is reaching a destination it should be prohibited from reaching.
Unlike S3 access logs and CloudFront access logs, the log data generated by VPC Flow Logs is not stored in S3. Instead, the captured log data is sent to CloudWatch Logs. Before creating your VPC Flow Logs, you should be aware of some limitations which might prevent you from implementing or configuring them. If you are running a VPC peering connection, you'll only be able to see flow logs of peered VPCs that are within the same account. If you are still running resources within the EC2-Classic environment, then unfortunately you are not able to retrieve information from their interfaces. And once a VPC Flow Log has been created, it cannot be changed; to alter the configuration, you need to delete it and create a new one.
In addition to this, the following traffic is not monitored and captured by the logs: DHCP traffic within the VPC, and traffic from instances destined for the Amazon DNS server. However, if you implement your own DNS server within your environment, then traffic to it will be logged and recorded within the VPC Flow Log. Also excluded is any traffic destined for the IP address of the VPC default router, and traffic to and from the following addresses: 169.254.169.254, which is used for gathering instance metadata, and 169.254.169.123, which is used for the Amazon Time Sync Service. Traffic relating to an Amazon Windows activation license from a Windows instance is not captured, and finally, neither is traffic between a network load balancer interface and an endpoint network interface. All other traffic, both ingress and egress, can be captured at the network IP level.
You can create a flow log against three separate resources: a network interface on one of your instances, a subnet within your VPC, or the VPC itself. For the latter two options, the flow log will cover a number of different resources; as a result, data is captured for all network interfaces within the subnet or the VPC respectively. I mentioned earlier that this data is then sent to CloudWatch Logs via a CloudWatch log group. Every network interface that publishes data to the CloudWatch log group uses a different log stream, and within each of these streams is the flow log event data showing the content of the log entries. Each of these logs captures data during a window of approximately 10 to 15 minutes.
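To see this fan-out for yourself, you can list the log streams within the flow log group. Here is a minimal boto3 sketch; it assumes a log group named Flow-Logs, the name used in the demonstration later in this lecture.

```python
import boto3

logs = boto3.client("logs")

# Each network interface publishing into the flow log group writes to
# its own log stream, so listing the streams shows one entry per ENI.
paginator = logs.get_paginator("describe_log_streams")
for page in paginator.paginate(logGroupName="Flow-Logs"):
    for stream in page["logStreams"]:
        print(stream["logStreamName"])
```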
To enable your flow log data to be pushed to a CloudWatch log group, an IAM role with permissions to do so is required. This role is selected during the setup and configuration of the VPC Flow Log. If the role does not have the required permissions, then your log data will not be delivered to the CloudWatch log group. At a minimum, the role must allow the logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents, logs:DescribeLogGroups, and logs:DescribeLogStreams actions. In addition to this, you will also need to ensure that the VPC Flow Logs service (vpc-flow-logs.amazonaws.com) can assume that IAM role to perform the delivery of logs to CloudWatch, which is granted through the role's trust policy.
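The on-screen policy isn't reproduced in this transcript, but here is a minimal boto3 sketch of creating a delivery role along those lines; the role name Flow-Logs-Role matches the demonstration, and the actions follow the AWS documentation for delivering flow logs to CloudWatch Logs.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing the VPC Flow Logs service to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "vpc-flow-logs.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy allowing log delivery to CloudWatch Logs.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
            "logs:DescribeLogGroups",
            "logs:DescribeLogStreams",
        ],
        "Resource": "*",
    }],
}

iam.create_role(
    RoleName="Flow-Logs-Role",  # name taken from the demonstration
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="Flow-Logs-Role",
    PolicyName="flow-logs-to-cloudwatch",
    PolicyDocument=json.dumps(permissions_policy),
)
```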
While on the topic of permissions, I want to also show you the permissions required for someone to review and access the VPC Flow Logs, or indeed to create one in the first place. Three EC2 permissions allow you to create, delete, and describe flow logs: ec2:CreateFlowLogs, ec2:DeleteFlowLogs, and ec2:DescribeFlowLogs. The logs:GetLogEvents permission enables you to list log events from a log stream. If you want to create flow logs, then you also need to grant the iam:PassRole permission, which allows the service to assume the role mentioned previously to create these flow logs on your behalf.
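As a sketch, an identity policy granting those permissions might look like the following; the account ID and role ARN are placeholders for your own values.

```python
import json

# Hypothetical identity policy for a user who needs to create and review
# VPC Flow Logs; the account ID and role name below are placeholders.
flow_logs_user_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateFlowLogs",
                "ec2:DeleteFlowLogs",
                "ec2:DescribeFlowLogs",
                "logs:GetLogEvents",
            ],
            "Resource": "*",
        },
        {
            # iam:PassRole lets the user hand the delivery role to the
            # VPC Flow Logs service when creating a flow log.
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/Flow-Logs-Role",
        },
    ],
}

print(json.dumps(flow_logs_user_policy, indent=2))
```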
Let me now show you how to create a flow log for an interface on an instance, a subnet, and lastly the VPC itself.
Start of demonstration
Okay so firstly I'm going to set up a VPC Flow Log for the running instance that we've used in a previous demonstration which was for the logging server. So what I need to do is go down to our network interfaces under network and security and select the ENI of the logging server. As you can see, it's this bottom instance here. So if I select that interface, if I just drag this up a little bit, and we have three tabs here, details, flow logs, and tags. If we select the flow logs tab of this interface, we can see that there's no flow log created as yet.
What we need to do is click on create flow log. Now we can select the filter for this flow log to log only accepted requests or only rejected requests, so I'm going to select all so it captures both accepted and rejected. We now need to select our role; I created a role earlier called Flow-Logs-Role, which has the required permissions to push data to CloudWatch Logs, and here we have the ARN of the role. For the destination log group in CloudWatch, I set up a log group prior to this demonstration and called it Flow-Logs. And then click on create flow log. And that's it, it's as simple as that. So now you can see for this ENI, we have a flow log created. It's given a flow log ID, and it shows the filter, which is ALL here, the destination log group, the ARN of the role, and that it's currently active. So now any traffic going in and out of that interface on that EC2 instance will be captured and the data will be sent to the Flow-Logs log group in CloudWatch. Now let's take a look at how you set up flow logs for a subnet.
So let's go across to our VPC service. I have a couple of VPCs here, and we'll use our logging VPC. If we go down to our subnets and select the public subnet for our logging VPC, again we have the tabs for this subnet: the summary, route table, network ACL, etc., and we also have the flow logs tab. It's a very simple process again. Click on create flow log, choose the same filter, select the same role and the same log group, and then simply create flow log. Flow logs are now enabled on this particular subnet, so all traffic going in and out of this subnet will be captured and sent to the Flow-Logs log group.
And for the VPC, it's very similar. You simply select your VPC, so we have our logging VPC here, and again we have our flow logs tab. Create flow log, select the role and the destination log group of Flow-Logs, then create flow log, and that's it. So it's very easy to set up flow logs for your EC2 network interfaces, your subnets, or your entire VPC.
End of demonstration
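The three console walkthroughs above can also be scripted. Here is a minimal boto3 sketch creating the same three flow logs; the resource IDs and role ARN are placeholders for your own values.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder ARN; substitute the delivery role created earlier.
role_arn = "arn:aws:iam::123456789012:role/Flow-Logs-Role"

# One call per resource type, mirroring the three console walkthroughs.
for resource_type, resource_id in [
    ("NetworkInterface", "eni-0123456789abcdef0"),
    ("Subnet", "subnet-0123456789abcdef0"),
    ("VPC", "vpc-0123456789abcdef0"),
]:
    response = ec2.create_flow_logs(
        ResourceType=resource_type,
        ResourceIds=[resource_id],
        TrafficType="ALL",                  # capture accepted and rejected traffic
        LogGroupName="Flow-Logs",           # CloudWatch log group from the demo
        DeliverLogsPermissionArn=role_arn,  # role the service assumes for delivery
    )
    print(resource_type, response["FlowLogIds"])
```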
Let's now take a look at a record within one of these flow logs. When you access the logs, you will find each entry has the following syntax: version, account-id, interface-id, srcaddr, dstaddr, srcport, dstport, protocol, packets, bytes, start, end, action, and log-status. These fields are defined as follows. Version is the version of the flow log itself. Account-id is your AWS account ID. Interface-id is the ID of the network interface to which the log stream data applies. Srcaddr is the source IP address, and dstaddr is the destination IP address. Srcport is the source port being used for the traffic, and dstport is the destination port being used for the traffic. Protocol defines the protocol number being used for the traffic. Packets shows the total number of packets sent during the capture window, and bytes shows the total number of bytes sent during the capture window. Start and end show the timestamps of when the capture window started and finished. Action shows whether the traffic was accepted or rejected by security groups and network access control lists. Finally, log-status shows the status of the logging through three different codes: OK, where data is being received by CloudWatch Logs; NoData, meaning there was no traffic to capture during the capture window; and SkipData, where some data within the log was skipped due to an error.
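To make the field positions concrete, here is a short Python sketch parsing one hypothetical record in that format; the values below are made up for illustration.

```python
# A minimal sketch parsing one hypothetical flow log record in the
# default format; the field values below are made up for illustration.
record = ("2 123456789012 eni-0123456789abcdef0 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")

FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

entry = dict(zip(FIELDS, record.split()))
print(entry["action"])   # ACCEPT
print(entry["dstport"])  # 22, so this example captured SSH traffic
```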
One of the key fields from an incident response and troubleshooting perspective is the action field. For example, if you are troubleshooting an issue where traffic is not being received by a particular resource, you could check the VPC Flow Logs to see if the traffic is being blocked at the subnet level by a network ACL. This will then allow you to review the entries within the NACL and make the necessary changes from a security perspective.
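A minimal boto3 sketch along those lines, filtering the flow log group for rejected traffic; the log group and log stream names are placeholders following the naming used in the demonstration.

```python
import boto3

logs = boto3.client("logs")

# Hypothetical example: surface rejected traffic for one interface by
# filtering the flow log group for records containing "REJECT".
response = logs.filter_log_events(
    logGroupName="Flow-Logs",                      # log group from the demo
    logStreamNames=["eni-0123456789abcdef0-all"],  # placeholder stream name
    filterPattern="REJECT",
)
for event in response["events"]:
    print(event["message"])
```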
Danny has over 20 years of IT experience as a software developer, cloud engineer, and technical trainer. After attending a conference on cloud computing in 2009, he knew he wanted to build his career around what was still a very new, emerging technology at the time — and share this transformational knowledge with others. He has spoken to IT professional audiences at local, regional, and national user groups and conferences. He has delivered in-person classroom and virtual training, interactive webinars, and authored video training courses covering many different technologies, including Amazon Web Services. He currently has six active AWS certifications, including certifications at the Professional and Specialty level.