Contents

  • DOP-C02 Introduction
  • Amazon CloudWatch
  • Anomaly Detection
  • Advanced CloudFormation Skills
  • State Machines
  • Data Flow
  • AWS OpsWorks
  • Parameter Store vs. Secrets Manager
  • AWS Service Catalog
  • AWS Control Tower
  • Managing Product Licenses
  • Amazon Managed Grafana
  • Amazon Managed Service for Prometheus
  • AWS Proton
  • AWS Resilience Hub

VPC Flow Logs
Difficulty: Intermediate
Duration: 7h 24m
Students: 80
Ratings: 4.3/5
Description

This course provides detail on the AWS Management & Governance services relevant to the AWS Certified DevOps Engineer - Professional exam.

Want more? Try a lab playground or do a Lab Challenge!

Learning Objectives

  • Learn how AWS AppConfig can reduce errors in configuration changes and prevent application downtime
  • Understand how the AWS Cloud Development Kit (CDK) can be used to model and provision application resources using common programming languages
  • Get a high-level understanding of Amazon CloudWatch
  • Learn about the features and use cases of the service
  • Create your own CloudWatch dashboard to monitor the items that are important to you
  • Understand how CloudWatch dashboards can be shared across accounts
  • Understand the cost structure of CloudWatch dashboards and the limitations of the service
  • Review how monitored metrics go into an ALARM state
  • Learn about the challenges of creating CloudWatch Alarms and the benefits of using machine learning in alarm management
  • Know how to create a CloudWatch Alarm using Anomaly Detection
  • Learn what types of metrics are suitable for use with Anomaly Detection
  • Create your own CloudWatch log subscription
  • Learn how AWS CloudTrail enables auditing and governance of your AWS account
  • Understand how Amazon CloudWatch Logs enables you to monitor and store your system, application, and custom log files
  • Explain what AWS CloudFormation is and what it’s used for
  • Determine the benefits of AWS CloudFormation
  • Understand what the core components are and what they are used for
  • Create a CloudFormation Stack using an existing AWS template
  • Learn what VPC flow logs are and what they are used for
  • Determine options for operating programmatically with AWS, including the AWS CLI, APIs, and SDKs
  • Learn about the capabilities of AWS Systems Manager for managing applications and infrastructure
  • Understand how AWS Secrets Manager can be used to securely encrypt application secrets
Transcript

Hello and welcome to this lecture covering VPC Flow Logs. Within your VPC, you could potentially have hundreds or even thousands of resources all communicating between different subnets, both public and private, and also between different VPCs through VPC peering connections. VPC Flow Logs allows you to capture information about the IP traffic flowing to and from the network interfaces of the resources within your VPC. This data is useful for a number of reasons, largely to help you resolve incidents with network communication and traffic flow, in addition to being used for security purposes to help spot traffic reaching a destination that should be prohibited. 

Unlike S3 access logs and CloudFront access logs, the log data generated by VPC Flow Logs is not stored in S3. Instead, the captured log data is sent to CloudWatch Logs. Before creating your VPC Flow Logs, you should be aware of some limitations which might prevent you from implementing or configuring them. If you are running a VPC peering connection, then you'll only be able to see flow logs of peered VPCs that are within the same account. And if you are still running resources within the EC2-Classic environment, then unfortunately you are not able to retrieve information from their interfaces. Finally, once a VPC Flow Log has been created, it cannot be changed; to alter the configuration, you need to delete it and then recreate a new one. 

In addition to this, the following traffic is not monitored or captured by the logs:

  • DHCP traffic within the VPC.
  • Traffic from instances destined for the Amazon DNS server. (However, if you decide to implement your own DNS server within your environment, then traffic to it will be logged and recorded within the VPC Flow Log.)
  • Any traffic destined for the IP address of the VPC default router.
  • Traffic to and from 169.254.169.254, which is used for gathering instance metadata, and 169.254.169.123, which is used for the Amazon Time Sync Service.
  • Traffic relating to an Amazon Windows activation license from a Windows instance.
  • Traffic between a Network Load Balancer interface and an endpoint network interface.

All other traffic, both ingress and egress, can be captured at a network IP level. 

You can set up and create a flow log against three separate resources: a network interface on one of your instances, a subnet within your VPC, or your VPC itself. For the latter two options, the flow log will cover a number of different resources; as a result, data is captured for all network interfaces within the subnet or the VPC respectively. I mentioned earlier that this data is then sent to CloudWatch Logs via a CloudWatch log group. Every network interface that publishes data to the CloudWatch log group uses a different log stream, and within each of these streams is the flow log event data that shows the content of the log entries. Each of these logs captures data during a window of approximately 10 to 15 minutes. 
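
The demonstration later in this lecture uses the console, but the same flow logs can also be created programmatically. Here is a rough sketch using boto3; the resource ID, role ARN, and log group name are hypothetical placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Create a flow log for an entire VPC; use ResourceType "Subnet" or
    # "NetworkInterface" to target the other two resource types.
    # The resource ID, role ARN, and log group name are placeholders.
    response = ec2.create_flow_logs(
        ResourceType="VPC",
        ResourceIds=["vpc-0123456789abcdef0"],
        TrafficType="ALL",  # ACCEPT, REJECT, or ALL
        LogGroupName="Flow-Logs",
        DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/Flow-Logs-Role",
    )
    print(response["FlowLogIds"])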

To enable your flow log data to be pushed to a CloudWatch log group, an IAM role is required to grant the permissions to do so. This role is selected during the setup and configuration of the VPC Flow Log. If your role does not have the required permissions, then your log data will not be delivered to the CloudWatch log group. At a minimum, the role needs permissions to create log groups and log streams, put log events, and describe the log groups and streams. In addition to this, you will also need to ensure that the VPC Flow Logs service can assume that IAM role to perform the delivery of logs to CloudWatch, which is granted through the role's trust policy. 
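
As a sketch of what that looks like in practice, the following creates a delivery role with the trust and permissions policies AWS documents for CloudWatch Logs delivery; the role name matches the one used in the demonstration:

    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy letting the VPC Flow Logs service assume the role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "vpc-flow-logs.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    # Minimum permissions for delivering flow logs to CloudWatch Logs.
    delivery_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
            ],
            "Resource": "*",
        }],
    }

    iam.create_role(
        RoleName="Flow-Logs-Role",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )
    iam.put_role_policy(
        RoleName="Flow-Logs-Role",
        PolicyName="flow-logs-delivery",
        PolicyDocument=json.dumps(delivery_policy),
    )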

While on the topic of permissions, I also want to show you the permissions required for someone to review and access the VPC Flow Logs, or indeed be able to create one in the first place. The following three EC2 permissions allow you to create, delete, and describe flow logs: ec2:CreateFlowLogs, ec2:DeleteFlowLogs, and ec2:DescribeFlowLogs. The logs:GetLogEvents permission enables you to list log events from a log stream. If you want to create flow logs, then you also need to grant the iam:PassRole permission, which allows you to pass the delivery role mentioned previously to the service so that it can create these flow logs on your behalf. 
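
Put together, a minimal sketch of an identity-based policy for someone who manages flow logs might look like the following; the role ARN is a placeholder:

    # A minimal sketch of an identity-based policy for a user who creates,
    # deletes, and reviews flow logs. The role ARN is a placeholder.
    flow_log_admin_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:CreateFlowLogs",
                    "ec2:DeleteFlowLogs",
                    "ec2:DescribeFlowLogs",
                    "logs:GetLogEvents",
                ],
                "Resource": "*",
            },
            {
                # Allows passing the delivery role to the flow logs service.
                "Effect": "Allow",
                "Action": "iam:PassRole",
                "Resource": "arn:aws:iam::123456789012:role/Flow-Logs-Role",
            },
        ],
    }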

Let me now show you how to create a flow log for an interface on an instance, a subnet, and lastly the VPC itself. 

Start of demonstration

Okay, so firstly I'm going to set up a VPC Flow Log for the running instance that we've used in a previous demonstration, which was for the logging server. So what I need to do is go down to our network interfaces under Network & Security and select the ENI of the logging server. As you can see, it's this bottom instance here. So if I select that interface and just drag this up a little bit, we have three tabs here: details, flow logs, and tags. If we select the flow logs tab of this interface, we can see that there's no flow log created as yet. 

What we need to do is click on create flow log. Now, we can set the filter for this flow log to capture only accepted requests or only rejected requests, so I'm going to select all so it captures both accepted and rejected. We now need to select our role. I created a role earlier called Flow-Logs-Role, so that has the required permissions to push data to CloudWatch Logs, and here we have the ARN of the role. For the destination log group in CloudWatch, I set up a log group prior to this demonstration and called it Flow-Logs. And then click on create flow log. And that's it, it's as simple as that. So now you can see that for this ENI we have a flow log created. It gives it a flow log ID, shows the filter, which we have as ALL here, the destination log group, the ARN of the role, and it's currently active. So now any traffic going in and out of that interface on that EC2 instance will be captured and the data will be sent to the Flow-Logs log group in CloudWatch. Next, let's take a look at how you set up flow logs for a subnet. 

So let's go across to our VPC service. I have a couple of VPCs here, and we'll use our logging VPC. So if we go down to our subnets and select the public subnet for our logging VPC, again we have the tabs for this subnet: the summary, route table, network ACL, etc., and we also again have the flow logs tab. It's a very simple process again: click on create flow log, the same filters, select the same role and the same log group, and then simply create flow log. That now has flow logs enabled on this particular subnet, so all traffic going in and out of this subnet will be captured and sent to the Flow-Logs log group. 

And for the VPC, it's very similar. You simply select your VPC, so we have our logging VPC here, and again we have our flow logs tab. Create flow log, select the role and the destination log group of Flow-Logs, and then create flow log, and that's it. So it's very easy to set up your flow logs for your EC2 network interfaces, your subnets, or your entire VPC. And that's it. 

End of demonstration

Let's now take a look at a record within one of these flow logs. When you access the logs, you will find each entry has the following syntax, with the fields defined as follows:

  • version: the version of the flow log format itself.
  • account-id: your AWS account ID.
  • interface-id: the ID of the network interface to which the log stream data applies.
  • srcaddr: the source IP address.
  • dstaddr: the destination IP address.
  • srcport: the source port being used for the traffic.
  • dstport: the destination port being used for the traffic.
  • protocol: the protocol number being used for the traffic.
  • packets: the total number of packets sent during the capture window.
  • bytes: the total number of bytes sent during the capture window.
  • start and end: the timestamps of when the capture window started and finished.
  • action: whether the traffic was accepted (ACCEPT) or rejected (REJECT) by security groups and network access control lists.
  • log-status: the status of the logging, shown through three different codes: OK, where data is being received by CloudWatch Logs; NODATA, meaning there was no traffic to capture during the capture window; and SKIPDATA, where some records were skipped during the capture window due to an error. 
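
As an illustration, here is a sample record in that format (the account ID, interface ID, and addresses are sample values), along with a small sketch that splits a record into those fields:

    # Field order of a default-format (version 2) flow log record.
    FIELDS = [
        "version", "account_id", "interface_id", "srcaddr", "dstaddr",
        "srcport", "dstport", "protocol", "packets", "bytes",
        "start", "end", "action", "log_status",
    ]

    # A sample record: accepted SSH traffic (destination port 22,
    # protocol 6, i.e. TCP). All identifiers are sample values.
    record = ("2 123456789010 eni-1235b8ca123456789 172.31.16.139 "
              "172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 "
              "ACCEPT OK")

    entry = dict(zip(FIELDS, record.split()))
    print(entry["action"])   # ACCEPT
    print(entry["dstport"])  # 22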

One of the key fields from an incident response and troubleshooting perspective is the action field. For example, if you are troubleshooting an issue where traffic is not being received by a particular resource, you could check the VPC Flow Logs to see if the traffic is being blocked at the subnet level by a network ACL. This will then allow you to review the entries within the NACL and make any changes necessary from a security perspective. 
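
As a sketch of what that can look like in practice, CloudWatch Logs filter patterns can match on the space-delimited fields to pull out only rejected traffic; the log group name here is the one from the demonstration:

    import boto3

    logs = boto3.client("logs")

    # Space-delimited filter pattern: field positions follow the record
    # syntax above, matching only records whose action field is REJECT.
    response = logs.filter_log_events(
        logGroupName="Flow-Logs",
        filterPattern=(
            "[version, account, eni, source, destination, srcport, "
            "destport, protocol, packets, bytes, start, end, "
            "action=REJECT, status]"
        ),
    )
    for event in response["events"]:
        print(event["message"])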

 

About the Author
Students: 229,443
Labs: 1
Courses: 216
Learning Paths: 173

Stuart has been working within the IT industry for two decades, covering a huge range of topic areas and technologies, from data center and network infrastructure design to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016, Stuart was awarded the ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge sharing on cloud services with the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.