
Jumbo Frames - VPC Demonstration

This course is part of the learning path: AWS Advanced Networking – Specialty Certification Preparation

Overview

Course: Jumbo Frames - Understanding, Building and Configuring
Difficulty: Advanced
Duration: 33m
Students: 649
Rating: 4.9/5

Description

In this demonstration we will configure two network paths between a pair of EC2 instances. The first network path will be configured with an MTU of 1500, utilizing standard Ethernet frames to transmit data across the network, and the second network path will be configured with an MTU of 9000, utilizing jumbo Ethernet frames to transmit data across the network. We will use a CloudFormation template to launch and build this environment.

Transcript

- [Narrator] We'll now perform a demonstration of Jumbo Frames. In our demonstration, we'll configure two network paths between a pair of EC2 instances. The first network path will be configured with an MTU of 1500, utilizing standard Ethernet frames to transmit data across the network. And the second network path will be configured with an MTU of 9000, utilizing jumbo Ethernet frames to transmit data across the network. As you can see in this diagram, our demonstration is composed of the following key architectural elements: one, a pair of VPCs is created with two subnets each; two, VPC peering is established between the two VPCs; three, an EC2 t2.micro instance is launched within each VPC; four, each EC2 instance is configured with a pair of ENIs, Elastic Network Interfaces; five, the ENIs are deployed into different subnets; six, ENIs deployed into the opposing VPC top subnets are configured with an MTU of 1500 and will be registered as eth0 in the respective EC2 instances; seven, ENIs deployed into the opposing VPC bottom subnets are configured with an MTU of 9000 and will be registered as eth1 in the respective EC2 instances; eight, policy-based routing is configured within each instance, such that the top subnet opposing ENIs are paired, and likewise the bottom subnet opposing ENIs are paired. We'll use a CloudFormation template to launch and build this environment. Once the environment is up, we'll SSH into the right-hand side instance and configure a listener service that can be sent data. We'll additionally set up a packet capture so that we can analyze the packets received on the first interface, eth0, which has been configured with an MTU of 1500. We'll then SSH onto the left-hand side instance and execute a program to send exactly one megabyte to the other instance via the private IP assigned to its eth0 interface. This data will be transferred with an MTU of 1500.
We'll then repeat the same exercise but via the eth1 interfaces, which are configured with an MTU of 9000. Finally, we'll do some analysis on the packet captures, performing frame counts and aggregating the individual frame payload bytes to ensure that exactly one megabyte of data was received. Alright, let's fire this thing up. The CloudFormation template used within the following demonstration is hosted online at Cloud Academy's public GitHub repository. Navigate to the GitHub URL, as seen here. Next, copy the HTTPS clone URL. Within your own terminal, create a new directory. Move into that directory and perform a git clone with the URL you previously copied. Let's now list the contents of the current directory. Next, navigate down into the VPC Jumbo Frames CloudFormation directory. Listing the contents of the CloudFormation directory, we see a single CloudFormation template, which we'll now use to provision the left-hand side and right-hand side VPCs. We'll now take a quick look at the contents of the CloudFormation template. Here, the first VPC, or the left-hand side VPC in the diagram, is sized with a 10.0.0.0/18 address block. The second VPC, or the right-hand side VPC in the diagram, is sized with a 192.168.0.0/18 address block. Both VPCs contain two subnets each. Each has an internet gateway attached purely to allow us to SSH over the internet into the hosted EC2 instances. Now, the only other main elements in the CloudFormation template are the hosted EC2 instances themselves. Let's take a look at the left-hand side EC2 instance. As you can see, this instance is configured with dual Elastic Network Interfaces, as is the right-hand side EC2 instance. We bootstrap both EC2 instances with the following bash script embedded in the respective user data sections.
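The template's actual user data script isn't reproduced here, but its effect on the sender instance can be sketched roughly as follows. The interface names, routing table names, and gateway addresses follow the narration; treat this as an illustrative sketch rather than the repository's exact script:

```shell
# Persist the jumbo MTU on eth1 across DHCP lease renewals (assumed file name):
cat > /etc/dhcp/dhclient-eth1.conf <<'EOF'
interface "eth1" {
    supersede interface-mtu 9000;
}
EOF
ip link set dev eth1 mtu 9000

# Policy-based routing: pair eth0 with the opposing top subnet and
# eth1 with the opposing bottom subnet (sender-side addresses shown).
echo "1 table1" >> /etc/iproute2/rt_tables
echo "2 table2" >> /etc/iproute2/rt_tables
ip rule add to 192.168.10.0/24 lookup table1
ip rule add to 192.168.20.0/24 lookup table2
ip route add 192.168.10.0/24 dev eth0 via 10.0.10.1 table table1
ip route add 192.168.20.0/24 dev eth1 via 10.0.20.1 table table2

# Disable large send offload (TSO on Linux) so captured frame sizes
# reflect what actually crossed the wire:
ethtool -K eth0 tso off
ethtool -K eth1 tso off
```

The receiver side would mirror this with the 10.0.x.0/24 destinations and 192.168.x.1 gateways swapped in.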
The bootstrapping script sets up, most importantly, our MTU settings and policy-based routing rules, such that the eth0 interfaces are configured with a standard MTU of 1500 and route traffic to each other, and the eth1 interfaces are configured with a Jumbo Frame MTU of 9000 and also route traffic to each other. The only other point of interest in this bootstrapping script is the set of commands used to disable large send offload. LSO is a feature on modern Ethernet adapters that allows the TCP/IP network stack to build a large TCP message of up to 64 kilobytes in length before sending it to the Ethernet adapter. We disable it so that the packet captures we take are not skewed in terms of the captured payload sizes, which would be the case if it were left on. Starting from within the AWS console, select an appropriate region into which you'll launch the CloudFormation stack. Before we launch the CloudFormation stack, ensure that an SSH key pair exists within the chosen region and that you're in possession of its private key, as it will be referenced by the CloudFormation template. To do this, starting from the AWS console home, click on the EC2 service link. Next, click the Key Pairs link under Network & Security. Confirm that an SSH key pair exists, and if not, create a new one. We'll now launch the CloudFormation stack. Recall that this will build both the left-hand side and right-hand side VPCs. Navigate to the CloudFormation service. Now click the Create New Stack button. We then select the upload option and then click the Choose File button. Navigate to and select the cloudacademy.2vpc.multinet.yaml template from our recently cloned repository. Click the Next button near the bottom of the current page. Give the CloudFormation stack a name. Here, we've chosen to call ours VPC-JumboFrames. Select the appropriate SSH key name from the dropdown and then click Next. Leave all defaults as is and click Next again. Finally, click the Create button.
This will launch our left-hand side and right-hand side VPCs, and will take approximately five to 10 minutes to complete. After the CloudFormation stack launch completes successfully, signaled by the CREATE_COMPLETE status, navigate to the Outputs tab and take note of the outputs. These will be important to us in the coming steps. The two outputs represent the Elastic IP addresses we will use to remotely SSH into the respective test instances. Let's now SSH into the right-hand side instance. This is referred to as the VPC2 receiver instance. Make sure that you have SSHed into the receiver. This can be determined by the banner displayed within the remote session. Next, let's take a look at the configuration of the interfaces. First, let's run the ifconfig command. As you can see, there are two interfaces, eth0 and eth1. Examining their respective MTUs, we can see that eth0 is configured with an MTU of 1500, and eth1 is configured with an MTU of 9000. Let's now view the config files that establish these MTU settings. We'll first need to elevate our permissions. Let's call upon sudo for that. Next, navigate to the /etc/dhcp directory and list the contents. Now we'll simply display the contents of the dhclient-eth0.conf and dhclient-eth1.conf files. These files are used to permanently configure the respective MTUs. Okay, let's take a look at our policy-based routing configuration. We do so by first running the command ip rule list. Here we can see our two custom routing policy rules. Any traffic destined for 10.0.10.0/24 will use custom routing table one, and any traffic destined for 10.0.20.0/24 will use custom routing table two. Let's now look at the contents of table one. We do so by running the command ip route show table table1. As you can see, there's a single rule that states traffic destined for 10.0.10.0/24 will leave via interface eth0 and therefore will have an MTU of 1500.
Note that the traffic will hop via the local AWS cloud router, as per the stated address of 192.168.10.1. Let's now look at the contents of table two. We do so by running the command ip route show table table2. As you can see, there's a single rule that states traffic destined for 10.0.20.0/24 will leave via interface eth1 and therefore will have an MTU of 9000. Note the traffic will hop via the local AWS cloud router, as per the stated address 192.168.20.1. Okay, let's exit sudo. Next, let's jump into the /tmp directory and list its contents. There are a number of script files that have been created for us by our user-data-embedded bootstrapping script. Let's first view the contents of the receiver.sh script. This script is used to set up a listener service on port 10000. The listener service uses Netcat to listen for any data arriving on port 10000 and then simply dumps it out to a null device. By avoiding any disk writes, which would happen if we redirected the incoming data to a file, we can measure throughput and negate any latency due to disk reads or writes. This will be important for us when we compare the effect of using different MTU settings for data transmission. Let's start up the receiver script and put it into the background. We do so by running this command. Now run the command jobs, and we can see that it's sitting there waiting for incoming traffic. Leave this terminal open and fire up a new terminal. Let's now SSH into the left-hand side instance. This is referred to as the VPC1 sender instance. Make sure that you've SSHed into the sender. This can be determined by the banner displayed within the remote session. Next, let's take a look at the configuration of the interfaces. First, let's run the ifconfig command. As you can see, there are two interfaces, eth0 and eth1. Examining their respective MTUs, we can see that eth0 is configured with an MTU of 1500, and eth1 is configured with an MTU of 9000.
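For reference, the receiver.sh listener started earlier amounts to little more than a single Netcat invocation. This is a sketch under the assumption that the instance's Netcat variant accepts a bare `-l <port>`; the repository script may differ in its flags:

```shell
#!/bin/bash
# receiver.sh (sketch): listen on TCP port 10000 and discard everything
# received, so disk writes never distort the throughput measurement.
nc -l 10000 > /dev/null
```

Redirecting to /dev/null rather than a file is the key design choice here, as the transcript notes: it keeps disk latency out of the timing comparison.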
Let's now view the config files that establish these MTU settings. We'll need to first elevate our permissions. Let's call upon sudo for that. Next, navigate to the /etc/dhcp directory and list the contents. Now we'll simply display the contents of the dhclient-eth0.conf and dhclient-eth1.conf files. These files are used to permanently configure the respective MTUs. Okay, let's take a look at our policy-based routing configuration. We do so by first running the command ip rule list. Here we can see our two custom routing policy rules. Any traffic destined for 192.168.10.0/24 will use custom routing table one, and any traffic destined for 192.168.20.0/24 will use custom routing table two. Let's now look at the contents of table one. We do so by running the command ip route show table table1. As you can see, there's a single rule that states traffic destined for 192.168.10.0/24 will leave via interface eth0 and therefore will have an MTU of 1500. Note the traffic will hop via the local AWS cloud router, as per the stated address 10.0.10.1. Let's now look at the contents of table two. We do so by running the command ip route show table table2. As you can see, there's a single rule that states traffic destined for 192.168.20.0/24 will leave via interface eth1 and therefore will have an MTU of 9000. Note the traffic will hop via the local AWS cloud router, as per the stated address 10.0.20.1. Okay, let's exit sudo. Next, let's jump into the /tmp directory and list its contents. You can see that we have two scripts whose names tell us something about their purpose. Let's list the contents of the send.via.eth0.sh script. As you can see, this script is designed to use the dd utility to generate one megabyte of data and pipe it out to the Netcat utility, which in turn sends all the data to port 10000 on the receiver host's eth0 private IP address.
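Based on that description, send.via.eth0.sh is presumably along these lines. The receiver address is a placeholder, and the actual script in the repository may differ in detail:

```shell
#!/bin/bash
# send.via.eth0.sh (sketch): generate exactly 1 MB (1024 blocks of 1024
# bytes each) and stream it to the receiver's eth0 private IP on port
# 10000. `time` reports the elapsed time; dd reports bytes moved and
# throughput on stderr.
RECEIVER_ETH0_IP=192.168.10.100   # placeholder — use your receiver's eth0 IP
time dd if=/dev/zero bs=1024 count=1024 | nc "$RECEIVER_ETH0_IP" 10000
```

An eth1 equivalent would differ only in targeting the receiver's eth1 private IP, which the policy routing then steers across the 9000 MTU path.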
The command has some additional timing wrapped around it, which allows us to measure the time taken for the sending and receiving of one megabyte of data. Because this script is designed to send the data to the receiver's eth0 interface, the data will be transferred via the respective eth0 interfaces and therefore will use an MTU of 1500. Let's kick off this script. The script should run and return very quickly. The output of the script shows us a measure of the time taken and the throughput. Let's now list the contents of the equivalent eth1 send script. The script is identical in structure to the previous send script, except that the data is sent to the receiver's eth1 interface and therefore will use an MTU of 9000. Let's kick off this script. This script should run and return very quickly. The output of this script again shows us a measure of the time taken and the throughput. Comparing the two outputs, we can clearly see that the script which uses the Jumbo Frame MTU of 9000 achieves both a quicker execution time and a higher overall throughput. This is an expected benefit of using Jumbo Frame MTUs. Now we'll jump back to the receiver terminal session. Next, let's list the /tmp directory contents again. Viewing the contents of the eth0 packet capture script, you can see this script is designed to use tcpdump to capture all packets destined for the eth0 interface and port 10000. All captured packets will be written out to the capture file named packet.1Mb.eth0.dump. Let's start this script up. We'll need to elevate our permissions for this script, so let's sudo up. Now jump back to the sender terminal and execute the eth0 sender script to generate our one megabyte of traffic over the eth0 interfaces. Jumping back to the receiver terminal, kill the running packet capture script. Listing the contents of the directory again, we can see our new eth0 packet capture dump file. We'll now perform some quick analysis over this packet capture dump file.
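The capture script just described presumably wraps tcpdump roughly as follows, with the interface and file names taken from the narration:

```shell
#!/bin/bash
# Capture full packets (-s 0 means no snaplen truncation) arriving on
# eth0 for port 10000, writing them raw (-w) to the dump file for later
# analysis. Requires root, hence the earlier `sudo`.
tcpdump -i eth0 -s 0 -w packet.1Mb.eth0.dump port 10000
```

Capturing the full packet rather than the default snaplen matters here, since the analysis sums payload lengths to confirm exactly one megabyte arrived.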
Firstly, let's take a look at the raw data. Here we can clearly see that the majority of the captured packets are 1500 bytes long, matching the eth0 MTU. Next, let's run the script to sum up all the individual frame payload lengths. The sum should be exactly the number of bytes the sender script sent, which is 1024 × 1024, or 1048576 bytes: one megabyte. Here you can see the numbers match. Perfect. Finally, let's establish a frame count for the one megabyte data transmission. Okay, let's now repeat the same packet capture exercise but using the equivalent eth1 script and therefore the associated Jumbo Frame 9000 MTU network pathway. If all goes well, we should see the exact same number of bytes captured, but with far fewer frames. We begin by viewing the contents of the eth1 packet capture script. You can see this script is designed to use tcpdump to capture all packets destined for the eth1 interface and port 10000. All captured packets will be written out to the capture file named packets.1Mb.eth1.dump. Let's start this script up. Now jump back to the sender terminal and execute the eth1 sender script to generate our one megabyte of traffic over the eth1 interfaces. Jumping back to the receiver terminal, kill the running packet capture script. Listing the contents of the directory again, we can see our new eth1 packet capture dump file. We'll now perform some quick analysis over this packet capture dump file. Firstly, let's take a look at the raw data. Here we can clearly see that the majority of the captured packets are 9000 bytes long, matching the eth1 MTU. Next, let's run the script to sum up all the individual frame payload lengths. The sum should be exactly the number of bytes the sender script sent, which is 1024 × 1024, or 1048576 bytes: one megabyte. Here you can see the numbers match. Perfect. Finally, let's establish a frame count for the one megabyte data transmission when utilizing Jumbo Frames.
As you can see, the frame count is significantly lower, as expected. One final interesting calculation: if we divide the frame count for the one megabyte of traffic sent between the eth0 interfaces by the frame count for the one megabyte of traffic sent between the eth1 interfaces, the result closely approximates 9000 divided by 1500, that is, six, which was explained earlier by the diagram now re-shown. As previously explained, a Jumbo Frame carries six times the payload of a standard frame for roughly the same per-frame overhead, so per byte of data it incurs about one sixth of the overhead. This enhanced payload capacity results in better and more efficient channel utilization, as witnessed in this demonstration.
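That closing ratio can be sanity-checked with some quick shell arithmetic. The per-frame payload figures below assume 40 bytes of IP and TCP headers per frame and ignore TCP options, so these are estimates rather than the exact counts from the captures:

```shell
# 1 MB of payload split across frames at each MTU (ceiling division).
TOTAL=$((1024 * 1024))             # 1048576 bytes to send
STD_PAYLOAD=$((1500 - 40))         # usable TCP payload per standard frame
JUMBO_PAYLOAD=$((9000 - 40))       # usable TCP payload per jumbo frame
STD_FRAMES=$(( (TOTAL + STD_PAYLOAD - 1) / STD_PAYLOAD ))       # 719
JUMBO_FRAMES=$(( (TOTAL + JUMBO_PAYLOAD - 1) / JUMBO_PAYLOAD )) # 118
echo "standard frames: $STD_FRAMES"
echo "jumbo frames:    $JUMBO_FRAMES"
echo "ratio:           $(( STD_FRAMES / JUMBO_FRAMES ))"
```

The ratio lands at roughly six, matching the 9000-divided-by-1500 figure quoted above.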

About the Author

Students: 8648
Labs: 26
Courses: 58
Learning paths: 13

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.