How to Optimize Amazon S3 Performance

Amazon S3 is one of the most common storage options for many organizations. As an object storage service, it is used for a wide variety of data types, from the smallest objects to huge datasets. All in all, Amazon S3 is a great service for storing a wide scope of data types in a highly available and resilient environment. Your S3 objects are likely being read and accessed by your applications, other AWS services, and end users, but are you getting the best performance from them? This post discusses some of the mechanisms and techniques you can apply to ensure you get the best possible performance when using Amazon S3.

Four best practices when working with S3

1. TCP Window Scaling

TCP window scaling is a method that enables you to enhance network throughput by adding a window scale option to the TCP header, which allows the client and server to advertise a receive window larger than the default maximum of 64 KB. This isn't something specific to Amazon S3; it operates at the protocol level, so you can perform window scaling on your client when connecting to any server that supports it. More information on this can be found in RFC 1323.

When TCP establishes a connection between a source and destination, a 3-way handshake takes place, originating from the source (client). Looking at this from an S3 perspective, your client might need to upload an object to S3, but before this can happen a connection to the S3 servers needs to be established. The client sends a TCP packet with a specified window scale factor in the header; this initial request is known as a SYN request, part 1 of the 3-way handshake. S3 receives the request and responds with a SYN/ACK message containing its own supported window scale factor; this is part 2. Part 3 then involves an ACK message back to the S3 server acknowledging the response. On completion of this 3-way handshake, a connection is established and data can be sent between the client and S3.

By increasing the window size with a scale factor (window scaling), you allow larger quantities of data to be in flight before an acknowledgment is required, and therefore more data can be sent at a quicker rate.

Window Scaling
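As an illustration, the minimal sketch below (assuming a Linux client and Python's standard socket module; the endpoint and buffer size are placeholder values) requests a receive buffer larger than 64 KB before connecting, which is what prompts the kernel to offer the window scale option in its SYN:

```python
import socket

def open_scaled_connection(host="s3.amazonaws.com", port=443, rcvbuf=4 * 1024 * 1024):
    """Open a TCP connection with a large receive buffer requested up front."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # A window larger than 64 KB can only be advertised if window scaling was
    # negotiated during the SYN/SYN-ACK exchange, so the buffer must be
    # requested *before* connect().
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf)
    sock.connect((host, port))
    return sock

if __name__ == "__main__":
    conn = open_scaled_connection()
    # The buffer the kernel actually granted (Linux may double the requested value).
    print("receive buffer:", conn.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    conn.close()
```

In practice the operating system and the AWS SDKs negotiate this for you, since window scaling is enabled by default on modern Linux, Windows, and macOS clients; a snippet like this is mainly useful when diagnosing throughput problems.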

2. TCP Selective Acknowledgement (SACK)

Sometimes multiple packets are lost when using TCP, and it can be difficult to ascertain exactly which packets within a TCP window have gone missing. As a result, all of the packets in the window may be resent, even though some of them have already arrived at the receiver, which is inefficient. TCP selective acknowledgment (SACK) helps performance by notifying the sender of only the failed packets within that window, allowing the sender to resend just those packets.

Again, the request to use SACK has to be initiated by the sender (the source client) during connection establishment, within the SYN phase of the handshake. This option is known as SACK-permitted. More information on how to use and implement SACK can be found in RFC 2018.
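If you want to confirm that your client will offer SACK (and window scaling) during the handshake, the minimal sketch below, which assumes a Linux client where these kernel parameters are exposed under /proc/sys/net/ipv4, reads the relevant settings:

```python
from pathlib import Path

def tcp_option_enabled(name: str) -> bool:
    """Return True if the given TCP option is enabled in the kernel (Linux only)."""
    return Path(f"/proc/sys/net/ipv4/{name}").read_text().strip() == "1"

if __name__ == "__main__":
    for option in ("tcp_sack", "tcp_window_scaling"):
        state = "enabled" if tcp_option_enabled(option) else "disabled"
        print(f"{option}: {state}")
```

Both options are negotiated by the kernel during connection establishment, so there is nothing to set per request; a value of 1 simply means the option will be offered in the SYN.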

3. Scaling S3 Request Rates

On top of TCP window scaling and SACK, S3 itself is already highly optimized for very high request throughput. In July 2018, AWS made a significant change to these request rates, as per the AWS S3 announcement. Prior to this announcement, AWS recommended that you randomize prefixes within your bucket to help optimize performance; this is no longer required. You can now scale request rate performance linearly with the number of prefixes used within your bucket.

You are now able to achieve 3,500 PUT/POST/DELETE requests per second along with 5,500 GET requests per second, per prefix. These limits apply to a single prefix; however, there is no limit on the number of prefixes that can be used within an S3 bucket. As a result, if you had 20 prefixes you could reach 70,000 PUT/POST/DELETE and 110,000 GET requests per second within the same bucket.

S3 storage operates across a flat structure, meaning there is no hierarchy of folders: you simply have a bucket, and all objects are stored in a flat address space within that bucket. You are able to create folders and store objects within them, but these are not hierarchical; they are simply prefixes to the object key that help make the object unique. For example, consider the following three objects within a single bucket:
Presentation/Meeting.ppt
Project/Plan.pdf
Stuart.jpg

The ‘Presentation’ folder acts as a prefix to identify the object, and this full pathname is known as the object key. Similarly, the ‘Project’ folder is the prefix to its object. ‘Stuart.jpg’ has no prefix and so sits in the root of the bucket itself.
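To take advantage of per-prefix request rates, an application can spread its object keys across a fixed set of prefixes. The sketch below is a minimal illustration using boto3; the bucket name, the prefix count, and the hashing scheme are illustrative assumptions rather than an AWS-prescribed pattern:

```python
import hashlib

import boto3

s3 = boto3.client("s3")

# With 20 prefixes, the bucket can in principle serve up to 20 x 3,500
# PUT/POST/DELETE and 20 x 5,500 GET requests per second.
NUM_PREFIXES = 20

def prefixed_key(key: str) -> str:
    """Deterministically map a key to one of NUM_PREFIXES prefixes so reads can find it later."""
    index = int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_PREFIXES
    return f"prefix-{index:02d}/{key}"

def put_object(bucket: str, key: str, body: bytes) -> None:
    # The prefix becomes part of the object key, e.g. "prefix-07/Project/Plan.pdf".
    s3.put_object(Bucket=bucket, Key=prefixed_key(key), Body=body)

put_object("my-example-bucket", "Project/Plan.pdf", b"...")
```

The gain only materializes when requests are actually spread across the prefixes, for example by many clients reading and writing in parallel.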

Learn how to create your first Amazon S3 bucket in this Hands-on Lab.

4. Integration of Amazon CloudFront

Another method of optimization, which comes by design, is to place Amazon CloudFront in front of Amazon S3. This works particularly well if the majority of requests for your S3 data are GET requests. Amazon CloudFront is AWS's content delivery network, which speeds up the distribution of your static and dynamic content through its worldwide network of edge locations.

Normally when a user requests content from S3 (a GET request), the request is routed to the S3 service and the corresponding servers to return that content. However, if you put CloudFront in front of S3, CloudFront can cache commonly requested objects. The user's GET request is then routed to the closest edge location, which provides the lowest latency, delivering the best performance and returning the cached object. This also helps to reduce your AWS S3 costs by reducing the number of GET requests made against your buckets.
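Once a distribution is configured with the bucket as its origin, the application simply requests objects through the distribution's domain name instead of the S3 endpoint. The minimal sketch below assumes a hypothetical CloudFront domain and uses Python's standard library for the GET request:

```python
import urllib.request

# Placeholder value: replace with the domain name of your own distribution.
CLOUDFRONT_DOMAIN = "d1234example.cloudfront.net"

def get_object_via_cloudfront(key: str) -> bytes:
    """Fetch an object through CloudFront so repeat GETs are served from the edge cache."""
    url = f"https://{CLOUDFRONT_DOMAIN}/{key}"
    with urllib.request.urlopen(url) as response:
        return response.read()

# The first request for a key populates the nearest edge cache; subsequent
# requests from nearby users are served from that edge location rather than S3.
data = get_object_via_cloudfront("Presentation/Meeting.ppt")
```

The origin and cache behavior themselves are configured on the distribution (for example via the CloudFront console or infrastructure as code), not in this request path.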

This post has explained a number of options that can help you optimize performance when working with S3 objects.

For further information on some of the topics mentioned in this post please take a look at our library content.


Written by

Stuart Scott

Stuart is the AWS content lead at Cloud Academy where he has created over 40 courses reaching tens of thousands of students. His content focuses heavily on cloud security and compliance, specifically on how to implement and configure AWS services to protect, monitor and secure customer data and their AWS environment.
