
Scaling Access to Shared Buckets with S3 Access Points



This section of the AWS Certified Solutions Architect - Professional learning path introduces you to the core storage concepts and services relevant to the SAP-C02 exam. We start with an introduction to AWS storage services, understand the options available, and learn how to select and apply AWS storage services to meet specific requirements. 


Learning Objectives

  • Obtain an in-depth understanding of Amazon S3 (Simple Storage Service)
  • Learn how to improve your security posture in S3
  • Get both a theoretical and practical understanding of EFS
  • Learn how to create an EFS file system, manage EFS security, and import data in EFS
  • Learn about EC2 storage and Elastic Block Store
  • Learn about the different performance factors associated with AWS storage services

Hello and welcome to this lecture, which will look at S3 access points and how they can be used to share buckets at scale with ease. A big use case for Amazon S3 is storing huge datasets that can be shared and accessed by different services, applications, and teams.

Managing access to this data through IAM policies, bucket policies, and ACLs can become an administrative burden at scale: each requester might need different permissions to your data, which leads to large policies that are difficult to decipher.

To help alleviate this issue, AWS released Amazon S3 access points, which simplify the management of shared data at scale in S3. An S3 access point can be created, configured, and attached to a single bucket.

You can't use the same access point across multiple buckets, but you can attach multiple access points to a single bucket. This allows you to create a different access point, each with its own set of permissions, for each application or team that requires access to the shared bucket where your data resides.

Access points only allow you to perform object operations, for example, s3:GetObject and s3:PutObject; it's not possible to use bucket operations, such as s3:DeleteBucket. Permissions assigned to each S3 access point work in conjunction with the underlying bucket policy, as I will explain shortly, and allow you to configure different permissions for each access point.
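As a rough sketch, the object-versus-bucket distinction above can be captured like this. Note that `allowed_via_access_point` is a hypothetical helper for illustration, not an AWS API, and it only classifies the actions named in this lecture:

```python
# Hypothetical helper (not an AWS API): illustrates that access points
# serve object-level operations but not bucket-level operations.
OBJECT_OPERATIONS = {"s3:GetObject", "s3:PutObject"}  # allowed via an access point
BUCKET_OPERATIONS = {"s3:DeleteBucket"}               # must target the bucket directly

def allowed_via_access_point(action: str) -> bool:
    """Return True if the S3 action can be performed through an access point."""
    if action in OBJECT_OPERATIONS:
        return True
    if action in BUCKET_OPERATIONS:
        return False
    raise ValueError(f"action not classified in this sketch: {action}")

print(allowed_via_access_point("s3:GetObject"))    # True
print(allowed_via_access_point("s3:DeleteBucket")) # False
```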

As part of the access point configuration, you can decide if you only want to accept requests from a specific VPC, and therefore restrict access to your own private network. Alternatively, you can allow access from anywhere outside of your VPC.

If you do not need to open up your bucket to requests from outside of your VPC, essentially public access, then you can configure settings to restrict that public access for your access point. By default, all public access is blocked for newly created access points. In the next lecture, I will discuss public access settings in detail.
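For illustration, the request for creating a VPC-restricted access point, as you would pass it to the S3 Control CreateAccessPoint API (for example via boto3's `s3control` client), might take the shape below. The account ID, bucket name, access point name, and VPC ID are placeholders:

```python
def vpc_access_point_request(account_id: str, bucket: str, name: str, vpc_id: str) -> dict:
    """Build CreateAccessPoint parameters for a VPC-restricted access point."""
    return {
        "AccountId": account_id,
        "Name": name,
        "Bucket": bucket,
        # Restrict the access point to requests originating from this VPC.
        "VpcConfiguration": {"VpcId": vpc_id},
        # Newly created access points block public access by default;
        # stating it explicitly documents the intent.
        "PublicAccessBlockConfiguration": {
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    }

# Illustrative values only.
req = vpc_access_point_request("123456789012", "s3-deepdive", "internal-ap", "vpc-0abc1234")
```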

One important point to note is that whatever controls and restrictions you configure for an access point, they will not alter the behavior of any other controls you already have in place on the bucket. Your access point restrictions affect only those connections that reach your bucket via that access point.

Another configurable and optional element of an access point is the access point policy, a JSON policy that defines permissions when using the access point. Permissions granted within the access point policy can only be as permissive as the underlying bucket policy allows. This means that if the user Stuart was granted s3:PutObject in an access point policy, then the underlying bucket policy would also need to grant s3:PutObject. As a result, if you're going to use access points to help you maintain security at scale for your shared dataset, it is best to add a policy to the bucket that delegates access control to the access points.
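Sketching that example: an access point policy granting the user Stuart s3:PutObject through a single access point might look like the following. The account ID, region, and access point name are illustrative; objects reached through an access point are addressed in policies as `<access-point-arn>/object/<key>`:

```python
import json

ACCOUNT = "123456789012"  # illustrative account ID
AP_ARN = f"arn:aws:s3:us-east-1:{ACCOUNT}:accesspoint/stuart-ap"

# Grant Stuart s3:PutObject through this access point only. The
# underlying bucket policy must be at least this permissive as well.
access_point_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT}:user/Stuart"},
            "Action": "s3:PutObject",
            # Object resources under an access point use the
            # <access-point-arn>/object/<key> form.
            "Resource": f"{AP_ARN}/object/*",
        }
    ],
}

print(json.dumps(access_point_policy, indent=2))
```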

The bucket policy shown here allows any access point associated with the bucket owner's account full access to the S3 deepdive bucket and its objects. As a result, all access to this bucket can now be determined by the access point policies attached to it.
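A delegation policy of that shape can be sketched as follows. The bucket name and account ID are illustrative; the s3:DataAccessPointAccount condition key limits the grant to requests arriving through access points owned by the given account:

```python
ACCOUNT = "123456789012"  # illustrative bucket owner account ID

# Bucket policy that delegates access control to access points:
# allow any action, but only when the request is made through an
# access point owned by this account.
delegation_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ACCOUNT},
            "Action": "*",
            "Resource": [
                "arn:aws:s3:::s3-deepdive",
                "arn:aws:s3:::s3-deepdive/*",
            ],
            "Condition": {
                "StringEquals": {"s3:DataAccessPointAccount": ACCOUNT}
            },
        }
    ],
}
```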

Much like S3 buckets, access points also have an Amazon Resource Name, an ARN, depicting the region, account number, and access point name. As a result, it's not possible to have two access points with the same name in the same account and region.
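The ARN structure can be sketched like this; the region, account number, and name shown are placeholders:

```python
def access_point_arn(region: str, account_id: str, name: str) -> str:
    """Build the ARN for an S3 access point. Because the name is scoped
    by account and region, two access points there cannot share a name."""
    return f"arn:aws:s3:{region}:{account_id}:accesspoint/{name}"

print(access_point_arn("us-east-1", "123456789012", "analytics-ap"))
# arn:aws:s3:us-east-1:123456789012:accesspoint/analytics-ap
```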

Once you have created an access point for your bucket with predefined security controls, you can connect to your bucket access point via the AWS Management Console, the AWS SDKs, or the S3 REST APIs.

About the Author

Danny has over 20 years of IT experience as a software developer, cloud engineer, and technical trainer. After attending a conference on cloud computing in 2009, he knew he wanted to build his career around what was still a very new, emerging technology at the time — and share this transformational knowledge with others. He has spoken to IT professional audiences at local, regional, and national user groups and conferences. He has delivered in-person classroom and virtual training, interactive webinars, and authored video training courses covering many different technologies, including Amazon Web Services. He currently has six active AWS certifications, including certifications at the Professional and Specialty level.