Increasing Your Security Posture when Using Amazon S3
S3 Encryption Mechanisms
Amazon S3 Lifecycle Configurations
Introduction to Amazon EFS
EFS in Practice
Amazon Elastic Block Store (EBS)
AWS Storage Gateway
Performance Factors Across AWS Storage Services
The course is part of this learning path
This section of the AWS Certified Solutions Architect - Professional learning path introduces you to the core storage concepts and services relevant to the SAP-C02 exam. We start with an introduction to AWS storage services, explore the options available, and learn how to select and apply AWS storage services to meet specific requirements.
- Obtain an in-depth understanding of Amazon S3 - Simple Storage Service
- Learn how to improve your security posture in S3
- Get both a theoretical and practical understanding of EFS
- Learn how to create an EFS file system, manage EFS security, and import data into EFS
- Learn about EC2 storage and Elastic Block Store
- Learn about the different performance factors associated with AWS storage services
An S3 Lifecycle configuration is technically an XML file. While you won’t need to know XML to create lifecycle configurations in the AWS console, the XML format is helpful in understanding the various components to see how it all works under the hood. For that reason, I’ll be describing the anatomy of a lifecycle configuration using XML throughout this video.
Each lifecycle configuration contains a set of rules. Each rule is broken up into four components: ID, Filters, Status, and Actions.
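As a sketch, here's how those four components fit together in a single rule. The element names follow the S3 lifecycle XML schema; the rule ID, prefix, and day count are made-up values for illustration:

```xml
<LifecycleConfiguration>
  <Rule>
    <!-- ID: a name that uniquely identifies this rule -->
    <ID>archive-old-logs</ID>
    <!-- Filters: which objects the rule applies to -->
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <!-- Status: Enabled or Disabled -->
    <Status>Enabled</Status>
    <!-- Actions: what to do with matching objects, and when -->
    <Transition>
      <Days>30</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
```

We'll look at each of these components in turn below.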
The ID uniquely identifies the lifecycle rule - you can consider this the name of the rule. This is important, as one lifecycle configuration can have up to 1,000 rules, and the ID can help you keep track of which rule does what.
The filters section defines WHICH objects in your bucket you'd like to take action on. You can choose to apply actions to all objects or a subset of objects in a bucket. If you choose a subset of objects, you can filter based on prefix, object tag, or object size. Or to be very granular, you could filter based on a combination of these attributes. For example, you can create a filter that matches all objects with the prefix ProjectBlue/ that are also tagged with the Classification tag value "Secret". Notice that when you combine multiple filters, you have to use the keyword "And" in XML.
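That combined prefix-and-tag filter might look like this (the prefix and tag values here come from the example above):

```xml
<Filter>
  <And>
    <!-- Both conditions must match for the rule to apply -->
    <Prefix>ProjectBlue/</Prefix>
    <Tag>
      <Key>Classification</Key>
      <Value>Secret</Value>
    </Tag>
  </And>
</Filter>
```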
For object size, I can transition or expire objects if they're greater than a specific size, less than a specific size, or within a range between two size bounds. For example, I can create a filter that matches my objects if they're larger than 500 bytes but smaller than 10,000 bytes. Keep in mind that the maximum filter size is 5 TB, which is also the largest object size S3 supports.
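As a sketch, that size-range filter uses the two object size elements, combined with "And" (values are in bytes):

```xml
<Filter>
  <And>
    <!-- Match objects between 500 and 10,000 bytes -->
    <ObjectSizeGreaterThan>500</ObjectSizeGreaterThan>
    <ObjectSizeLessThan>10000</ObjectSizeLessThan>
  </And>
</Filter>
```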
The next component is status. With the status field, you can enable and disable each lifecycle rule. This can be helpful when you’re testing lifecycle configurations out, as you figure out what the best rules for your workload are. Once you change the status to “disabled”, S3 won’t run the actions defined in that rule - essentially stopping the lifecycle action. And when you’re ready to run those actions on your objects again, you can always change the status back to enabled.
The last, and arguably the most important component is the actions section. This is where you define WHAT happens to your objects - are you transitioning them to another storage class, or are you deleting them?
There are six main actions that you can use: Transition, Expiration, NoncurrentVersionTransition, NoncurrentVersionExpiration, ExpiredObjectDeleteMarker, and AbortIncompleteMultipartUpload. The first two are transition actions and expiration actions. Transition actions enable you to move data automatically between S3 storage classes.
Expiration actions enable you to automate the deletion of your objects in S3. For both transition and expiration actions, you can define when to move or delete these objects based on object age. And age is based on when the object was last created or modified.
Here’s an example of how both of these actions are structured in XML. In this example, the first action transitions all objects prefixed with ProjectBlue/ to the S3 Glacier Flexible Retrieval storage class 365 days after they were created. The second action deletes the objects prefixed with ProjectBlue/ after 2,550 days - roughly seven years - because they aren’t needed any longer.
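A sketch of that rule in XML - note that the storage class value for S3 Glacier Flexible Retrieval is GLACIER, and the rule ID here is an illustrative name:

```xml
<Rule>
  <ID>project-blue-retention</ID>
  <Filter>
    <Prefix>ProjectBlue/</Prefix>
  </Filter>
  <Status>Enabled</Status>
  <!-- Move to Glacier Flexible Retrieval after one year -->
  <Transition>
    <Days>365</Days>
    <StorageClass>GLACIER</StorageClass>
  </Transition>
  <!-- Delete after 2,550 days -->
  <Expiration>
    <Days>2550</Days>
  </Expiration>
</Rule>
```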
If you have versioning turned on for your bucket, transition actions and expiration actions only work for the current version of your object. If you want to transition noncurrent versions of your object, you must use the NoncurrentVersionTransition action.
Similarly, if you want to delete noncurrent versions of your object, you must use the NoncurrentVersionExpiration action. For both the NoncurrentVersionTransition and NoncurrentVersionExpiration actions, you can define when to move or delete objects based on two things.
- First is the number of days since the object has been noncurrent, which Amazon calculates as the number of days since the object was overwritten or deleted.
- And second, is the maximum number of versions to retain. This is helpful when you want to save a few versions to rollback to for data protection, while removing old versions of your object to save on storage spend.
Here’s an example of a lifecycle configuration that uses both a noncurrent version transition and a noncurrent version expiration action. In this rule, all noncurrent versions are moved to S3 Standard - Infrequent Access 30 days after they become noncurrent. In addition, it deletes all noncurrent versions 730 days, or two years, after they become noncurrent, while retaining the 3 latest versions. Keep in mind, you can choose to retain any number of versions between 1 and 100.
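A sketch of that rule, using the noncurrent-version elements from the S3 lifecycle schema (the empty prefix applies the rule to the whole bucket, and the rule ID is illustrative):

```xml
<Rule>
  <ID>noncurrent-version-cleanup</ID>
  <Filter>
    <Prefix></Prefix>
  </Filter>
  <Status>Enabled</Status>
  <!-- Move noncurrent versions to Standard-IA after 30 days -->
  <NoncurrentVersionTransition>
    <NoncurrentDays>30</NoncurrentDays>
    <StorageClass>STANDARD_IA</StorageClass>
  </NoncurrentVersionTransition>
  <!-- Delete noncurrent versions after 730 days, keeping the 3 newest -->
  <NoncurrentVersionExpiration>
    <NoncurrentDays>730</NoncurrentDays>
    <NewerNoncurrentVersions>3</NewerNoncurrentVersions>
  </NoncurrentVersionExpiration>
</Rule>
```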
There are additional special actions that you can take as well. One of them is the ExpiredObjectDeleteMarker action. This is helpful if you have objects with no remaining versions, where only a delete marker is left. That leftover marker is referred to as an expired object delete marker, and you can use this action to remove it.
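In XML, this action is expressed inside the Expiration element. As a sketch (the rule ID is illustrative):

```xml
<Rule>
  <ID>remove-expired-delete-markers</ID>
  <Filter>
    <Prefix></Prefix>
  </Filter>
  <Status>Enabled</Status>
  <!-- Clean up delete markers with no remaining object versions -->
  <Expiration>
    <ExpiredObjectDeleteMarker>true</ExpiredObjectDeleteMarker>
  </Expiration>
</Rule>
```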
And last, there is the AbortIncompleteMultipartUpload action. If you have incomplete multipart uploads that you need to clean up, you should use this action. With this action, you can specify the maximum time, in days, that your multipart uploads can remain in progress.
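A rule using this action might look like the following sketch, which aborts any multipart upload still in progress after its cutoff (14 days here, matching the example below; the rule ID is illustrative):

```xml
<Rule>
  <ID>abort-stale-multipart-uploads</ID>
  <Filter>
    <Prefix></Prefix>
  </Filter>
  <Status>Enabled</Status>
  <!-- Abort multipart uploads still incomplete 14 days after they start -->
  <AbortIncompleteMultipartUpload>
    <DaysAfterInitiation>14</DaysAfterInitiation>
  </AbortIncompleteMultipartUpload>
</Rule>
```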
For example, you can specify that your multipart uploads can remain in progress for 14 days. If the upload does not complete within 14 days, S3 will abort it and delete its uploaded parts. In summary, S3 lifecycle configurations are made up of an ID, filters, status, and actions. That’s all for this one!
Danny has over 20 years of IT experience as a software developer, cloud engineer, and technical trainer. After attending a conference on cloud computing in 2009, he knew he wanted to build his career around what was still a very new, emerging technology at the time — and share this transformational knowledge with others. He has spoken to IT professional audiences at local, regional, and national user groups and conferences. He has delivered in-person classroom and virtual training, interactive webinars, and authored video training courses covering many different technologies, including Amazon Web Services. He currently has six active AWS certifications, including certifications at the Professional and Specialty level.