Components of a Lifecycle Configuration


This section of the Solution Architect Associate learning path introduces you to the core storage concepts and services relevant to the SAA-C03 exam. We start with an introduction to the AWS storage services, explore the options available, and learn how to select and apply AWS storage services to meet specific requirements.

Learning Objectives

  • Obtain an in-depth understanding of Amazon S3 - Simple Storage Service
  • Get both a theoretical and practical understanding of EFS
  • Learn how to create an EFS file system, manage EFS security, and import data in EFS
  • Learn about EC2 storage and Elastic Block Store
  • Learn about the services available in AWS to optimize your storage
  • Learn how to use AWS DataSync to move data between storage systems and AWS storage services

An S3 Lifecycle configuration is technically an XML file. While you won’t need to know XML to create lifecycle configurations in the AWS console, the XML format is helpful in understanding the various components to see how it all works under the hood. For that reason, I’ll be describing the anatomy of a lifecycle configuration using XML throughout this video. 

Each lifecycle configuration contains a set of rules. Each rule is broken up into four components: ID, Filters, Status, and Actions. 
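As a sketch of that anatomy, here is how a single rule fits together in XML. The element names follow the S3 lifecycle schema, but the rule name, prefix, and values are illustrative:

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>ExampleRule</ID>                     <!-- the name of the rule -->
    <Filter>
      <Prefix>logs/</Prefix>                 <!-- which objects the rule applies to -->
    </Filter>
    <Status>Enabled</Status>                 <!-- Enabled or Disabled -->
    <Transition>                             <!-- one or more actions -->
      <Days>30</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
```

We'll look at each of these components in turn.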

The ID uniquely identifies the lifecycle rule - you can consider this the name of the rule. This is important, as one lifecycle configuration can have up to 1,000 rules, and the ID helps you keep track of which rule does what. 

The filters section defines WHICH objects in your bucket you’d like to take action on. You can choose to apply actions to all objects or a subset of objects in a bucket. If you choose a subset of objects, you can filter based on prefix, object tag, or object size. Or, to be very granular, you can filter based on a combination of these attributes. For example, you can create a filter that transitions all objects with the prefix ProjectBlue/, if they are also tagged with the Classification tag value “Secret”. Notice that when you combine multiple filters, you have to wrap them in the “And” element in XML. 
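The ProjectBlue/ example above could be expressed like this (a sketch; the And wrapper is required whenever more than one filter condition is combined):

```xml
<Filter>
  <And>
    <Prefix>ProjectBlue/</Prefix>
    <Tag>
      <Key>Classification</Key>
      <Value>Secret</Value>
    </Tag>
  </And>
</Filter>
```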

For object size, I can transition or expire objects if they’re greater than a specific size, less than a specific size, or if they’re in a range between two size bounds. For example, I can create a filter where I’m transitioning my objects if they’re larger than 500 bytes but smaller than 10,000 bytes. Keep in mind that the maximum size you can filter on is 5 TB, which is also the maximum size of an S3 object.
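That size-range example looks like this in XML (both values are in bytes; the range itself is illustrative):

```xml
<Filter>
  <And>
    <ObjectSizeGreaterThan>500</ObjectSizeGreaterThan>   <!-- larger than 500 bytes -->
    <ObjectSizeLessThan>10000</ObjectSizeLessThan>       <!-- smaller than 10,000 bytes -->
  </And>
</Filter>
```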

The next component is status. With the status field, you can enable and disable each lifecycle rule. This can be helpful when you’re testing lifecycle configurations out, as you figure out what the best rules for your workload are. Once you change the status to “disabled”, S3 won’t run the actions defined in that rule - essentially stopping the lifecycle action. And when you’re ready to run those actions on your objects again, you can always change the status back to enabled. 

The last, and arguably the most important component is the actions section. This is where you define WHERE you want your objects to move to - are you transitioning them to another storage class or are you deleting them? 

There are six main actions that you can use: Transition, Expiration, NoncurrentVersionTransition, NoncurrentVersionExpiration, ExpiredObjectDeleteMarker, and AbortIncompleteMultipartUpload. Let’s start with the first two: transition actions and expiration actions. Transition actions enable you to move data automatically between S3 storage classes. 

Expiration actions enable you to automate the deletion of your objects in S3. For both transition and expiration actions, you can define when to move or delete these objects based on object age. And age is based on when the object was last created or modified. 

Here’s an example of how both of these actions are structured in XML. In this example, the first action transitions all objects that are prefixed with ProjectBlue/ to the S3 Glacier Flexible Retrieval storage class 365 days after they were created. The second action says that after 2,555 days - which is 7 years - the objects prefixed with ProjectBlue/ should be deleted because they aren’t needed any longer. 
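A sketch of that rule in XML (the rule name is hypothetical; note that the API uses the value GLACIER for the Glacier Flexible Retrieval storage class):

```xml
<Rule>
  <ID>ProjectBlueArchiveAndExpire</ID>      <!-- hypothetical rule name -->
  <Filter>
    <Prefix>ProjectBlue/</Prefix>
  </Filter>
  <Status>Enabled</Status>
  <Transition>
    <Days>365</Days>                        <!-- 1 year after creation -->
    <StorageClass>GLACIER</StorageClass>    <!-- S3 Glacier Flexible Retrieval -->
  </Transition>
  <Expiration>
    <Days>2555</Days>                       <!-- 7 years after creation -->
  </Expiration>
</Rule>
```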

If you have versioning turned on for your bucket, transition actions and expiration actions only work for the current version of your object. If you want to transition noncurrent versions of your object, you must use the NoncurrentVersionTransition action. 

Similarly, if you want to delete noncurrent versions of your object, you must use the NoncurrentVersionExpiration action. For both the NoncurrentVersionTransition and NoncurrentVersionExpiration actions, you can define when to move or delete objects based on two things.  

  • First is the number of days since the object has been noncurrent, which Amazon calculates as the number of days since the object was overwritten or deleted. 
  • And second, is the maximum number of versions to retain. This is helpful when you want to save a few versions to rollback to for data protection, while removing old versions of your object to save on storage spend.

Here’s an example of a lifecycle configuration that uses both a noncurrent version transition and a noncurrent version expiration action. In this rule, all noncurrent versions are moved to S3 Standard - Infrequent Access 30 days after they become noncurrent. In addition, it deletes all noncurrent versions 730 days, or two years, after they become noncurrent, while retaining the 3 latest versions. Keep in mind, you can choose to retain any number of versions between 1 and 100. 
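A sketch of that rule in XML (the rule name is hypothetical; an empty prefix applies the rule to all objects in the bucket):

```xml
<Rule>
  <ID>NoncurrentVersionManagement</ID>                   <!-- hypothetical rule name -->
  <Filter>
    <Prefix></Prefix>                                    <!-- empty prefix: all objects -->
  </Filter>
  <Status>Enabled</Status>
  <NoncurrentVersionTransition>
    <NoncurrentDays>30</NoncurrentDays>                  <!-- 30 days after becoming noncurrent -->
    <StorageClass>STANDARD_IA</StorageClass>             <!-- S3 Standard - Infrequent Access -->
  </NoncurrentVersionTransition>
  <NoncurrentVersionExpiration>
    <NoncurrentDays>730</NoncurrentDays>                 <!-- 2 years after becoming noncurrent -->
    <NewerNoncurrentVersions>3</NewerNoncurrentVersions> <!-- always keep the 3 latest noncurrent versions -->
  </NoncurrentVersionExpiration>
</Rule>
```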

There are additional special actions that you can take as well. One of them is the ExpiredObjectDeleteMarker action. This is helpful in versioned buckets where an object has no remaining noncurrent versions and only a delete marker left. That leftover marker is referred to as an expired object delete marker, and you can use this action to remove it. 
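This action is expressed as a flag inside the Expiration element, as sketched below (it cannot be combined with a Days-based expiration in the same element):

```xml
<Expiration>
  <ExpiredObjectDeleteMarker>true</ExpiredObjectDeleteMarker>  <!-- clean up lone delete markers -->
</Expiration>
```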

And last, there is the AbortIncompleteMultipartUpload action. If you have incomplete multipart uploads that you need to clean up, you should use this action. With this action, you can specify the maximum time, in days, that your multipart uploads can remain in progress. 
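A sketch of this action in XML (the rule name is hypothetical, and the 14-day window is an illustrative value):

```xml
<Rule>
  <ID>CleanUpIncompleteUploads</ID>         <!-- hypothetical rule name -->
  <Filter>
    <Prefix></Prefix>                       <!-- empty prefix: all objects -->
  </Filter>
  <Status>Enabled</Status>
  <AbortIncompleteMultipartUpload>
    <DaysAfterInitiation>14</DaysAfterInitiation>  <!-- abort uploads still in progress after 14 days -->
  </AbortIncompleteMultipartUpload>
</Rule>
```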

For example, you can specify that your multipart uploads can remain in progress for 14 days. If an upload does not complete within 14 days, S3 aborts it and deletes the uploaded parts. In summary, S3 lifecycle configurations are made up of an ID, filters, status, and actions. That’s all for this one!

About the Author

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to Cloud reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.