AWS S3 Lifecycle Policies: Simple Storage Service Management

Understanding the lifecycle policies of AWS S3 makes you a superior candidate and an all-around better person.

One of the most popular products from Amazon Web Services (AWS) is Simple Storage Service, popularly abbreviated as S3. This service provides durable, highly available, and inexpensive object storage for any kind of object, of virtually any size. Behind S3's durability and high availability (HA) are great engineering practices, redundancy, and versioning, which make it very appealing as a web-scale storage service.
Everyone knows about Amazon S3, so a general introduction wouldn't serve us well. Rather, we are going to discuss how objects are stored and how their lifecycles are maintained. I won't dive into AWS S3 lifecycle security in this post either. Security is a crucial part of the developer's responsibility and an important topic in its own right, so I suggest you read Stuart Scott's post from this winter, S3 Lifecycle Policies, Versioning & Encryption: AWS Security.

Storing and maintaining object lifecycles in AWS S3

We have buckets in S3 and we store objects in them.

  • How are these objects managed?
  • How are disaster recovery (DR) and HA achieved?
  • How do objects underneath the storage layer behave when a PUT or DELETE operation is performed?

Let’s talk about S3 Objects and their lifecycle policies.
Amazon S3 achieves high availability by replicating data across multiple servers within Amazon's data centers. If a PUT request is successful, your data is safely stored; however, information about the change must still replicate across Amazon S3. S3 can also keep multiple versions of an object. Versioning is enabled or disabled at the bucket level, not per object, and it is optional. If you enable versioning, you can protect your objects from accidental deletion or overwriting because you have the option of retrieving older versions of them.
Object versioning can be used in combination with Object Lifecycle Management, which allows you the option of customizing your data retention requirements while controlling your storage costs.
When you PUT an object into a versioning-enabled bucket, the existing version is not overwritten. Rather, when a new version of a file or object is PUT into a bucket that already contains an object with the same name, the original object remains in the bucket, Amazon S3 generates a new version ID, and the newer version is added to the bucket as the current version. This is handled automatically by S3, so as a user your only concern is enabling or disabling versioning on the bucket.
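This overwrite behavior can be pictured with a tiny in-memory model. This is purely illustrative Python, not an AWS API; the key name is the hypothetical one used later in this post:

```python
import itertools

_version_ids = itertools.count(1)


class VersionedBucket:
    """Toy model of a versioning-enabled bucket: PUT never overwrites."""

    def __init__(self):
        # key -> list of (version_id, body), oldest first
        self.versions = {}

    def put(self, key, body):
        # Every PUT gets a fresh version ID; prior versions are retained.
        vid = "v{}".format(next(_version_ids))
        self.versions.setdefault(key, []).append((vid, body))
        return vid

    def get(self, key):
        # A plain GET returns the current (most recently PUT) version.
        return self.versions[key][-1][1]


bucket = VersionedBucket()
bucket.put("cloudacademyblogimg/S3_thumbnail.gif", b"original image")
bucket.put("cloudacademyblogimg/S3_thumbnail.gif", b"updated image")

# The current version is the new object, but the original is still stored.
assert bucket.get("cloudacademyblogimg/S3_thumbnail.gif") == b"updated image"
assert len(bucket.versions["cloudacademyblogimg/S3_thumbnail.gif"]) == 2
```

The point of the model is simply that a second PUT appends rather than replaces, which is exactly why lifecycle rules for noncurrent versions (covered below) matter for storage costs.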
Amazon S3 also provides resources for managing object lifecycles according to user needs. For example, if you want to move less frequently accessed data to Glacier, or set a rule to delete files (e.g., old application log files stored in a bucket) after a specified interval of time, you can easily automate the process. AWS allows up to 1,000 lifecycle rules per bucket for controlling your objects in S3.

Configuring Amazon S3 Lifecycle:

Amazon S3 Lifecycle configurations are provided by means of XML. A typical configuration looks like this:

<LifecycleConfiguration>
  <Rule>
    <ID>cloudacademy-image-rule</ID>
    <Prefix>cloudacademyblogimg/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>90</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>

Here we have defined an S3 lifecycle configuration for objects in a bucket. We have images stored in the folder named cloudacademyblogimg and we want to move them to GLACIER storage after 90 days. Glacier is another useful service from Amazon, offering inexpensive, highly durable storage for archiving huge volumes of data. After a year of storage, the objects expire and are deleted. Let's look at the various elements of the configuration:

  • LifecycleConfiguration – The root element of the configuration; it holds the set of rules that apply to objects in the bucket by key.
  • ID – The ID element uniquely identifies a rule. A lifecycle configuration can have up to 1,000 rules.
  • Prefix – The Prefix selects objects by key. If in an S3 bucket named cloudacademyblog we have a folder called cloudacademyblogimg, and an image named S3_thumbnail.gif inside that folder, then the object key is cloudacademyblogimg/S3_thumbnail.gif. If you do not specify a Prefix, the rule applies to all objects in the bucket.
  • Status – Enabled or Disabled.
  • Transition – Transition is one of the lifecycle actions of S3. The transition action specifies that you want to move objects from one storage class to another. There are three storage classes in S3: STANDARD, STANDARD_IA (IA denotes Infrequent Access), and GLACIER. Here we have specified GLACIER, where files will be moved after 90 days. You can either specify a number of days or a specific date (but you cannot use both).
  • Expiration – The Expiration action specifies when objects expire. In this case, we have specified a period of 365 days, or one year. There are several considerations for expiration rules. They might seem a bit confusing at first, so please be patient and read along; they will eventually make sense, and you can refer back to them as needed. A good understanding of this concept will take you a long way when working with S3.
    • In a non-versioned bucket, the Expiration action results in Amazon S3 permanently removing the object.
    • The expiration action applies only to the current version. In a versioned bucket, S3 will not take any action if there are one or more object versions and the delete marker is the current version.
    • If the current object version is the only object version and it is also a delete marker, S3 will remove the expired object delete marker.
    • If the current object version is not a delete marker, Amazon S3 adds a delete marker with a unique version ID, making the current version noncurrent and the delete marker the current version.
    • For noncurrent object versions, there is a NoncurrentVersionTransition action element, which specifies how long (from the time the objects became noncurrent) you want the objects to remain in the current storage class before Amazon S3 transitions them to the specified storage class.
    • There is also a NoncurrentVersionExpiration action for noncurrent object versions, which specifies how long (from the time the objects became noncurrent) you want to retain noncurrent object versions before Amazon S3 permanently removes them. In this case, the deleted object cannot be recovered.
    • Starting from March 16th, 2016, Amazon S3 introduced “incomplete multipart upload expiration policy”.
      • If a multi-part upload is incomplete, the partial upload does not appear when users list their objects by default. However, this does incur storage charges.
      • Previously, you needed to manually cancel the multi-part upload to remove partial uploads.
      • Now, users can set a lifecycle policy to automatically expire and remove incomplete multi-part uploads after a predefined number of days.
      • The policy applies to everything in a bucket, including existing partial uploads. The rule looks like this:
<LifecycleConfiguration>
  <Rule>
    <ID>multipart-upload-rule</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <AbortIncompleteMultipartUpload>
      <DaysAfterInitiation>3</DaysAfterInitiation>
    </AbortIncompleteMultipartUpload>
  </Rule>
</LifecycleConfiguration>
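The two noncurrent-version actions described above follow the same rule format. A sketch (the element names follow the S3 lifecycle schema; the prefix and day counts are illustrative values, not part of the examples elsewhere in this post):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>noncurrent-version-rule</ID>
    <Prefix>cloudacademyblogimg/</Prefix>
    <Status>Enabled</Status>
    <NoncurrentVersionTransition>
      <NoncurrentDays>30</NoncurrentDays>
      <StorageClass>GLACIER</StorageClass>
    </NoncurrentVersionTransition>
    <NoncurrentVersionExpiration>
      <NoncurrentDays>365</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>
```

Here noncurrent versions are archived to GLACIER 30 days after being superseded and permanently removed a year after that point.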
  • Always remember that the “last modified date” of an object is treated as the starting date for the lifecycle of that object in S3. If you replace the object, the new date is considered the creation date.
  • If an object becomes noncurrent because it was overwritten or deleted, S3 measures the time for noncurrent-version actions from the moment it transitioned to noncurrent.
  • You can specify multiple rules for different lifecycle actions on objects. Take a look at these rules:
    • One rule moves the images in cloudacademyblogimg to GLACIER after 90 days and removes them after 365 days.
    • The other rule transitions the logs in cloudacademylogs to the Standard Infrequent Access storage class (STANDARD_IA) after 30 days and deletes them after 60 days.
<LifecycleConfiguration>
    <Rule>
        <ID>CAImgRule</ID>
        <Prefix>cloudacademyblogimg/</Prefix>
        <Status>Enabled</Status>
        <Transition>
           <Days>90</Days>
           <StorageClass>GLACIER</StorageClass>
        </Transition>
        <Expiration>
             <Days>365</Days>
        </Expiration>
    </Rule>
    <Rule>
        <ID>CALogRule</ID>
        <Prefix>cloudacademylogs/</Prefix>
        <Status>Enabled</Status>
        <Transition>
           <Days>30</Days>
           <StorageClass>STANDARD_IA</StorageClass>
        </Transition>
        <Expiration>
             <Days>60</Days>
        </Expiration>
        </Expiration>
    </Rule>
</LifecycleConfiguration>
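If you manage buckets from Python, the same configurations can be expressed as the dictionary shape that boto3's put_bucket_lifecycle_configuration accepts. A sketch of the image rule (the bucket and prefix names are the hypothetical ones used in this post, and the call itself is commented out because it needs AWS credentials and a real bucket):

```python
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "cloudacademy-image-rule",
            "Prefix": "cloudacademyblogimg/",
            "Status": "Enabled",
            # Move objects to GLACIER 90 days after creation...
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            # ...and delete them after a year.
            "Expiration": {"Days": 365},
        }
    ]
}

# Applying it (requires AWS credentials and the boto3 package):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="cloudacademyblog",
#     LifecycleConfiguration=lifecycle_configuration,
# )

# S3 rejects a rule whose expiration does not come after its transition.
rule = lifecycle_configuration["Rules"][0]
assert rule["Transitions"][0]["Days"] < rule["Expiration"]["Days"]
```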
  • The lifecycle rule is applied through the AWS CLI as follows (note that the s3api commands expect the configuration as JSON rather than XML, read here from a local file):
aws s3api put-bucket-lifecycle --bucket bucketname --lifecycle-configuration file://lifecycle.json
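A minimal lifecycle.json corresponding to the first example in this post might look like the following (a sketch of the JSON shape the s3api command accepts):

```json
{
  "Rules": [
    {
      "ID": "cloudacademy-image-rule",
      "Prefix": "cloudacademyblogimg/",
      "Status": "Enabled",
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 365}
    }
  ]
}
```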

Applying Lifecycle rules in AWS Management Console:

  • Log in to S3 in the AWS Management Console.
  • Navigate to the bucket to which you want to apply Lifecycle rules.
  • Click on the Lifecycle link on the right-hand side of the Properties tab, and click on “Add rule”.
(Add Rule)
  • You can either apply the rule to the whole bucket or any folder (prefix). We selected cpimg/ to apply Lifecycle rules in this example. Click on “Configure Rule”.
(Lifecycle Rule Naming)
  • We have provided 30 days for Transition and 365 days for Expiration of the objects. We also specified 2 days for cleaning up incomplete multipart uploads.
(Applying Lifecycle Expiration & Transition )
  • Then review, create, and activate the rule.
  • If the rule does not contain any errors, it is displayed in the Lifecycle pane.

Conclusion:

Mastering AWS S3 policies and exceptions requires considerable energy. Cloud Academy can help. They offer a suite of products for developers learning AWS S3.
There are video courses, hands-on learning paths, and quizzes. Each component supports a professional approach to practical learning.
Video courses are created and narrated by working professional AWS developers who understand time constraints and deliver the information learners need to pass exams and, more importantly, excel in a critical IT role.
People learn differently. Some students love quizzes because they help push information into a higher level of mental storage. Others use quizzes for testing themselves and determining areas of strength and weakness for a personal approach. Cloud Academy Quizzes offer dual modes for maximum learning flexibility:
Most technical people agree that project-based learning resonates most powerfully with them. Cloud Academy offers hands-on labs in an actual AWS environment. Students may experiment in a live AWS world without leaving the Cloud Academy site or signing up for services with AWS. This builds confidence and reinforces knowledge.
When you review this post, you’ll see we used the AWS Management Console to create and activate a rule. In a professional setting, a developer will likely require far more complex rules.  This is more an opportunity than a challenge because there are tremendously good learning resources around AWS S3. Treat yourself to a free 7-day trial subscription to Cloud Academy where the above resources are all available. Training, personal determination, and AWS S3 documentation present a winning combination for career advancement.

Written by

Cloud Computing and Big Data professional with 10 years of experience in pre-sales, architecture, design, build, and troubleshooting with best engineering practices. Specialities: Cloud Computing – AWS, DevOps (Chef), Hadoop Ecosystem, Storm & Kafka, ELK Stack, NoSQL, Java, Spring, Hibernate, Web Services
