
Creating a Lifecycle Configuration Demo

Overview

Difficulty: Intermediate
Duration: 4h 13m
Students: 41
Ratings: 5/5
Description

This section of the AWS Certified Solutions Architect - Professional learning path introduces you to the core storage concepts and services relevant to the SAP-C02 exam. We start with an introduction to AWS storage services, explore the options available, and learn how to select and apply AWS storage services to meet specific requirements.


Learning Objectives

  • Obtain an in-depth understanding of Amazon S3 (Simple Storage Service)
  • Learn how to improve your security posture in S3
  • Get both a theoretical and practical understanding of EFS
  • Learn how to create an EFS file system, manage EFS security, and import data into EFS
  • Learn about EC2 storage and Elastic Block Store
  • Learn about the different performance factors associated with AWS storage services
Transcript

In this video, I’ll create an S3 Lifecycle configuration using both the AWS Console and the AWS CLI. This configuration will transition a subset of objects in an S3 bucket.

In this S3 bucket, I have a combination of cat photos and beach photos. I currently access my cat photos all of the time. But for my beach photos, I find I only access them frequently during the first month. After that period, I still want to keep them in storage for a year just in case anyone asks me about my vacation. However, no one ever asks, so I might as well store them in a low-cost storage class after those first 30 days. And after a year, I’ll take a new vacation, so I can just delete these old beach photos. Using this information, I can create a lifecycle configuration based on my known access patterns.

I’ll begin in the S3 console, and I’ll click on my bucket named lifecycle-configuration-demo-console. Inside my bucket, you can see my objects: a wide range of cat photos, and then I have a folder called beach. If I click on this folder, I can then see my beach photos.

To create a lifecycle configuration to move my beach photos, I can click on the Management tab, and under Lifecycle rules, I can click “Create lifecycle rule”.

Here, I will enter the name; in this case, I will call it BeachPhotosRule. Then I can choose to either limit it based on a prefix or apply it to all objects in the bucket. Since I want to leave my cat photos untouched, I will limit it based on a filter, and under prefix, I will type in my folder name, beach/.

Notice how I can choose to filter based on object tag, or minimum and maximum object size as well. However, for this demo, I’ll only be filtering by prefix. Moving down to the actions section, I can choose where my objects go. 

In this case, I don’t have versioning turned on with this bucket and I’m not using multipart uploads, so I only need to select the option to transition current versions of objects and the option to expire current versions of objects. 

To transition my beach photos, I can choose which storage class I’d like to move them to after 30 days. Since I’ll rarely be accessing my photos, an archival storage class is the best option. For this demo, I’ll choose Glacier Flexible Retrieval. And I’ll specify that this transition should happen 30 days after the object is created.  

It’s giving me a warning saying I will incur a per-object fee to transition my objects to Glacier, as well as an additional object metadata storage charge, since the S3 service adds 32 KB of storage for object metadata that’s used when restoring objects. Even though it will add on additional cost, this is fine with me, so I will check the box and move on.

Now I get to choose when to delete my objects. S3 Glacier Flexible Retrieval has a 90-day minimum storage duration, so as long as my expiration period is longer than 90 days, I won’t incur an early deletion fee. Since I take my vacations yearly, I will choose 365 days to delete my photos.

And that is all I need to set up. If I weren’t talking through all the options, this would take no time at all. And the best part is that it generates a timeline I can review to ensure I made all the right choices before clicking “Create rule”.

Now, for those of you who don’t enjoy clicking around the AWS console: creating this using the CLI is just as easy. One thing about configuring lifecycle configurations using the CLI is that you have to provide a JSON template file defining your configuration. If you’re more comfortable or familiar with XML, you can always configure it in XML and use any one of the free XML to JSON converter websites to do the conversion for you. 

However, I have already created a JSON file, and I’ll run through it quickly. I have the ID, or name of the rule which is BeachPhotosRule. Then I have the filter, which uses the prefix of my folder beach/. The status is enabled. I have a transition to the S3 Glacier Flexible Retrieval storage class after 30 days. And I’m deleting my beach photos after 365 days. 
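The transcript doesn’t show the file itself, but a JSON document matching that description, in the format expected by the put-bucket-lifecycle-configuration API, would look something like this (note that the API identifier for S3 Glacier Flexible Retrieval is GLACIER):

```json
{
  "Rules": [
    {
      "ID": "BeachPhotosRule",
      "Filter": {
        "Prefix": "beach/"
      },
      "Status": "Enabled",
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "GLACIER"
        }
      ],
      "Expiration": {
        "Days": 365
      }
    }
  ]
}
```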

As you can see, it’s the exact same lifecycle configuration. Going to my terminal, I can then use the aws s3api put-bucket-lifecycle-configuration command. I dictate the file path for my lifecycle configuration, beginning with file:// and then the file name, and then specify which bucket to apply this to using --bucket and my bucket name, which in this case is lifecycle-configuration-demo-cli.
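Putting those pieces together, the full command would look something like this (the file name lifecycle.json is an assumption, since the transcript never names the file):

```shell
# Apply the lifecycle configuration in lifecycle.json to the CLI demo bucket
aws s3api put-bucket-lifecycle-configuration \
  --lifecycle-configuration file://lifecycle.json \
  --bucket lifecycle-configuration-demo-cli
```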

And it looks like it was successful. To verify, I can go back to the S3 console, find my demo-cli bucket, click on the Management tab, click on the lifecycle rule, and verify that the details are correct. And that’s it for this one - see you next time!
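If you’d rather verify from the terminal instead of the console, you could read the configuration back with the corresponding get command, which returns the rules currently applied to the bucket:

```shell
# Read back the lifecycle rules applied to the bucket
aws s3api get-bucket-lifecycle-configuration \
  --bucket lifecycle-configuration-demo-cli
```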

About the Author
Students: 26715
Courses: 21
Learning Paths: 11

Danny has over 20 years of IT experience as a software developer, cloud engineer, and technical trainer. After attending a conference on cloud computing in 2009, he knew he wanted to build his career around what was still a very new, emerging technology at the time — and share this transformational knowledge with others. He has spoken to IT professional audiences at local, regional, and national user groups and conferences. He has delivered in-person classroom and virtual training, interactive webinars, and authored video training courses covering many different technologies, including Amazon Web Services. He currently has six active AWS certifications, including certifications at the Professional and Specialty level.