Amazon Elastic File System (EFS)

Contents

Course Introduction
  1. Introduction (2m 18s)

Cost Management
  3. Credits (1m 52s)
  5. Reports (1m 30s)
  7. Budgets (6m 51s)

Improve Planning and Cost Control with AWS Budgets

AWS Cost Management: Tagging
  13. Tagging (6m 51s)

Understanding & Optimizing Storage Costs with AWS Storage Services
  16. Amazon S3 and Glacier (16m 56s)
  AWS Backup (3m 50s)

Using Instance Scheduler to Optimize Resource Cost

The course is part of this learning path

Amazon Elastic File System (EFS)
Difficulty: Intermediate
Duration: 2h 33m
Students: 152
Rating: 5/5
Description

This section of the AWS Certified Solutions Architect - Professional learning path introduces you to cost management concepts and services relevant to the SAP-C02 exam. By the end of this section, you will know how to select and apply AWS services to optimize cost in scenarios relevant to the exam.


Learning Objectives

  • Learn how to improve planning and cost control with AWS Budgets
  • Understand how to optimize storage costs
  • Discover AWS services that allow you to monitor for underutilized resources
  • Learn how the AWS Instance Scheduler may be used to optimize resource costs
Transcript

Hello and welcome to this lecture covering the Elastic File System. EFS is a fully managed, highly available, and durable service that allows you to create shared file systems that can easily scale to petabytes in size with low-latency access. EFS has been designed to maintain a high level of throughput in addition to low-latency access, and these performance characteristics make EFS a desirable storage solution for a wide variety of workloads and use cases; it can meet the demands of tens, hundreds, or even thousands of EC2 instances concurrently.

Being a managed service, there is no need for you to provision any file servers to manage the storage or to maintain those servers, which makes it a very simple option for providing file-level storage within your environment. It uses standard operating system APIs, so any application designed to work with them will work with EFS.

It supports NFS versions 4.0 and 4.1 and uses standard file system semantics such as strong consistency and file locking. Data is replicated across Availability Zones within a single region, making EFS a highly reliable storage service.
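
As a rough illustration of just how little provisioning is involved, the boto3 sketch below creates a file system and a mount target in each Availability Zone. It is not part of the course, and the region, subnet IDs, and security group ID are placeholders you would replace with your own.

```python
import boto3

efs = boto3.client("efs", region_name="eu-west-2")  # region chosen for illustration

# Create the file system itself -- no file servers to provision or maintain.
fs = efs.create_file_system(
    CreationToken="efs-demo",          # idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "shared-app-storage"}],
)
fs_id = fs["FileSystemId"]

# One mount target per Availability Zone lets instances in every AZ reach the
# same regional file system over NFS. (Wait until the file system's
# LifeCycleState is 'available' before creating mount targets.)
for subnet_id in ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"]:  # placeholder subnets
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],  # placeholder SG allowing NFS (TCP 2049)
    )
```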

Because the file system can be accessed by multiple instances, it is a very good storage option for applications that scale across multiple instances and require parallel access to data. The EFS file system is also regional, so application deployments that span multiple Availability Zones can all access the same file system, providing high availability at your application's storage layer.

Much like Amazon S3 and Glacier, EFS offers different storage classes. There are not as many; however, they can help you save a lot of money if used correctly. There are currently two storage classes: EFS Standard, the default storage class, and EFS Infrequent Access (EFS-IA).

As you might expect, the EFS-IA storage class is designed for files that are accessed less frequently than those recommended for the Standard storage class. As a result, a considerable cost saving of up to 92% can be achieved.

The trade-off for the cheaper storage is an increased first-byte latency when both reading and writing data in this class compared with Standard. The costs are also managed slightly differently. With EFS-IA, you are charged for the amount of storage space used, which is cheaper than Standard; however, you are also charged for each read and write you make to the storage class. This helps ensure that you only use this storage class for data that is not accessed very frequently, for example, data that might be required for auditing purposes or historical analysis.

With the Standard storage class, you are charged only for the amount of storage space used per month. Both storage classes are available in all regions where EFS is supported, and importantly, they both provide the same level of availability and durability.

As you can see from this table, taken from the London region, EFS-IA is considerably cheaper; the table also shows the additional charges for requests.
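
To make the two pricing models easier to compare, here is a small back-of-the-envelope sketch in Python. The per-GB and per-access rates are placeholder figures for illustration only, not actual AWS prices; substitute the current rates for your region.

```python
# Illustrative EFS cost model: Standard charges for storage only, while
# EFS-IA charges less for storage but adds a fee for data read/written.
# All rates below are PLACEHOLDER values, not real AWS pricing.
STANDARD_PER_GB_MONTH = 0.30   # placeholder $/GB-month
IA_PER_GB_MONTH = 0.025        # placeholder $/GB-month
IA_ACCESS_PER_GB = 0.01        # placeholder $/GB read or written

def monthly_cost(gb_stored: float, gb_accessed: float, use_ia: bool) -> float:
    """Estimate one month's cost for a given amount of stored and accessed data."""
    if use_ia:
        return gb_stored * IA_PER_GB_MONTH + gb_accessed * IA_ACCESS_PER_GB
    return gb_stored * STANDARD_PER_GB_MONTH

# 500 GB of rarely touched audit data is far cheaper in IA, but heavy
# re-reading of the same data would erode the saving via access charges.
print(monthly_cost(500, 10, use_ia=True))
print(monthly_cost(500, 10, use_ia=False))
```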

EFS also offers lifecycle management. When enabled, EFS will automatically move data between the Standard storage class and the EFS-IA storage class. This happens when a file has not been read or written to for a set number of days, which is configurable; your options for this period include 14, 30, 60, or 90 days.

Depending on your selection, EFS will move the data to the IA storage class to save on cost once that period has been met. However, as soon as that same file is accessed again, the timer is reset and it is moved back to the Standard storage class. If it is then not accessed for a further period, it will be moved back to IA; every time a file is accessed, its lifecycle management timer is reset. The only exceptions are files below 128 KB in size and the metadata of your files, which all remain in the Standard storage class.
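
A lifecycle policy like this can also be set through the API. The boto3 sketch below is my own illustration rather than part of the course: it transitions files to EFS-IA after 30 days without access and back to Standard on their first access, using a placeholder file system ID.

```python
import boto3

efs = boto3.client("efs")

# Transition files to EFS-IA after 30 days without access, and move them
# back to the Standard class as soon as they are accessed again.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # placeholder file system ID
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_30_DAYS"},
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)
```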

Throughput modes

In addition to being charged for your storage class, EFS also offers two different throughput modes, which come at different costs.

The two throughput modes on offer are Bursting Throughput and Provisioned Throughput.

Data throughput patterns on file systems generally go through periods of relatively low activity with occasional spikes in usage, and EFS provides burst throughput capacity to help manage these occasional peaks.

With the Bursting Throughput mode, which is the default, the amount of throughput scales as your file system grows: the more you store, the more throughput is available to you. The Bursting Throughput mode does not incur any additional charges, and a baseline rate of 50 KB/s of throughput per GB of storage is included in the price you pay for your EFS Standard storage.

Provisioned Throughput allows you to go beyond the throughput allowance that Bursting mode grants based on your file system size. If your file system is relatively small but its use case requires a high throughput rate, the default Bursting Throughput mode may not be able to process your requests quickly enough; in this instance, you would need to use Provisioned Throughput. However, this option does incur additional charges: you pay for any throughput you provision above the capacity that the standard Bursting Throughput mode would have included.
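
As a hedged sketch of how this might look via the API (not taken from the course), the code below estimates the bursting baseline from the current file system size and then switches to Provisioned Throughput; the file system ID and the 100 MiB/s figure are placeholders.

```python
import boto3

efs = boto3.client("efs")
FS_ID = "fs-0123456789abcdef0"  # placeholder file system ID

# Bursting mode's baseline scales with stored data: roughly 50 KB/s per GB,
# so a small file system gets only a small baseline throughput.
fs = efs.describe_file_systems(FileSystemId=FS_ID)["FileSystems"][0]
size_gb = fs["SizeInBytes"]["Value"] / 1024**3
print(f"Approximate bursting baseline: {size_gb * 50:.0f} KB/s")

# If that is not enough for the workload, switch to Provisioned Throughput
# and pay for the capacity above what bursting would have included.
efs.update_file_system(
    FileSystemId=FS_ID,
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=100,  # placeholder target throughput
)
```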

In a way, Amazon EFS offers simpler cost options than Amazon S3, largely because there are fewer storage classes and no charges for data retrieval. In the interest of cost optimization, it is generally recommended that you implement lifecycle policies to reduce your file system costs by moving data between the Standard and Infrequent Access storage classes, since the majority of files are accessed rarely, with only a small percentage accessed regularly. From a data transfer perspective, it is recommended that you use AWS DataSync, a managed service that helps you transfer data into and out of EFS and is charged at a flat per-GB rate. For more information on Amazon Elastic File System, you can take our existing course here.
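
For completeness, a hedged boto3 sketch of what a DataSync transfer into EFS might look like is shown below. Every ARN value is a placeholder, and the exact setup (locations, IAM roles, network configuration) depends on your environment.

```python
import boto3

datasync = boto3.client("datasync")

# Source: an S3 bucket; destination: the EFS file system (placeholder ARNs).
src = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::my-source-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-s3-role"},
)
dst = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:eu-west-2:123456789012:file-system/fs-0123456789abcdef0",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:eu-west-2:123456789012:subnet/subnet-aaa111",
        "SecurityGroupArns": [
            "arn:aws:ec2:eu-west-2:123456789012:security-group/sg-0123456789abcdef0"
        ],
    },
)

# A task ties source and destination together; starting it runs the transfer,
# billed at DataSync's flat per-GB rate.
task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="s3-to-efs-transfer",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```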

 


About the Author
Students: 62685
Courses: 28
Learning Paths: 25

Danny has over 20 years of IT experience as a software developer, cloud engineer, and technical trainer. After attending a conference on cloud computing in 2009, he knew he wanted to build his career around what was still a very new, emerging technology at the time — and share this transformational knowledge with others. He has spoken to IT professional audiences at local, regional, and national user groups and conferences. He has delivered in-person classroom and virtual training, interactive webinars, and authored video training courses covering many different technologies, including Amazon Web Services. He currently has six active AWS certifications, including certifications at the Professional and Specialty level.