Database Storage and I/O Pricing

Description

This section of the AWS Certified Solutions Architect - Professional learning path introduces you to the AWS database services relevant to the SAP-C02 exam. It then explores the service options available and explains how to select and apply AWS database services to meet specific design scenarios relevant to the AWS Certified Solutions Architect - Professional exam.


Learning Objectives

  • Understand the various database services that can be used when building cloud solutions on AWS
  • Learn how to build databases using Amazon RDS, DynamoDB, Redshift, DocumentDB, Keyspaces, and QLDB
  • Learn how to create ElastiCache and Neptune clusters
  • Understand which AWS database service to choose based on your requirements
  • Discover how to use automation to deploy databases in AWS
  • Learn about data lakes and how to build a data lake in AWS
Transcript

Hello and welcome to this lecture, where I shall be looking at how pricing is calculated for both primary storage and I/O.

We have looked at the pricing options for your database compute instances themselves; I now want to focus on the storage aspects of your databases and how these charges are calculated across the different DB engines.

MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server all use Elastic Block Store (EBS) volumes for both data and log storage. Aurora, on the other hand, uses a shared cluster storage architecture and does not use EBS.

So let me first look at the pricing for the majority of the DB engines using EBS.

When configuring your storage requirements, RDS supports three different storage types:

  • General Purpose SSD storage
  • Provisioned IOPS (SSD) storage
  • Magnetic storage

Let’s take a closer look at each, starting with General Purpose SSD storage:

  • General Purpose SSD storage: This is a good option for a broad range of use cases; it provides single-digit millisecond latencies and offers a cost-effective storage solution. The minimum SSD storage for your primary data set is 20 GiB, with a maximum of 64 TiB for MySQL, PostgreSQL, MariaDB, and Oracle; the maximum for SQL Server is 16 TiB. When using SSD storage, you are charged for the amount of storage provisioned and not for the number of I/Os processed.
  • Provisioned IOPS (SSD) storage: This option is great for workloads that operate at a very high I/O rate. You can provision a minimum of 1,000 IOPS and a maximum of 80,000 for MySQL, PostgreSQL, MariaDB, and Oracle, while the maximum for SQL Server is 40,000. In addition to being able to provision the IOPS needed for your workload, the minimum storage for your primary data set is 100 GiB, with a maximum of 64 TiB for MySQL, PostgreSQL, MariaDB, and Oracle, and 16 TiB for SQL Server. The charges for this option are based upon the amount of storage provisioned in addition to the IOPS throughput selected; again, you are not charged for the total number of I/Os processed. (These limits are captured in the sketch after this list.)
  • Magnetic storage: Finally, magnetic storage is supported simply to provide backward compatibility, and so AWS recommends that you select General Purpose SSD instead.
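To make these limits easier to work with, here is a minimal sketch in Python that simply encodes the figures quoted above so a proposed configuration can be sanity-checked. The dictionary layout and the check_piops helper are my own illustrative assumptions, not part of any AWS SDK.

```python
# A minimal sketch, assuming the limits quoted in this lecture; the
# dictionaries and helper below are illustrative, not an AWS API.

# (min GiB, max GiB) for General Purpose SSD storage, per DB engine
GP_SSD_GIB = {
    "mysql":      (20, 64 * 1024),
    "postgresql": (20, 64 * 1024),
    "mariadb":    (20, 64 * 1024),
    "oracle":     (20, 64 * 1024),
    "sqlserver":  (20, 16 * 1024),
}

# (min GiB, max GiB, min IOPS, max IOPS) for Provisioned IOPS (SSD) storage
PIOPS = {
    "mysql":      (100, 64 * 1024, 1_000, 80_000),
    "postgresql": (100, 64 * 1024, 1_000, 80_000),
    "mariadb":    (100, 64 * 1024, 1_000, 80_000),
    "oracle":     (100, 64 * 1024, 1_000, 80_000),
    "sqlserver":  (100, 16 * 1024, 1_000, 40_000),
}

def check_piops(engine: str, size_gib: int, iops: int) -> bool:
    """Return True if a size/IOPS pair falls within the quoted limits."""
    min_gib, max_gib, min_iops, max_iops = PIOPS[engine]
    return min_gib <= size_gib <= max_gib and min_iops <= iops <= max_iops

# The console example later in this lecture: 100 GiB with 1,000 IOPS on MySQL
print(check_piops("mysql", 100, 1_000))       # True
print(check_piops("sqlserver", 200, 50_000))  # False: above the 40,000 cap
```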

The following screenshot shows the configuration screen when determining your storage requirements for MySQL. In this example, Provisioned IOPS (SSD) has been selected as the storage type, with a minimum of 100 GiB of primary storage and 1,000 provisioned IOPS as throughput.

Let’s now take a look at the pricing structure for the storage.

The costs for your database storage have two different price points depending on whether it has been configured as a Single-AZ or Multi-AZ deployment. Much like the instance pricing, the Multi-AZ price is typically twice that of a Single-AZ deployment.

Each of the storage types, General Purpose SSD, Provisioned IOPS (SSD), and Magnetic, comes at a different price. Each type of storage used is priced per GB-Month, but what exactly is the GB-Month metric?

Essentially, this defines how many GBs of storage have been provisioned and for how long. Let me give an example. Assume we are working with a 30-day month and we receive a bill for 10 GB-Months of storage; this could be the result of any of the following scenarios:

  • You have 300 GB of storage running for just 24 hours
  • You have 10 GB of storage running for 720 hours
  • You have 40 GB of storage running for 180 hours

These are calculated using the following formula: 

GB-Months = total provisioned storage / (720 / number of hours running)

where 720 is the number of hours in a 30-day month.

So, looking at the examples above, the calculations would be as follows:

  • 300 / (720/24) = 10 GB-Months
  • 10 / (720/720) = 10 GB-Months
  • 40 / (720/180) = 10 GB-Months
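As a quick sanity check, here is a short Python sketch of the same arithmetic; the gb_months helper is my own naming, and only the formula itself comes from the lecture.

```python
# A short sketch of the GB-Month arithmetic above; plain Python, no AWS SDK.

HOURS_PER_MONTH = 720  # hours in a 30-day billing month

def gb_months(provisioned_gb: float, hours_running: float) -> float:
    """GB-Months = provisioned storage * (hours running / 720)."""
    return provisioned_gb * hours_running / HOURS_PER_MONTH

# The three scenarios from the lecture all bill as 10 GB-Months:
for gb, hours in [(300, 24), (10, 720), (40, 180)]:
    print(f"{gb} GB for {hours} hours -> {gb_months(gb, hours):.0f} GB-Months")
```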

The following screenshots show the costs for each of the storage types for the MySQL DB engine in the London region under a Single-AZ deployment:

I now want to revisit the storage for the Aurora DB engine. As I explained earlier, Aurora uses a shared cluster storage architecture, which is managed by the service itself. When configuring your Aurora database in the console, the option to configure and select storage options as we saw previously does not even exist. Your storage will scale automatically as your database grows. As a result, your Aurora database storage is priced differently.

Again, the pricing metric used is GB-Months, in addition to the actual number of I/Os processed, which are billed per million requests. The great thing about using Aurora is that you are only billed for the storage used and the I/Os processed, whereas with MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server you are billed for the storage provisioned regardless of whether you use all of it or just a part of it.
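To illustrate the difference between the two billing models, here is a hedged Python sketch; the rates used are made-up placeholders rather than real AWS prices, and only the shape of the two calculations follows the lecture.

```python
# A hedged sketch of the two billing models above. The rates below are
# made-up placeholders, not real AWS prices; only the formulas follow
# the lecture.

def ebs_storage_cost(provisioned_gb: float, price_per_gb_month: float) -> float:
    """EBS-backed engines: billed on what you provision, used or not."""
    return provisioned_gb * price_per_gb_month

def aurora_storage_cost(used_gb: float, io_requests: int,
                        price_per_gb_month: float,
                        price_per_million_io: float) -> float:
    """Aurora: billed on storage actually used plus I/Os processed."""
    return (used_gb * price_per_gb_month
            + (io_requests / 1_000_000) * price_per_million_io)

# Example: 500 GB provisioned but only 120 GB used, with placeholder rates
print(f"EBS-backed: ${ebs_storage_cost(500, 0.13):.2f}")  # pays for all 500 GB
print(f"Aurora:     ${aurora_storage_cost(120, 40_000_000, 0.11, 0.22):.2f}")
```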

As an example, the following shows the costs for both the storage used (in GB-Months) and the I/Os processed (per million requests) within the London region.


About the Author

Danny has over 20 years of IT experience as a software developer, cloud engineer, and technical trainer. After attending a conference on cloud computing in 2009, he knew he wanted to build his career around what was still a very new, emerging technology at the time — and share this transformational knowledge with others. He has spoken to IT professional audiences at local, regional, and national user groups and conferences. He has delivered in-person classroom and virtual training, interactive webinars, and authored video training courses covering many different technologies, including Amazon Web Services. He currently has six active AWS certifications, including certifications at the Professional and Specialty level.