Amazon RDS Costs
This section of the Solution Architect Associate learning path introduces you to the AWS database services relevant to the SAA-C03 exam. You will then explore the service options available and learn how to select and apply AWS database services to meet specific design scenarios relevant to the exam.
- Understand the various database services that can be used when building cloud solutions on AWS
- Learn how to build databases using Amazon RDS, DynamoDB, Redshift, DocumentDB, Keyspaces, and QLDB
- Learn how to create ElastiCache and Neptune clusters
- Understand AWS database costs
- Learn about data lakes and how to build a data lake in AWS
Hello and welcome to this lecture, where I shall be looking at how both primary storage and I/O pricing are configured.
Having looked at the pricing options for your database compute instances, I now want to focus on the storage aspects of your databases and how these charges are calculated across the different DB engines.
MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server all use Elastic Block Store (EBS) volumes for both data and log storage. Aurora, on the other hand, uses a shared cluster storage architecture and does not use EBS.
So let me first look at the pricing for the majority of the DB engines using EBS.
When configuring your storage requirements, RDS supports three different storage types:
- General Purpose SSD Storage
- Provisioned IOPS (SSD) storage
- Magnetic Storage
Let’s take a closer look at each, starting with General purpose SSD storage:
- General Purpose SSD storage: This is a good option for a broad range of use cases, providing single-digit millisecond latencies and a cost-effective storage solution. The minimum storage for your primary data set is 20 GiB, with a maximum of 64 TiB for MySQL, PostgreSQL, MariaDB, and Oracle; the maximum for SQL Server is 16 TiB. When using General Purpose SSD, you are charged for the amount of storage provisioned and not for the number of I/Os processed.
- Provisioned IOPS (SSD) storage: This option is designed for workloads that operate at very high I/O. You can provision a minimum of 8,000 IOPS and a maximum of 80,000 for MySQL, PostgreSQL, MariaDB, and Oracle; the maximum for SQL Server is 40,000. In addition to provisioning the IOPS needed for your workload, the minimum storage for your primary data set is 100 GiB, with a maximum of 64 TiB for MySQL, PostgreSQL, MariaDB, and Oracle, and 16 TiB for SQL Server. Charges for this option are based on the amount of storage provisioned plus the IOPS rate selected; again, you are not charged for the number of I/Os processed.
- Magnetic storage: Finally, magnetic storage is supported only to provide backward compatibility, so AWS recommends that you select General Purpose SSD instead.
The following screenshot shows the configuration screen when determining your storage requirements for MySQL. In this example, Provisioned IOPS (SSD) has been selected as the storage type, with a minimum of 100 GiB of primary storage and 1,000 provisioned IOPS as throughput.
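The same configuration shown in the console can also be expressed programmatically. Below is a minimal sketch of the storage-related parameters for the AWS SDK for Python (boto3); the instance identifier, instance class, and credential values are placeholders of my own, not values from the lecture:

```python
# Storage settings mirroring the console example: Provisioned IOPS (io1),
# 100 GiB of primary storage, and 1,000 provisioned IOPS.
db_params = {
    "DBInstanceIdentifier": "example-mysql-db",  # hypothetical name
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",            # any supported class
    "StorageType": "io1",                        # Provisioned IOPS (SSD)
    "AllocatedStorage": 100,                     # GiB, the io1 minimum
    "Iops": 1000,                                # provisioned IOPS throughput
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me",           # placeholder only
}

# To create the instance for real, pass these to the RDS client:
# import boto3
# rds = boto3.client("rds", region_name="eu-west-2")  # London region
# rds.create_db_instance(**db_params)
```

Note that `AllocatedStorage` and `Iops` are billed on what you provision here, not on what the database actually uses.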
Let’s now take a look at the pricing structure for the storage.
The costs for your database storage have two different price points, depending on whether it has been configured as a Single-AZ or Multi-AZ deployment. Much like instance pricing, Multi-AZ storage is typically twice the cost of a Single-AZ deployment.
Each of the storage types, General Purpose SSD, Provisioned IOPS (SSD), and Magnetic, comes at a different price. Each is priced per GB-month, but what exactly is the GB-month metric?
Essentially, this defines how many GBs of storage have been provisioned and for how long. Let me give an example. Assuming a 30-day month, a bill for 10 GB-months of storage could be the result of any of the following scenarios:
- You have 300 GB of storage running for just 24 hours
- You have 10 GB of storage running for 720 hours
- You have 40 GB of storage running for 180 hours
These are calculated using the following formula:
Total provisioned storage / (720/number of hours running)
Where 720 = number of hours in a 30-day month
So, looking at the examples above the calculations would be as follows:
- 300 / (720/24) = 10 GB-months
- 10 / (720/720) = 10 GB-months
- 40 / (720/180) = 10 GB-months
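As a quick sanity check, the formula above can be expressed in a few lines of Python (the function name here is my own, not something from AWS):

```python
HOURS_PER_MONTH = 720  # hours in a 30-day billing month

def gb_months(provisioned_gb: float, hours_running: float) -> float:
    """Return GB-month storage usage for a given allocation and duration."""
    return provisioned_gb / (HOURS_PER_MONTH / hours_running)

# Each of the three scenarios from the lecture works out to 10 GB-months:
print(gb_months(300, 24))   # 300 GB for 24 hours  -> 10.0
print(gb_months(10, 720))   # 10 GB for 720 hours  -> 10.0
print(gb_months(40, 180))   # 40 GB for 180 hours  -> 10.0
```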
The following screenshots show the costs for each of the storage types for the MySQL DB engine in the London region under a Single-AZ deployment:
I now want to revisit the storage for the Aurora DB engine. As I explained earlier, Aurora uses a shared cluster storage architecture which is managed by the service itself. When configuring your Aurora database in the console, the option to configure and select storage as we saw previously does not exist. Your storage scales automatically as your database grows. As a result, the pricing structure for your Aurora database is different.
Again, the pricing metric used is GB-months, in addition to the actual number of I/Os processed, which are billed per million requests. The great thing about Aurora is that you are only billed for the storage used and I/Os processed, whereas with MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server you are billed for the storage provisioned, regardless of whether you use all of it or just a part of it.
As an example, the following shows the costs for both the storage used (in GB-months) and the I/Os processed (per million requests) within the London region.
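To illustrate how these two billing dimensions combine, here is a small sketch of an Aurora monthly storage bill. The per-unit prices below are placeholders for illustration only, not the actual London-region rates, and the function is my own:

```python
# Hypothetical per-unit prices -- check the AWS pricing page for real rates.
PRICE_PER_GB_MONTH = 0.11      # USD per GB-month of storage actually used
PRICE_PER_MILLION_IO = 0.22    # USD per million I/O requests processed

def aurora_storage_cost(gb_used: float, io_requests: int) -> float:
    """Estimate a monthly Aurora storage bill: storage used plus I/Os processed."""
    storage_cost = gb_used * PRICE_PER_GB_MONTH
    io_cost = (io_requests / 1_000_000) * PRICE_PER_MILLION_IO
    return round(storage_cost + io_cost, 2)

# e.g. 500 GB of data and 30 million I/O requests in the month:
print(aurora_storage_cost(500, 30_000_000))  # -> 61.6
```

The key contrast with the EBS-backed engines is the first argument: for Aurora it is storage used, whereas for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server the equivalent figure would be the full amount provisioned.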
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.