
Block Storage for SAP on AWS


Instructor: Danny Jessee


Hello, and welcome to this lecture, where I will be discussing block storage using Amazon EBS, or Elastic Block Store, for SAP workloads on AWS. By the end of this lecture, you'll understand how to use Amazon EBS with the EC2 instances that are part of your SAP deployments on AWS, along with the different types of EBS volumes you can provision and when you should use each of them.

Amazon EBS provides raw, unformatted block storage volumes, which you can think of as being like traditional disk drives. EBS volumes can range in size from 1 GiB all the way up to 16 TiB. You can create file systems on top of EBS volumes, and multiple EBS volumes may be attached to a single EC2 instance; however, an EBS volume may only be attached to a single EC2 instance at a time.


EBS volumes may also be combined in any standard RAID configuration for additional redundancy, as long as the underlying operating system of the EC2 instance supports it. Now it's worth pointing out that all EBS volumes are already replicated within a single Availability Zone to protect against the failure of a single underlying disk, giving you five nines, or 99.999%, availability per EBS volume.

You'll be using EBS volumes as the storage volumes for the EC2 instances that host your SAP applications and databases, where they can hold everything from the data residing in your SAP databases, to transaction and application log files, or even local backups. That said, we'll see shortly that another service, Amazon S3, is better suited for long-term backup storage.

Unlike the EC2 instance store, which is ephemeral, data stored on EBS volumes persists independently of the lifecycle of the EC2 instance to which they are attached. EBS volumes can also be backed up using snapshots, which are incremental, point-in-time backups stored in Amazon S3. You can even use EBS snapshots to create replicas of existing SAP systems for development or testing. And EBS volumes, as well as their snapshots, may be encrypted for additional security, with no additional cost or performance impact.

Now when creating a new EBS volume, one of the first choices you'll need to make is which volume type to use. Broadly speaking, EBS volume types fall into one of two categories: solid-state drives, or SSDs, and magnetic hard disk drives, or HDDs. These are no different from the SSDs or magnetic disks you might use in your on-premises servers, with the same reliability, cost, and performance tradeoffs to consider.

SSD-backed volume types

So SSDs are flash storage with no moving parts, and because of this, they perform much faster and are far more reliable than traditional magnetic storage. This makes them ideally suited for SAP transactional workloads and instance boot volumes, where read and write speed matters most. Valid use cases for SSDs include file systems for SAP applications, SAP database log and data files, and even SAP local backups. We measure the performance of these volumes in input/output operations per second, or IOPS. In AWS, your choices for solid-state storage are the General Purpose (gp2 and gp3) volume types and the Provisioned IOPS (io1 and io2) volume types.

Both General Purpose and Provisioned IOPS SSD storage offer single-digit millisecond latency. But while gp2 and gp3 volumes can offer up to 16,000 IOPS per volume, the io1 and io2 volume types allow you to scale up to 64,000 IOPS, which may be necessary if your SAP database has sustained I/O-intensive throughput requirements. As you might expect, this higher performance comes at a higher cost.
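To make these baselines concrete, here is a rough sizing sketch. The scaling rules come from the AWS EBS documentation rather than the lecture: gp2 scales at 3 IOPS per GiB between a 100 IOPS floor and the 16,000 IOPS cap, while gp3 starts at a flat 3,000 IOPS baseline regardless of size. The function names are my own.

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline performance scales at 3 IOPS per GiB, with a floor
    of 100 IOPS and a ceiling of 16,000 IOPS (per AWS documentation)."""
    return max(100, min(16_000, 3 * size_gib))


def gp3_baseline_iops(size_gib: int) -> int:
    """gp3 volumes get a flat 3,000 IOPS baseline regardless of size;
    anything above that (up to 16,000) is provisioned separately."""
    return 3_000


print(gp2_baseline_iops(100))    # 300
print(gp2_baseline_iops(2_000))  # 6000
print(gp2_baseline_iops(8_000))  # 16000 (capped)
```

Note how a small gp2 volume may need to be over-provisioned on capacity just to reach a target IOPS figure, which is exactly the situation gp3's independent IOPS provisioning addresses.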

 

                 General Purpose (gp2/gp3)            Provisioned IOPS (io1/io2)
Maximum IOPS     16,000                               64,000
Throughput       250 MB/s (gp2) / 1,000 MB/s (gp3)    1,000 MB/s
Latency          Single-digit ms                      Single-digit ms
Cost             $$$                                  $$$$$

 

Determining IOPS and throughput requirements for SAP applications

So how can you determine which volume type is best suited for your SAP workloads? If you've calculated your workload's SAP Application Performance Standard, or SAPS, benchmark using something like the SAP Quick Sizer tool, you can estimate the total number of IOPS you'll need by assuming a value of 60-90% of the Database SAPS for your workload. For OLTP workloads, this factor will be closer to 60% of your Database SAPS; for OLAP workloads, it may be closer to 90%. So for instance, if you're running an OLTP workload such as SAP Business Suite with a total of 10,000 Database SAPS, you can expect to need around 6,000 IOPS for your database storage volume, which can easily be achieved with a gp2 storage volume. But if you're going to need more than 16,000 IOPS, you'll need to use a Provisioned IOPS volume instead.
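The rule of thumb above is easy to capture in code. This is a sketch of the lecture's 60%/90% estimate; the function and parameter names are my own, not part of any SAP or AWS tool.

```python
def estimate_iops(database_saps: int, workload: str) -> int:
    """Estimate required IOPS from Database SAPS using the lecture's
    rule of thumb: ~60% for OLTP workloads, ~90% for OLAP workloads."""
    factors = {"oltp": 0.60, "olap": 0.90}
    return round(database_saps * factors[workload.lower()])


# The SAP Business Suite example from the lecture:
print(estimate_iops(10_000, "oltp"))  # 6000
print(estimate_iops(10_000, "olap"))  # 9000
```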

Generally speaking, you'll want to start with a General Purpose gp2 volume and see whether it meets your performance requirements; gp2 volumes are typically sufficient for most SAP HANA workloads. If they aren't, you can quickly and easily change to a Provisioned IOPS volume instead. For instance, you could start with a gp2 volume and upgrade it to an io1 or io2 volume later on if the gp2 volume doesn't meet your performance requirements. Or you could use a gp3 volume instead, which allows you to provision IOPS and throughput independently of the storage capacity of your volume. This can be useful if you have highly transactional SAP workloads that don't require a lot of additional storage.
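Putting the guidance from the last two paragraphs together, the decision logic might be sketched like this. It is a simplification: the 16,000 IOPS threshold is the General Purpose ceiling discussed above, and the function name is hypothetical.

```python
def pick_volume_type(required_iops: int, iops_independent_of_size: bool = False) -> str:
    """Sketch of the lecture's guidance: start with General Purpose
    volumes, and move to Provisioned IOPS only when requirements
    exceed the 16,000 IOPS General Purpose ceiling."""
    if required_iops > 16_000:
        return "io1/io2 (Provisioned IOPS)"
    if iops_independent_of_size:
        return "gp3 (IOPS and throughput provisioned independently of capacity)"
    return "gp2/gp3 (General Purpose)"


print(pick_volume_type(6_000))                                  # General Purpose is enough
print(pick_volume_type(12_000, iops_independent_of_size=True))  # gp3
print(pick_volume_type(40_000))                                 # needs Provisioned IOPS
```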

HDD-backed volume types

So we’ve seen how SSD-backed EBS volume types are useful for boot volumes as well as critical applications and transactional databases that have low latency and high throughput requirements. But for streaming workloads, or for large volumes of data that are not frequently accessed, HDD-backed volumes are a much more cost-effective alternative. And again, HDDs are your typical magnetic storage disks that have moving mechanical parts.

Now for streaming workloads, a Throughput Optimized st1 volume offers a maximum throughput of up to 500 MB per second at roughly half the cost of a similarly sized gp2 or gp3 volume. And finally, your lowest-cost storage option is the Cold HDD, or sc1, volume type. Just like the other volume types, sc1 volumes can be up to 16 TiB in size and still offer a maximum throughput of up to 250 MB per second, but at about one-third the cost of an equivalent st1 volume. So while an st1 volume might be useful for streaming workloads like log processing, an sc1 volume is better suited for long-term data archives, or any other data that is large but not frequently accessed.
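HDD volume performance also scales with capacity. The per-TiB baseline rates below (40 MiB/s per TiB for st1 and 12 MiB/s per TiB for sc1) come from the AWS EBS documentation rather than this lecture, so treat this as an illustrative sketch with my own function names.

```python
def st1_baseline_throughput_mibps(size_tib: float) -> float:
    """st1 baseline throughput scales at 40 MiB/s per TiB,
    capped at the volume maximum of 500 MiB/s."""
    return min(500.0, 40.0 * size_tib)


def sc1_baseline_throughput_mibps(size_tib: float) -> float:
    """sc1 baseline throughput scales at 12 MiB/s per TiB, well below
    st1 at every size, in exchange for a lower cost per GB."""
    return min(250.0, 12.0 * size_tib)


print(st1_baseline_throughput_mibps(2))   # 80.0
print(st1_baseline_throughput_mibps(16))  # 500.0 (capped)
print(sc1_baseline_throughput_mibps(16))  # 192.0
```

The takeaway: an HDD volume needs to be large before it delivers meaningful baseline throughput, which fits their intended use for big, infrequently accessed data sets.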

 

                 Throughput Optimized (st1)    Cold HDD (sc1)
Maximum IOPS     500                           250
Throughput       500 MB/s                      250 MB/s
Cost             $$                            $

 

Mixing and matching volume types

Now keep in mind that you’re free to mix these different volume types within your overall SAP solution architecture. So it’s entirely valid to have, for instance, a gp2 boot volume and a separate io1 storage volume attached to the same EC2 instance for your SAP database in a way that allows you to effectively balance both cost and performance considerations. You could even leverage the ephemeral EC2 instance store for things like temporary cache files that require the absolute fastest possible read and write speeds, but do not need to be persisted if the associated EC2 instance is terminated. And as I previously mentioned, it is possible to transition between different volume types over time as requirements change. You can also leverage Amazon CloudWatch, where EBS sends data points regarding burst balance, queue length, and read and write IOPS, to monitor the performance of your EBS volumes over time. For more information on infrastructure monitoring using Amazon CloudWatch, check out this course:
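The BurstBalance metric mentioned above reflects gp2's I/O credit bucket. According to the AWS documentation (a detail beyond this lecture), each gp2 volume holds up to 5.4 million I/O credits, spends them whenever demand exceeds its baseline, and refills them at the baseline rate, so you can estimate how long a burst can be sustained:

```python
def gp2_burst_duration_seconds(size_gib: int, demand_iops: int = 3_000) -> float:
    """Roughly how long a gp2 volume's 5.4 million I/O credit bucket
    lasts under sustained demand above its baseline of 3 IOPS per GiB
    (floor 100, cap 16,000), per the documented gp2 credit model."""
    baseline = max(100, min(16_000, 3 * size_gib))
    if demand_iops <= baseline:
        return float("inf")  # demand within baseline: the bucket never drains
    return 5_400_000 / (demand_iops - baseline)


# A 100 GiB gp2 volume (300 IOPS baseline) bursting at 3,000 IOPS:
print(gp2_burst_duration_seconds(100))  # 2000.0 seconds, about 33 minutes
```

If CloudWatch shows BurstBalance regularly approaching zero, that is a strong signal to move the volume to gp3 or a Provisioned IOPS type.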

Operating and Monitoring SAP Workloads on AWS
https://cloudacademy.com/course/operating-and-monitoring-sap-workloads-2508/
 

 

Difficulty: Beginner
Duration: 2h 44m
Description

In this section of the AWS Certified: SAP on AWS Specialty learning path, we introduce you to the various Storage services currently available in AWS that are relevant to the PAS-C01 exam.

Learning Objectives

  • Identify and describe the various Storage services available in AWS
  • Understand how AWS Storage services can assist with large-scale data storage, migration, and transfer both into and out of AWS
  • Describe hybrid cloud storage services and on-premises data backup solutions using AWS Storage services
  • Identify storage options for SAP workloads on AWS

Prerequisites

The AWS Certified: SAP on AWS Specialty certification has been designed for anyone who has experience managing and operating SAP workloads. Ideally you’ll also have some exposure to the design and implementation of SAP workloads on AWS, including migrating these workloads from on-premises environments. Many exam questions will require a solutions architect level of knowledge for many AWS services, including AWS Storage services. All of the AWS Cloud concepts introduced in this course will be explained and reinforced from the ground up.

About the Author

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 150+ cloud-related courses, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program in recognition of his contributions to the AWS community.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016, Stuart was awarded the 'Expert of the Year Award 2015' by Experts Exchange for sharing his knowledge of cloud services with the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.