Scaling EC2 Instances for SAP HANA

Contents

Overview
Difficulty
Intermediate
Duration
12m
Description

This course discusses the EC2 instance types recommended for SAP HANA workloads on AWS and the options for scaling them.

Learning Objectives

Our learning objectives for this course are to:

  • Understand which EC2 instances are recommended for SAP HANA workloads
  • Learn the ways we can scale out these instances
  • Discuss how to migrate to high memory instances

Intended Audience

I would recommend this course for anyone designing an SAP-based architecture.

Prerequisites

You should have a decent understanding of cloud computing and cloud architectures, specifically with Amazon Web Services. You should also have some background knowledge related to SAP HANA.

Transcript

Scaling EC2 instances for SAP HANA. One of the best parts about working in the cloud is that you get to right-size your instances for today. You don't have to worry about the on-premises problem of building for three or five years from now. You just pick the instance type that works for today's workload, and you can upgrade it later down the road. With that in mind, let's take a look at the scaling options and which instances will work for many different memory requirements. There are two ways we can scale: we can either scale up by using bigger instances, or scale out by adding more nodes.

Let's look at scaling up first. When we look at all the available certified SAP instances, we can rank their memory in terabytes from lowest to highest in the following order: the R instance family works its way up from 0.256 terabytes of memory to 0.768 terabytes of memory. That's honestly a good chunk of space to work with, but we can go much further. The X1 instance family raises the bar and ranges from almost one terabyte up to almost four terabytes of memory. And finally, we can visit the U instance family, which starts at just over six terabytes of memory and can reach the massive 24 terabyte limit.
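To make those ranges concrete, here is a minimal sketch in Python of picking the smallest family whose largest size fits a given in-memory footprint. The `smallest_family_for` helper is hypothetical, and the per-family ceilings are just the rounded figures from this lesson, not an official SAP sizing reference:

```python
# Approximate largest single-node memory (TB) per SAP-certified EC2
# family, using the rounded figures from this lesson. Illustrative
# only: real sizing must follow SAP's certified-instance list.
FAMILY_MAX_TB = {
    "R": 0.768,   # R family tops out at 0.768 TB
    "X1": 3.9,    # X1/X1e reach almost 4 TB
    "U": 24.0,    # U (high memory) family reaches 24 TB
}

def smallest_family_for(required_tb: float) -> str:
    """Return the smallest family whose biggest size fits the workload."""
    for family, max_tb in sorted(FAMILY_MAX_TB.items(), key=lambda kv: kv[1]):
        if required_tb <= max_tb:
            return family
    raise ValueError("Exceeds single-node limits; consider scaling out.")

print(smallest_family_for(0.5))   # a small HANA system fits in the R family
print(smallest_family_for(3.0))   # needs X1
print(smallest_family_for(10.0))  # needs a U high memory instance
```

This mirrors the advice in the next paragraph: start in the R family and move up a category only when the workload demands it.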

So, if you're just starting off, maybe go with the R types; we can always scale up into another category when needed. Another option is our ability to scale outwards. Sometimes it's not feasible to keep a single-node system and simply move to a bigger instance type, or maybe we've already maxed that out. When looking at our multi-node options for systems running OLAP or OLTP, we have some very good scalability.

When we look at OLAP workloads, we have the R5 family, which can scale out to 16 nodes, giving us roughly 12 terabytes of memory when using the 0.768 terabyte instance. The X1 family can scale out to 25 nodes, each node giving us two terabytes, for a grand total of 50 terabytes of combined memory. Following this up, we have the U type instances, which can scale out to 16 nodes at six terabytes each for a grand total of 96 terabytes of memory.

Finally, we have the X1e instances, which can reach 25 nodes at four terabytes each for a final 100 terabytes of memory. Our OLTP options are a little more limited, with only two choices: the six-terabyte U instances can scale out to four nodes with six terabytes of memory each, for a total of 24 terabytes of combined memory. And finally, the 12-terabyte U instances can scale out to four nodes at 12 terabytes each, giving a grand total of 48 terabytes of memory.
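Each of those scale-out totals is simply nodes multiplied by memory per node. A quick Python sketch, using the node counts and per-node figures quoted in this lesson, reproduces the arithmetic:

```python
# Scale-out limits as described in this lesson: (max nodes, TB per node).
# Figures follow the narration's rounding and are illustrative only.
olap = {
    "R5":  (16, 0.768),  # ~12 TB total
    "X1":  (25, 2.0),    # 50 TB total
    "U":   (16, 6.0),    # 96 TB total
    "X1e": (25, 4.0),    # 100 TB total
}
oltp = {
    "U 6 TB":  (4, 6.0),   # 24 TB total
    "U 12 TB": (4, 12.0),  # 48 TB total
}

def total_tb(nodes: int, tb_per_node: float) -> float:
    """Combined memory of a fully scaled-out cluster."""
    return nodes * tb_per_node

for name, (nodes, tb) in {**olap, **oltp}.items():
    print(f"{name}: {nodes} nodes x {tb} TB = {total_tb(nodes, tb):.3f} TB")
```

Note that the "12 terabytes" quoted for R5 is the rounded value of 16 × 0.768 = 12.288 TB.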

 

About the Author
Will Meadows
Senior Content Developer

William Meadows is a passionately curious human currently living in the Bay Area in California. His career has included working with lasers, teaching teenagers how to code, and creating classes about cloud technology that are taught all over the world. His dedication to completing goals and helping others is what brings meaning to his life. In his free time, he enjoys reading Reddit, playing video games, and writing books.