
Scaling EC2 Instances for SAP HANA

Contents

Course Introduction
  1. Introduction (3m 1s)
AWS Compute Fundamentals
  2. What is Compute? (1m 49s)
  3. Amazon EC2 (28m 26s)
EC2 Auto Scaling
ELB & Auto Scaling Summary
  14. Summary (7m 37s)
Overview

Difficulty: Beginner
Duration: 2h 40m
Description

In this section of the AWS Certified: SAP on AWS Specialty learning path, we introduce you to the various Compute services currently available in AWS that are relevant to the PAS-C01 exam.

Learning Objectives

  • Identify the various Compute services available in AWS
  • Define the different families within AWS Compute services
  • Identify the purpose of load balancers and Elastic Load Balancing
  • Understand how Auto Scaling can enable your Compute resources to scale elastically based on varying levels of demand
  • Identify supported EC2 instance types for SAP on AWS

Prerequisites

The AWS Certified: SAP on AWS Specialty certification has been designed for anyone who has experience managing and operating SAP workloads. Ideally you’ll also have some exposure to the design and implementation of SAP workloads on AWS, including migrating these workloads from on-premises environments. Many exam questions require a solutions-architect level of knowledge across a range of AWS services, including AWS Compute services. All of the AWS Cloud concepts introduced in this course will be explained and reinforced from the ground up.

Transcript

Scaling EC2 instances for SAP HANA. One of the best parts about working in the cloud is that you get to right-size your instances for today. You don't have to worry about the on-premises problem of building for three or five years from now. You just pick the instance type that works for today's workload, and you can upgrade it later down the road. With that in mind, let's take a look at the scaling options and which instances work for different memory requirements. There are two ways we can scale our operations: we can either scale up by using bigger instances, or we can scale out by adding more nodes.
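To make the scale-up path concrete, here is a minimal sketch (our own illustration, not part of the course) of resizing an EC2 instance in place with boto3: stop the instance, change its type, and start it again. The instance ID and target type below are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance ID
TARGET_TYPE = "x1.32xlarge"          # placeholder: the next size up

# The instance type can only be changed while the instance is stopped.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    InstanceType={"Value": TARGET_TYPE},
)

ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
```

Note that this incurs downtime while the instance is stopped, which is why SAP HANA resizing is typically done in a planned maintenance window.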

Let's look at scaling up first. Looking across the certified SAP instances, we can order them by memory in terabytes from lowest to highest: the R instance family works its way up from 0.256 terabytes of memory to 0.768 terabytes. That's a decent chunk of space to work with, honestly, but we can go much farther. The X1 instance family ups the game a bit, ranging from almost one terabyte to almost four terabytes of memory. And finally, the High Memory (U) instance family starts at just over six terabytes of memory and reaches the massive 24-terabyte limit.
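One way to check per-node memory figures like these is to query EC2's instance-type metadata, as in the sketch below. The instance types listed are just representative examples from the families mentioned in the transcript, not an exhaustive list of SAP-certified instances.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask EC2 for metadata on a few example types from the R, X1, and X1e families.
resp = ec2.describe_instance_types(
    InstanceTypes=["r5.8xlarge", "r5.24xlarge", "x1.32xlarge", "x1e.32xlarge"]
)

# Print each type's memory, smallest first, converted from MiB to TiB.
for itype in sorted(resp["InstanceTypes"], key=lambda t: t["MemoryInfo"]["SizeInMiB"]):
    tib = itype["MemoryInfo"]["SizeInMiB"] / (1024 * 1024)
    print(f"{itype['InstanceType']}: {tib:.2f} TiB")
```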

So, if you're just starting off, maybe go with the R types; you can always scale up into another category when needed. The other option is our ability to scale outwards. Sometimes it's not feasible to run a single-node system: maybe there's no larger instance type to move to, or we've already maxed out the biggest one. When we look at our multi-node options for systems running OLAP or OLTP, we have some very good scalability.

When we look at OLAP workloads, the R5 family can scale out 16x, which gives us 12 terabytes of memory when using the 0.768-terabyte instance. The X1 family can scale out 25 times, with each node giving us two terabytes, for a grand total of 50 terabytes of combined memory. Following that, the U-type instances can scale out 16 times at six terabytes each, for a grand total of 96 terabytes of memory.

Finally, we have the X1e instances, which can reach 25 nodes at four terabytes each, for a final 100 terabytes of memory. Our OLTP options are a little more limited, with only two choices: the 6-terabyte High Memory (u-6tb1) instances can scale out to four nodes with six terabytes of memory each, for a total of 24 terabytes of combined memory. And finally, the 12-terabyte High Memory (u-12tb1) instances can scale out to four nodes at 12 terabytes each, giving a grand total of 48 terabytes of memory.
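The scale-out totals in these two paragraphs are just node count multiplied by per-node memory. As a quick sanity check of the quoted figures (our own illustration, using the numbers from the transcript):

```python
# (max_nodes, tb_per_node) for each scale-out option quoted in the transcript.
scale_out_options = {
    "OLAP / R5":       (16, 0.768),
    "OLAP / X1":       (25, 2),
    "OLAP / U":        (16, 6),
    "OLAP / X1e":      (25, 4),
    "OLTP / u-6tb1":   (4, 6),
    "OLTP / u-12tb1":  (4, 12),
}

for name, (nodes, tb) in scale_out_options.items():
    print(f"{name}: {nodes} nodes x {tb} TB = {nodes * tb:.0f} TB total")
```

Running this reproduces the quoted totals: 12, 50, 96, 100, 24, and 48 terabytes respectively.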

 

About the Author

Stuart has been working within the IT industry for two decades, covering a huge range of topic areas and technologies, from data center and network infrastructure design to cloud architecture and implementation.

To date, Stuart has created 150+ cloud-related courses reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program in recognition of his contributions to the AWS community.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016, Stuart was awarded the ‘Expert of the Year Award 2015’ from Experts Exchange for sharing his cloud services knowledge with the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.