SAP HANA Large Instances

Migrating an SAP landscape to Azure is a complicated task with lots of moving parts. Apart from the sheer volume of data, a migration can involve software upgrades and data conversions. Not only will a landscape need to work properly once migrated, but in most cases, the migration needs to happen quickly as SAP systems are usually the information backbone of an organization.

Once the target environment structure has been finalized, the next important phase is planning the migration so that system downtime is kept to a minimum. There are different migration strategies to suit multiple scenarios, and this course looks at which strategy works best for each scenario. We then look at ways to optimize a migration, along with the pros, cons, and issues to look out for.

Learning Objectives

  • Understand the main scenarios and considerations when migrating from SAP to Azure
  • Learn the two methods for getting your data into Azure
  • Understand the difference between homogeneous and heterogeneous migration
  • Understand how the database migration option available in SAP can be used to migrate your data to Azure
  • Learn about the SAP HANA Large Instances service from Azure

Intended Audience

  • Database administrators
  • Anyone looking to migrate data from SAP to Azure


To get the most out of this course, you should have a basic knowledge of both Microsoft Azure and SAP.


SAP HANA Large Instances on Azure is a dedicated hardware infrastructure certified as an SAP Tailored Data Center Integration (TDI) solution, meaning a bare-metal server with associated storage components. On the one hand, this is not dissimilar from an on-premises scenario; on the other, there are several quirks to consider from a migration point of view when validating the environment before migration. Microsoft will supply a system running either SUSE Linux Enterprise Server or Red Hat Enterprise Linux, but in either case the OS has not been registered with the respective provider.

As the SAP HANA instance is not connected directly to the Internet, you will need to set up an Azure VM to act as an OS subscription manager. In the case of SUSE Linux, this will be a Subscription Management Tool server, and for Red Hat, a Subscription Manager server. These subscription servers will enable you to register and keep the operating systems correctly updated and patched.
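As a sketch, pointing the large instance at these subscription servers could look like the following; the hostnames are placeholders for the Azure VMs you stand up, and `clientSetup4SMT.sh` is the client-setup script an SMT server serves to its clients:

```shell
# SUSE: fetch and run the client-setup script from your SMT server VM.
# "smtserver.contoso.local" is a placeholder hostname, not a real server.
curl http://smtserver.contoso.local/repo/tools/clientSetup4SMT.sh -o clientSetup4SMT.sh
chmod +x clientSetup4SMT.sh
sudo ./clientSetup4SMT.sh --host smtserver.contoso.local

# Red Hat: register against your Subscription Manager server instead
# ("subscription.contoso.local" is likewise a placeholder).
sudo subscription-manager register --serverurl=subscription.contoso.local
```

Once registered, the large instance can pull patches through the subscription server even though it has no direct Internet connectivity.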

SAP does require that you have a current support contract with your OS provider, and that the person carrying out the SAP HANA installation has either passed the SAP Technology Associate or SAP HANA Installation exam, or is an SAP-certified system integrator.

After validating the OS installation and setup, you will need to ensure that time synchronization is correctly applied. Large instance compute units are not time-synchronized like Azure VMs, so you will need to set up a time server that synchronizes the SAP HANA large instance and the Azure VMs that are acting as SAP application servers.
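A minimal chrony setup for this, assuming a placeholder time-server hostname, could look like the following on both the large instance and the application-server VMs:

```shell
# Point chrony at the shared time server ("ntpserver.contoso.local" is a
# placeholder for the server you set up) by adding it to /etc/chrony.conf:
#   server ntpserver.contoso.local iburst
echo "server ntpserver.contoso.local iburst" | sudo tee -a /etc/chrony.conf

# Restart the daemon and confirm the source is being tracked.
sudo systemctl enable --now chronyd
chronyc tracking
```

The key point is that the HANA Large Instance and every SAP application-server VM synchronize against the same source, so database and application timestamps stay consistent.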

Since Azure started offering dedicated HANA large instances (HLI), the power and range of virtual machines have increased significantly, making VMs a practical and cost-effective alternative. Migrating an HLI to an Azure VM configuration with the least downtime is best done using HANA System Replication (HSR).

Using HSR to migrate to a VM with premium or ultra disks involves three steps: setting up system replication, performing a takeover on the secondary system, and disabling replication. Before taking down the HANA database, much of the HANA large instance data and log snapshots can be copied to Azure storage that is accessible by the target virtual machine.
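Those steps map onto `hdbnsutil` commands roughly as follows; the site names, hostname, and instance number here are illustrative placeholders, not values from this course:

```shell
# 1. On the HANA Large Instance (primary): enable system replication.
hdbnsutil -sr_enable --name=HLI

# 2. On the target Azure VM (secondary): register against the primary.
#    "hliserver" and instance 00 are placeholder values.
hdbnsutil -sr_register --remoteHost=hliserver --remoteInstance=00 \
  --replicationMode=sync --operationMode=logreplay --name=AZVM

# 3. Once replication is in sync, during the downtime window,
#    take over on the secondary (the Azure VM)...
hdbnsutil -sr_takeover

# 4. ...and disable replication on the old primary.
hdbnsutil -sr_disable
```

Keeping replication in `sync` mode until the takeover keeps the downtime window to little more than the takeover itself.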

Azure NetApp Files is a high-performance, enterprise-grade shared file-storage service that is SAP HANA certified and lets you copy files using native Linux copy utilities.
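A sketch of that copy step, assuming a placeholder NFS volume address, export path, and backup directory:

```shell
# Mount the Azure NetApp Files NFS volume (10.0.0.4:/hana-migration is a
# placeholder) so it is visible to both source and target systems.
sudo mkdir -p /mnt/hana-migration
sudo mount -t nfs -o rw,hard,vers=4.1,rsize=262144,wsize=262144,tcp \
  10.0.0.4:/hana-migration /mnt/hana-migration

# Copy the data and log snapshots with a native Linux utility.
rsync -a --info=progress2 /hana/backup/ /mnt/hana-migration/
```

Because the volume is plain NFS, no special tooling is needed on either side of the copy.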

Depending on whether the source HANA large instance was deployed using Multiple Components in One System (MCOS), you may need to do a HANA tenant move after migrating with HANA system replication. In the past, MCOS was seen as a workaround to the storage snapshot limitation of multitenant database containers in earlier HANA versions.

Migrating an MCOS-deployed instance with HANA system replication would result in each HANA VM having its own tenant DB. The default deployment of SAP HANA 2.0 is multitenant database containers. A tenant move after the migration will make the independent HANA databases co-tenants in a single HANA container.
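One way to carry out such a tenant move is a backup and recovery of the tenant database, driven from the SYSTEMDB of the target container via `hdbsql`; the tenant name, credentials, and backup path below are all placeholders:

```shell
# On the target system's SYSTEMDB, create an empty tenant to receive the move
# ("ERP" and the passwords are placeholder values).
hdbsql -d SYSTEMDB -u SYSTEM -p '<systemdb-password>' \
  "CREATE DATABASE ERP SYSTEM USER PASSWORD '<initial-password>'"

# The tenant must be stopped before it can be recovered into.
hdbsql -d SYSTEMDB -u SYSTEM -p '<systemdb-password>' \
  "ALTER SYSTEM STOP DATABASE ERP"

# Recover the new tenant from the source tenant's file backup
# (the path is a placeholder for wherever the backup was copied).
hdbsql -d SYSTEMDB -u SYSTEM -p '<systemdb-password>' \
  "RECOVER DATA FOR ERP USING FILE ('/mnt/backup/ERP_COPY') CLEAR LOG"
```

Repeating this for each former MCOS system consolidates the separate databases as tenants of one container.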

While a HANA system replication migration is essentially a homogeneous lift and shift, you will still need user acceptance testing to ensure, amongst other things, that the network connections to application servers are fast enough. SAP application servers must be located close to the HANA DB to ensure minimum network latency.

There is little point in having an expensive and highly optimized in-memory DBMS connecting to distant application servers, squandering performance gains over a relatively slow network. The solution is to have the application servers and HANA DB server in the same datacenter.

Under normal circumstances, the physical placement of virtual machines cannot be guaranteed or maintained. To address this issue, Azure has proximity placement groups (PPGs) that ensure VMs will stay within the same datacenter. One stipulation Microsoft makes about proximity placement groups is to use them sparingly and only assign to a PPG those virtual machines that absolutely have to be there. These guidelines are about resource management at the datacenter level, making sure relatively rare and high-end VM SKU requests can be met.
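With the Azure CLI, creating a proximity placement group and deploying both the database VM and an application-server VM into it might look like this; the resource names, image, and VM sizes are illustrative, not recommendations:

```shell
# Create a resource group and a proximity placement group (placeholder names).
az group create --name sap-rg --location westeurope
az ppg create --name sap-ppg --resource-group sap-rg --location westeurope

# Deploy the HANA DB VM and an SAP application-server VM into the same PPG
# so they land in the same datacenter (image/sizes are placeholders).
az vm create --name hana-db-vm --resource-group sap-rg \
  --image SLES --size Standard_M64s --ppg sap-ppg
az vm create --name sap-app-vm --resource-group sap-rg \
  --image SLES --size Standard_E16s_v5 --ppg sap-ppg
```

Only the latency-sensitive VMs go into the PPG; everything else in the landscape can be placed normally, in line with Microsoft's guidance to use PPGs sparingly.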


About the Author

Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good-quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.