Introduction and Overview
SAP to Azure Migration
Migrating an SAP landscape to Azure is a complicated task with lots of moving parts. Apart from the sheer volume of data, a migration can involve software upgrades and data conversions. Not only will a landscape need to work properly once migrated, but in most cases, the migration needs to happen quickly as SAP systems are usually the information backbone of an organization.
Once the target environment structure has been finalized, the next important phase is planning the migration so that system downtime is kept to a minimum. There are different migration strategies to suit multiple scenarios, and this course looks at which strategy works best for each. We then look at ways to optimize a migration, along with the pros, cons, and issues to look out for.
- Understand the main scenarios and considerations when migrating from SAP to Azure
- Learn the two methods for getting your data into Azure
- Understand the difference between homogeneous and heterogeneous migration
- Understand how the database migration option available in SAP can be used to migrate your data to Azure
- Learn about the SAP HANA Large Instances service from Azure
- Database administrators
- Anyone looking to migrate data from SAP to Azure
To get the most out of this course, you should have a basic knowledge of both Microsoft Azure and SAP.
Within the SAP Software Update Manager (SUM), there is the Database Migration Option (DMO). DMO automates much of what we've already discussed with a classical migration. While it is technically possible to use plain DMO to migrate to the cloud, it isn't officially supported by SAP. However, DMO with system move is designed for migrating to the cloud, and it takes much of the manual complexity out of the downtime phase of the migration. DMO also includes a benchmarking tool that estimates migration times to help with bandwidth sizing and setting downtime expectations.
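As a rough illustration of the kind of estimate such a benchmark informs, the transfer portion of a migration is bounded below by export volume divided by effective bandwidth. The figures and function below are a hypothetical back-of-the-envelope sketch, not DMO output:

```python
# Hypothetical estimate of the file-transfer portion of a DMO
# "system move" migration: export volume divided by effective bandwidth.
# Real DMO benchmark runs measure export/import rates empirically.

def transfer_hours(export_gb: float, bandwidth_mbps: float,
                   efficiency: float = 0.7) -> float:
    """Hours to move export_gb over a link of bandwidth_mbps,
    assuming only `efficiency` of the nominal bandwidth is usable."""
    gigabits = export_gb * 8                      # GB -> gigabits
    effective_mbps = bandwidth_mbps * efficiency  # usable megabits/sec
    seconds = gigabits * 1000 / effective_mbps    # gigabits -> megabits
    return seconds / 3600

# Example: 2 TB of export files over a 1 Gbps link at 70% efficiency.
print(f"{transfer_hours(2000, 1000, 0.7):.1f} hours")  # about 6.3 hours
```

In practice export and import speeds on either side often dominate, which is why DMO's benchmark measures actual rates rather than relying on link capacity alone.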
In one step, it can perform a system upgrade, a Unicode conversion if required, and a database migration to the cloud. As you would expect with an automated process, it is easier to implement; you don't need the same SAP certification as with a classical migration, but it is slightly less flexible, and there is less opportunity for performance tuning. DMO with system move only supports SAP HANA and SAP ASE as target databases, although other DBMSs are available on request. It can't be used when the source DBMS is SAP HANA. Another DMO limitation is that it is designed to work with the ABAP stack; if your system runs Java components in a dual-stack configuration, the stacks have to be split before migration.
When you start a DMO with system move migration, the Software Update Manager running on-premises begins by checking and preparing the source system. Once the source system has been prepared, the shadow repository is created. The migration starts in earnest when data is exported to files on the source system, and these files are transferred to Azure via ExpressRoute or VPN. The exported files can be transferred either sequentially or in parallel. Sequential mode means that all files are exported to the SUM directory before the transfer to Azure occurs; only once all files have been uploaded to the cloud does the import process begin.
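The difference between the two modes can be pictured with a toy pipeline model (this is an illustrative sketch, not DMO's actual scheduler): in sequential mode the export, transfer, and import phases run back to back, while in parallel mode they overlap per file, so total time tends toward the slowest stage:

```python
# Toy model contrasting the two DMO transfer modes. Each file passes
# through export -> transfer -> import.

def sequential_total(files: int, export_s: float,
                     transfer_s: float, import_s: float) -> float:
    # All files exported, then all transferred, then all imported.
    return files * (export_s + transfer_s + import_s)

def pipelined_total(files: int, export_s: float,
                    transfer_s: float, import_s: float) -> float:
    # Classic pipeline bound: fill the pipeline once, then the slowest
    # stage dictates throughput for the remaining files.
    stages = [export_s, transfer_s, import_s]
    return sum(stages) + (files - 1) * max(stages)

# 100 export files: 30 s to export, 60 s to transfer, 45 s to import each.
print(sequential_total(100, 30, 60, 45))  # 13500 seconds
print(pipelined_total(100, 30, 60, 45))   # 6075 seconds
```

Under these assumed per-file timings, overlapping the phases cuts the elapsed time by more than half, which is why parallel mode is attractive when the network link isn't the bottleneck.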
As the name implies, parallel data transfer mode starts transferring files to the target environment and begins the import process before the export has finished. While there is limited opportunity to manually optimize a DMO migration beyond determining the optimal number of R3Load processes, the software does have a form of AI that optimizes table splitting based on previous migration runs. It detects when the number of running R3Load processes falls below 90% of those configured to run, which is taken as an indication that no more tables are being extracted. The interval between this 90% point and the finish of the import process is called the tail, and the goal is to have the shortest tail possible. The Software Update Manager uses data from past migration runs to split tables, combine the table splits with smaller tables into packages, and then order the package execution so that the maximum number of R3Load processes is active simultaneously, minimizing the tail and the downtime.
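The 90% heuristic described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not SUM's implementation; the sample data and function names are invented:

```python
# Hypothetical sketch of the "tail" heuristic: the tail starts when the
# number of running R3Load processes drops below 90% of the configured
# count, and lasts until the import finishes.

def tail_start(samples: list[int], configured: int,
               threshold: float = 0.9):
    """Index of the first sample where active R3Load processes fall
    below threshold * configured, or None if that never happens."""
    limit = configured * threshold
    for i, active in enumerate(samples):
        if active < limit:
            return i
    return None

# One sample per minute of active R3Load processes, 40 configured.
samples = [40, 40, 39, 38, 40, 35, 30, 20, 10, 4, 1]
start = tail_start(samples, configured=40)   # limit is 36 processes
tail_minutes = len(samples) - start
print(start, tail_minutes)  # prints: 5 6
```

Here the process count first dips below 36 (90% of 40) at minute 5, so the final six minutes of the run are the tail that better table splitting would try to shrink.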
It is clear with both the classical and DMO migration methods that the shortest downtime is achieved by a combination of the number of active parallel R3Load processes and the table makeup of the packages those processes execute. As the possible combinations of these factors are enormous, it is exceptionally unlikely that you will strike the optimum on the first run. In either scenario, it will take multiple repeat runs of the downtime phase, whether by you with a classical migration or by the DMO software, to zero in on an optimal solution.
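One simple way to picture the packaging problem is longest-processing-time-first scheduling: hand the biggest table splits out first, always to the least-loaded R3Load slot, so all processes finish at roughly the same time. This is a generic heuristic used for illustration, not SUM's actual algorithm, and the split sizes below are invented:

```python
import heapq

# Generic longest-processing-time-first heuristic (illustrative only):
# assign the largest table splits first to the least-loaded R3Load slot
# so that all processes finish close together, shrinking the tail.

def schedule(split_sizes: list[float], processes: int):
    heap = [(0.0, p) for p in range(processes)]  # (load, slot)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(processes)}
    for size in sorted(split_sizes, reverse=True):
        load, p = heapq.heappop(heap)            # least-loaded slot
        assignment[p].append(size)
        heapq.heappush(heap, (load + size, p))
    makespan = max(load for load, _ in heap)     # slowest slot's total
    return assignment, makespan

# Eight table splits (in GB) spread across three R3Load processes.
splits = [90, 70, 50, 40, 30, 20, 10, 10]
plan, makespan = schedule(splits, 3)
print(makespan)  # 110
```

With a total of 320 GB across three processes, the theoretical best is about 107 GB per slot; the heuristic lands at 110, and it is exactly this kind of near-balanced packing, refined over repeated runs, that the downtime rehearsals converge toward.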
Near-zero downtime migration with DMO is essentially a DMO migration where specified large tables are migrated in advance of the downtime or cutover period, with replication to the target system set up so that any changes to those large tables are carried across alongside the normal migration process. This is yet another variation on the divide-and-conquer strategy, where only a small subset of the largest tables is migrated during the downtime.
Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.