Transferring the Data
Difficulty: Intermediate
Duration: 39m
Description

Migrating an SAP landscape to Azure is a complicated task with lots of moving parts. Apart from the sheer volume of data, a migration can involve software upgrades and data conversions. Not only will a landscape need to work properly once migrated, but in most cases, the migration needs to happen quickly as SAP systems are usually the information backbone of an organization.

Once the target environment structure has been finalized, the next important phase is planning the migration so that system downtime is kept to a minimum. There are different migration strategies to suit different scenarios, and this course looks at which strategy works best for each one. We then look at ways to optimize a migration, along with the pros, cons, and issues to look out for.

Learning Objectives

  • Understand the main scenarios and considerations when migrating from SAP to Azure
  • Learn the two methods for getting your data into Azure
  • Understand the difference between homogeneous and heterogeneous migration
  • Understand how the database migration option available in SAP can be used to migrate your data to Azure
  • Learn about the SAP HANA Large Instances service from Azure

Intended Audience

  • Database administrators
  • Anyone looking to migrate data from SAP to Azure

Prerequisites

To get the most out of this course, you should have a basic knowledge of both Microsoft Azure and SAP.

Transcript

There are two methods for getting your data into the Azure cloud. First and foremost, we are all familiar with copying or uploading files over a network or the internet. Alternatively, you can go old school and send Microsoft a physical device with the data on it, which is then uploaded directly into your environment.

Before the proliferation of ultrafast broadband, transferring large files over the internet was a slow and risky proposition. There was a time when buying software online often meant having a DVD delivered to your letterbox. As I mentioned earlier, Microsoft provides a service called Azure Data Box. The Data Box service sends you a physical storage device to load your files onto and ship back to Microsoft, who will then upload the files into your environment. This is a good solution for very large amounts of data and is a well-used service, but for obvious reasons, not the most timely of methods. Transfer time is measured in days or weeks, with somewhere between one and two weeks being typical before your data is available for use. While Azure Data Box will not be practical for most production migrations, it is an option for testing a migration using production amounts of data.

Two key factors determine the efficiency of transferring large files over a network or the internet: bandwidth and reliability. The larger the bandwidth, or the bigger the pipe, the faster a file can be transferred. Having said that, depending on the file copy software, it is often more efficient to transfer several smaller files simultaneously rather than one large file. This comes down to the multithreaded nature of the file copy software, where each thread on its own doesn't use 100% of the bandwidth, so running several in parallel makes better use of the pipe.
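To make the parallel-copy idea concrete, here's a minimal Python sketch using the azure-storage-blob SDK; the SAS URL, container, and export file names are placeholder assumptions for illustration, not values from this course.

```python
# A minimal sketch of a multithreaded upload using the azure-storage-blob
# SDK (pip install azure-storage-blob). The SAS URL, container, and file
# locations are placeholders.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

from azure.storage.blob import ContainerClient

SAS_URL = "https://<account>.blob.core.windows.net/<container>?<sas-token>"
container = ContainerClient.from_container_url(SAS_URL)

def upload_file(path: Path) -> str:
    # max_concurrency also parallelizes the blocks of this one file;
    # a single stream rarely saturates the whole pipe on its own.
    with path.open("rb") as data:
        container.upload_blob(name=path.name, data=data,
                              overwrite=True, max_concurrency=4)
    return path.name

# Upload several export files at once rather than one after the other.
files = list(Path("export").glob("*.dat"))
with ThreadPoolExecutor(max_workers=8) as pool:
    for name in pool.map(upload_file, files):
        print(f"uploaded {name}")
```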

As well as multithreaded copy utilities, you will also want to use file transfer software that can pick up where it left off in the case of a network failure. There is nothing quite as frustrating as getting to the end of a large file copy only to have the connection go down and then having to start again. AzCopy is a tool specifically designed to upload files to Azure Blob or File storage, and it allows you to restart a file transfer from the last stopped position. SAP copy utilities support the FTP protocol if you prefer a more traditional file transfer method for a heterogeneous migration.
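AzCopy keeps this restart bookkeeping for you, but as a rough illustration of the mechanism, here is a Python sketch of a resumable upload using the block blob primitives in the azure-storage-blob SDK; the blob URL and file name are placeholders.

```python
# A sketch of a restartable upload built on block blob primitives in the
# azure-storage-blob SDK; AzCopy does this bookkeeping for you, this just
# shows the idea. Blob URL and file name are placeholders.
import base64

from azure.core.exceptions import ResourceNotFoundError
from azure.storage.blob import BlobBlock, BlobClient

CHUNK = 8 * 1024 * 1024  # 8 MiB per block
blob = BlobClient.from_blob_url(
    "https://<account>.blob.core.windows.net/<container>/export.dat?<sas>")

def block_id(i: int) -> str:
    # Block IDs must be base64 strings of equal length.
    return base64.b64encode(f"block-{i:08d}".encode()).decode()

# Blocks staged before a failure survive on the service side, so on a
# retry we can skip everything that already made it across.
try:
    _, uncommitted = blob.get_block_list(block_list_type="uncommitted")
    already_staged = {b.id for b in uncommitted}
except ResourceNotFoundError:
    already_staged = set()

block_list = []
with open("export.dat", "rb") as f:
    i = 0
    while chunk := f.read(CHUNK):
        bid = block_id(i)
        if bid not in already_staged:
            blob.stage_block(bid, chunk)
        block_list.append(BlobBlock(bid))
        i += 1

blob.commit_block_list(block_list)  # assemble the blocks into the blob
```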

It's good to have an idea of timeframes when doing a large data transfer over the network. While several websites provide tools for calculating how long a transfer will take, I have here an outline of the calculation so you can work it out yourself. It's straightforward; we just divide the amount of data by the transfer rate, that is, how quickly the data moves across the wires. It's just a case of converting the amount of data into a number of bits, as transfer rates are stipulated in megabits per second.

A megabit per second (Mbps) works out at just over 1 million bits per second. Converting one terabyte, or one thousand gigabytes, into bits means multiplying by 1024 to get megabytes, by 1024 again to get kilobytes, by 1024 once more to get bytes, and finally by eight to get bits, which equals a huge number. We divide that number by a transfer rate of 300 Mbps, which gives us 27306 seconds, or around 7 hours and 35 minutes. This is probably the most optimistic figure for these numbers, as it doesn't take into account any packet failures or retries. With a 1 Gbps pipe, the file copy will still take well over two hours, and if the amount to transfer is 5 TB, then realistically, you are looking at close to 12 hours. What, if anything, can be done to speed up the data transfer?
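Here's the same calculation as a few lines of Python, so you can plug in your own numbers; it follows the conventions used above (a decimal terabyte of 1000 GB, binary steps below that, and a megabit taken as 2^20 bits), which is where the 27306-second figure comes from.

```python
# The transfer-time arithmetic from above, so you can try your own numbers.
def transfer_seconds(terabytes: float, mbps: float) -> float:
    bits = terabytes * 1000 * 1024**3 * 8   # TB -> GB -> bytes -> bits
    return bits / (mbps * 2**20)            # Mbps -> bits per second

secs = transfer_seconds(1, 300)
print(f"{int(secs)} s, about {secs / 3600:.1f} hours")  # 27306 s, ~7.6 h
```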

The most obvious solution is to increase bandwidth by temporarily setting up one or more site-to-site VPN connections to the data center, with the caveat that your on-premises location has the bandwidth to cope with the extra VPN connections. You can also use additional site-to-site VPN connections to supplement an ExpressRoute connection. In most situations, it is easier to add VPN connections than to increase capacity on an ExpressRoute link. This is mainly to do with the dependency on a third party, your ExpressRoute provider, and all the admin and contractual obligations involved in such arrangements. However, if you need the extra capacity for weeks or months, as in the case of testing and trial runs, rather than hours or days for the actual migration, then it may be worth upgrading your ExpressRoute capacity. We've been talking about copying files to Azure blob storage, but remember that is not the final destination: the data still needs to be restored or transferred to the virtual machines. Bear that in mind when working out your migration timings.
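As a hypothetical worked example of including that second leg, the sketch below reuses the transfer-time formula and adds an assumed in-Azure restore rate; both rates are illustrative assumptions only, not figures from this course.

```python
# Hypothetical end-to-end timing: the blob upload is only leg one, the
# restore onto the VMs is leg two. Both rates are illustrative assumptions.
def transfer_seconds(terabytes: float, mbps: float) -> float:
    return terabytes * 1000 * 1024**3 * 8 / (mbps * 2**20)

upload_h = transfer_seconds(5, 1000) / 3600    # 5 TB up a 1 Gbps pipe
restore_h = transfer_seconds(5, 2000) / 3600   # assumed in-Azure restore rate
print(f"upload {upload_h:.1f} h + restore {restore_h:.1f} h "
      f"= {upload_h + restore_h:.1f} h total")
```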

Before we move on from data transfer, just a quick note about other environments such as dev and test: whether you choose to migrate these environments as well or recreate them from production is a matter of organizational preference.

Some businesses regularly refresh QA or test environments from production anyway, so migrating that environment is unnecessary as it can be re-established once production is in place. Development environments, on the other hand, are typically different from production and tend to be much smaller in terms of data, so they do need to be migrated, but that makes the job much less of a chore.

You can use your smaller development data for initial migration testing, but if it is a lot smaller than production, be aware that there may be issues that won't appear until you migrate at scale. Testing a migration with a 100 GB dataset means you can fail fast and fix problems quickly, but if your production data is 5 TB, then there may be some issues that testing with the smaller dataset won't uncover.

 

About the Author

Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.