Storage Configuration


Deploy and Migrate an SAP Landscape to Azure

After planning and researching the migration of an SAP landscape to Azure, words must become action. In Deploy and Migrate an SAP Landscape to Azure, we look at how crucial infrastructure components can be deployed and configured in preparation for migrating servers and data from on-premises to the Azure cloud.

This course looks at deployment and migration options, along with the tools and services available within Azure and the broader Microsoft ecosystem that will save you time and effort. We touch on SAP-specific issues you need to be aware of and general best practices for deploying Azure resources. The Deployment and Migration course builds on the Designing a Migration Strategy for SAP and Designing an Azure Infrastructure for SAP courses.

Learning Objectives

  • Understand the methods for deploying VMs and prerequisites for hosting SAP
  • Learn about ExpressRoute, Azure Load Balancer, and Accelerated networking
  • Understand how to deploy Azure resources
  • Learn about Desired State Configuration and policy compliance
  • Learn about general database and version-specific storage configuration in Azure
  • Learn about the SQL Server Migration Assistant and Azure Migration tools

Intended Audience

This course is intended for anyone looking to migrate their SAP infrastructure to Azure.


Before taking this course, we recommend completing our Designing a Migration Strategy for SAP and Designing an Azure Infrastructure for SAP courses.


The most basic disk or storage configuration recommended for a production system is to have three separate disks for the OS and executables: one disk for the operating system, one for the database management system binaries, and another for the SAP binaries. The primary reason for this separation is to limit the impact of log writes by the SAP and DBMS executables on OS performance.

Virtual machine, disk, and storage SKUs all have different performance parameters in terms of volume, throughput, and latency, not to mention cost. It is a case of striking the right balance between the number and type of disks you can connect to a VM to maximize performance across these metrics. This is not a straightforward task, as you are dealing with many variables. In a standard database scenario, you can split database files across multiple disks to improve performance. In a virtual environment, adding disks is relatively trivial and will not only increase retrieval speed but also the maximum throughput, at some cost. The same applies to NetApp file shares. The question then becomes: does the VM have the IOPS capacity to take advantage of the storage performance, and can it even support the number of attached disks? Cost becomes more of an issue when the goal is very low latency using Azure Ultra disks, but you also require large volume and throughput.
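When checking whether a VM size can actually support the planned disk layout, the Azure CLI can list each size's maximum data disk count. A minimal sketch, assuming the Azure CLI is installed and you are logged in; the region and name filter are illustrative:

```
# List VM sizes in a region, showing how many data disks each supports
az vm list-sizes --location westeurope \
  --query "[?contains(name, 'E16')].{Name:name, MaxDataDisks:maxDataDiskCount, MemoryMB:memoryInMb}" \
  --output table
```

Comparing MaxDataDisks against the number of disks your layout needs is a quick first filter before looking at the per-size IOPS and throughput limits.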

Suppose you want to implement RAID or disk striping, which you would only need to do for performance, not resiliency, as Azure automatically replicates disks within a data center. In that case, it is recommended to use Windows Storage Spaces in conjunction with Windows Server 2012 R2 or later. On a Linux VM, you can implement software RAID using mdadm and Logical Volume Manager (LVM).
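On Linux, the LVM route can be sketched as follows. The device names, volume names, stripe size, and mount point are all illustrative; check lsblk for your actual attached data disks first:

```
# Stripe two Azure data disks into one logical volume with LVM
sudo pvcreate /dev/sdc /dev/sdd
sudo vgcreate vg_sapdata /dev/sdc /dev/sdd
# -i 2 stripes across both disks; -I 256 sets a 256 KiB stripe size
sudo lvcreate --extents 100%FREE -i 2 -I 256 -n lv_sapdata vg_sapdata
sudo mkfs.xfs /dev/vg_sapdata/lv_sapdata
sudo mount /dev/vg_sapdata/lv_sapdata /sapdata
```

Striping across two disks roughly doubles the available throughput and IOPS, provided the VM size's own limits don't become the bottleneck.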

SQL Server 2008 R2 is the minimum supported release for SAP, but it is recommended to deploy version 2016 or 2017, as they provide much better integration with Azure services. Tempdb should be located on the non-persistent D:\ drive or another separate disk. Except for A-series VMs, the D:\ drive has higher throughput and lower latency than the system drive. Be aware that placing tempdb inside a folder structure on the D:\ drive may become problematic after a reboot, when the drive is re-provisioned and the folder structure may no longer exist. In this case, you will need to check for and re-create the folder structure before the SQL Server service starts. Disks hosting data and log files should be formatted as NTFS with a 64 KB allocation unit size. Microsoft and SAP recommend enabling database page compression if it isn't already enabled, and doing so before migrating the databases to Azure. Compressing before migrating has direct benefits: it decreases migration time and saves on storage costs. If compression isn't already enabled, doing it on-premises should be faster, although that is very dependent on relative hardware specs.
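One way to handle the disappearing tempdb folder is a small startup script, run before the SQL Server service (for example via Task Scheduler, with the service set to manual start). A Windows batch sketch, where the folder path is illustrative:

```
:: Re-create the tempdb folder on the re-provisioned D: drive, then start SQL Server
IF NOT EXIST D:\SQLTEMP MKDIR D:\SQLTEMP
NET START MSSQLSERVER
```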

Unless you deploy your own VHD image with SQL Server, the default installation will have the wrong collation for SAP. To fix this, open a command prompt in C:\Program Files\Microsoft SQL Server\<version number>\Setup Bootstrap\SQLServer<version> and execute Setup.exe with the options /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS=<local_admin_account_name> /SQLCOLLATION=SQL_Latin1_General_Cp850_BIN2, where local_admin_account_name is the administrator account used when deploying the virtual machine. Run the system stored procedure sp_helpsort to verify the collation change.
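Laid out as a command line, the rebuild described above looks like this, with the placeholders kept as in the text:

```
:: Run from the Setup Bootstrap folder for your SQL Server version
Setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER ^
  /SQLSYSADMINACCOUNTS=<local_admin_account_name> ^
  /SQLCOLLATION=SQL_Latin1_General_Cp850_BIN2
```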

If you choose to enable SQL Server transparent data encryption, which is fully supported by SAP, you can use Azure Key Vault to store the encryption keys. SQL Server has key vault connection functionality to enable key vault storage of TDE certificates.
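Provisioning the vault and key can be done with the Azure CLI; the vault, resource group, and key names below are illustrative, and the connector configuration itself then happens inside SQL Server:

```
# Create a key vault and an RSA key for storing TDE key material
az keyvault create --name sap-tde-vault --resource-group sap-rg --location westeurope
az keyvault key create --vault-name sap-tde-vault --name tde-key --kty RSA --size 2048
```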

Here we have approximate virtual machine sizes for running a SQL Server database as a backend for SAP Business One. As you can see, CPU and RAM requirements grow roughly geometrically with user numbers.

As with a SQL Server setup, it is preferable to place the operating system, DBMS, and system databases on separate disks for an Oracle DB deployment. In the case of smaller VMs with lower attached disk capacity, you can place the Oracle home, stage, saptrace, saparch, sapbackup, sapcheck, or sapreorg directories on the OS disk, making sure the OS disk is at least 127 GB in size. Data and redo log files need to be on separate disks, while tempfiles can go on the non-persistent D: drive.

Here we have the minimum recommended disk configuration for Oracle on a Windows Server. I/O operations per second should drive storage and VM selection. If you can meet your throughput requirements with one disk, then that's fine. 

When running Oracle on a Linux VM, the same advice applies regarding not installing Oracle-related files on the boot disk. However, if a small VM size means fewer attachable disks, files with low I/O requirements can go on the boot disk. In this case, you'll likely have to increase the OS disk size from the default of 30 GB so you can add a partition to store the Oracle binaries. Oracle Linux Unbreakable Enterprise Kernel 4 is the minimum version needed to support Azure Premium SSDs. To use Azure NetApp Files, you must run Oracle Linux 8.2 or later and Oracle DB 19c or later.
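Growing the OS disk can be done with the Azure CLI while the VM is deallocated. A sketch, where the VM, resource group, and disk names are illustrative and the target size is an example:

```
# Resize the OS disk of a deallocated Linux VM, then start it again
az vm deallocate --resource-group sap-rg --name oracle-vm
az disk update --resource-group sap-rg --name oracle-vm_OsDisk --size-gb 64
az vm start --resource-group sap-rg --name oracle-vm
```

After the VM starts, the extra space still has to be partitioned and formatted inside the guest OS before the Oracle binaries can be placed on it.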

About the Author

Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.