Designing the Azure infrastructure for SAP workloads is a fundamental step in the migration process. SAP systems are complex and demanding, relying on high-speed, high-capacity hardware and networks.
In this course, we look at the SAP-specific Azure products and features, as well as how generic Azure services can be used to architect a high-performance, resilient, and secure environment for hosting SAP workloads. Microsoft has been a provider of SAP hosting infrastructure for many years and offers solutions that range from hosting modest landscapes to the largest in the cloud.
Learning Objectives
- Understand the elements of a migration project plan
- Learn about the SAP-certified compute options available on Azure
- Learn about the Azure storage options available and which ones to use for various scenarios
- Understand how to create a performant SAP network using Azure
- Learn about different SAP system deployment models in the context of infrastructure design
Intended Audience
This course is intended for anyone looking to understand the Azure infrastructure options available, certified, and suitable for various SAP deployments.
Prerequisites
To get the most out of this course, you should have a basic understanding of Azure and SAP.
Azure offers several types of persistent data storage geared towards different use cases.
- Blob storage is a general-purpose, relatively inexpensive storage for unstructured data like text or binary. The main use cases are images, documents, audio, video, and other high-volume applications like backups and archiving, where speed of access is not the priority.
- Azure Files allows you to set up a file share as you would on an on-premises network using SMB (Server Message Block). You can then share data like configuration files between resources such as VMs, but instead of being restricted to a particular network, files can be shared across the Internet using a URL in conjunction with a shared access signature token for security.
- Azure NetApp Files is a relatively new file storage service that can be likened to Azure Files on steroids, with some key differences. NetApp Files supports NFS as well as SMB, so volumes can be natively mounted on a Linux machine. The maximum file size is 16 TB, as opposed to 1 TB for standard Azure Files, and I/O operations are faster, with performance dependent on the selected service tier.
- Standard: 16 MiB per second of throughput per 1 TB of volume.
- Premium: 64 MiB per second of throughput per 1 TB of volume.
- Ultra: 128 MiB per second of throughput per 1 TB of volume.
Azure NetApp Files is initially provisioned with a minimum 4 TB capacity pool, with the ability to add capacity in 1 TB increments. The maximum size for a single capacity pool is 500 TB. If you need more throughput than the Ultra tier provides at your data size, you can over-provision the capacity to achieve the required performance. A capacity pool can then be divided into volumes that can be mounted on a VM. As of January 2021, NetApp Files cross-region replication is in public preview; it supports replication between certain fixed region pairs and requires you to request the feature.
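Because throughput scales linearly with provisioned volume size, over-provisioning is simple arithmetic. Here is a minimal sketch of that calculation in Python; the per-tier rates come from the list above, while the volume sizes are hypothetical examples.

```python
# Azure NetApp Files throughput arithmetic, as described above.
# Tier rates are MiB/s per provisioned TB of volume.
ANF_TIER_THROUGHPUT = {
    "Standard": 16,
    "Premium": 64,
    "Ultra": 128,
}

def anf_volume_throughput(tier: str, provisioned_tb: float) -> float:
    """Return the throughput (MiB/s) for a volume of the given tier and size."""
    return ANF_TIER_THROUGHPUT[tier] * provisioned_tb

# A 4 TB Ultra volume (the minimum capacity pool size) yields 512 MiB/s.
print(anf_volume_throughput("Ultra", 4))   # 512.0

# Over-provisioning: a workload holding only 2 TB of data that needs
# ~1,000 MiB/s can reach it by provisioning an 8 TB Ultra volume.
print(anf_volume_throughput("Ultra", 8))   # 1024.0
```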
- Queue storage is Azure's message queuing service, with the capacity to store millions of messages per queue, each up to 64 KB in size.
- Table storage, which has now been rolled into the Azure Cosmos DB product, is essentially a NoSQL service for storing structured data that can be accessed from within Azure and externally via a URL.
- A managed disk, or virtual hard disk, is analogous to a hard drive in a physical computer. It could be called an abstracted disk, as it's a layer of abstraction over blob storage that presents itself as a physical disk.
Virtual machines are not provisioned with persistent disk storage by default. The temp SSD column in the VM examples matrix refers to temporary, volatile storage used for the Linux swap or Windows page file; when the VM is deallocated, whatever is on that SSD is gone. When deploying an SAP VM, use Azure managed disks. By default, there are three copies of a managed disk, so redundancy is baked in. Managed disks also support VM availability sets and availability zones.
Managed disks come in four flavors. Ultra SSD is the most performant disk type, with the lowest latency and highest number of I/O operations per second (IOPS). Premium SSD is also high performance and can be used for most SAP applications; you must use Premium-level disks to qualify for Azure's single-VM SLA. Standard SSD is Azure's entry-level solid-state drive and a cost-effective alternative to Ultra and Premium in non-production environments. For the most part, standard hard drives (HDD) lack the low latency and IOPS performance needed for SAP.
It is possible to share an Azure managed disk between virtual machines, although this feature is limited to Ultra and Premium SSDs. These disks have a parameter called maxShares that specifies how many VMs can share the disk. maxShares has an upper limit that varies by disk size.
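As an illustration, here is a minimal sketch of creating a shared Premium SSD managed disk with the azure-mgmt-compute Python SDK. The subscription ID, resource group, disk name, size, and region are hypothetical placeholders, and model fields have changed across SDK releases, so check the version you use.

```python
# Sketch: create a Premium SSD managed disk shareable by two VMs.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import Disk, DiskSku, CreationData

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

disk = Disk(
    location="westeurope",
    sku=DiskSku(name="Premium_LRS"),
    creation_data=CreationData(create_option="Empty"),
    disk_size_gb=1024,
    max_shares=2,  # number of VMs that may attach this disk concurrently
)

poller = client.disks.begin_create_or_update(
    "my-resource-group", "shared-data-disk", disk
)
print(poller.result().provisioning_state)
```

Remember that the permitted maxShares value depends on the disk size, so a small shared disk may not support as many attached VMs as a large one.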
Let's now look at the redundancy that is built into Azure storage. All Azure storage has some kind of data redundancy built in, with at least 99.999999999% (eleven nines) durability of data for any given 12-month period. The options below map directly onto storage account SKUs, as shown in the sketch after this list.
- Locally redundant storage (LRS), as the name implies, means three copies of your data are synchronously replicated within a data center. LRS protects you against hardware failure, whether that is a physical drive or a complete server meltdown.
- Zone-redundant storage (ZRS) is the next step up in resiliency, synchronously replicating three copies of your data, each located in a different data center within the same Azure region. Each of these matched data centers is called an availability zone. Zone-redundant storage protects you against an entire data center failure, thereby increasing durability over locally redundant storage.
- Geo-redundant storage (GRS) is locally redundant storage where your data is also asynchronously copied to a data center in a paired secondary region. This gives you a backup if the whole primary region goes down.
- Read-access geo-redundant storage (RA-GRS) is geo-redundant storage where you have read access to the secondary region when Microsoft declares a primary region outage.
- Geo-zone-redundant storage (GZRS) is zone-redundant storage where your data is also asynchronously copied to a data center in a secondary region.
- Read-access geo-zone-redundant storage (RA-GZRS) is geo-zone-redundant storage where you have read access to the secondary region data when Microsoft declares a primary region outage.
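To make the mapping concrete, here is a minimal sketch using the azure-mgmt-storage Python SDK; the subscription ID, resource group, account name, and region are hypothetical placeholders.

```python
# Sketch: create a storage account with a chosen redundancy SKU.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The redundancy options above correspond to SKU names: Standard_LRS,
# Standard_ZRS, Standard_GRS, Standard_RAGRS, Standard_GZRS, Standard_RAGZRS.
poller = client.storage_accounts.begin_create(
    "my-resource-group",
    "mysapstorageacct",
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_GZRS"),  # geo-zone-redundant storage
        kind="StorageV2",
        location="westeurope",
    ),
)
print(poller.result().sku.name)
```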
Microsoft recommends using Azure managed disks in conjunction with virtual machines in an SAP deployment. In terms of resiliency, the main issue is that locally redundant storage is the only type of replication supported by managed disks, so for zone and regional resiliency, some other solution must be found.
Virtual machines also have a parameter called max data disks, which is the maximum number of managed disks you can attach to the VM. The larger or more powerful the VM specification, the more disks you can connect. For powerful machines, this should not be an issue, as the max disks number runs up to 32 and 64. However, when dealing with smaller machines and trying to increase IOPS performance by using multiple disks, it is possible to hit the limit. Another consideration is scaling down VM performance when CPU and memory are surplus to requirements: if the next step down in VM size also crosses a max disks threshold, leaving you with more managed disks than the VM can accommodate, you'll end up in a tricky situation, having to move and rearrange your data before you can scale down.

Max IOPS is another VM limit that could be reached when using multiple disks. For example, an M32ts can accommodate 32 disks. If you were to use P30 premium disks because they represent good value for money, that's 32 disks x 5,000 IOPS per disk = 160,000 IOPS, well in excess of the VM's 40,000 IOPS maximum. Eight P30s, or 8 TB, is the most storage you can have on an M32ts without exceeding max IOPS. In contrast, a single 8 TB P60 disk is provisioned for 16,000 IOPS, well short of the 40,000 provided by eight P30s. Cost is another way to look at it: sure, 8 TB is cheaper with a P60 at $860 versus $983 for eight P30s, but you're getting two and a half times the IOPS for only 14% more cost with the P30s. The point here is that VM selection significantly impacts how you configure hard drive storage.
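The disk arithmetic above is easy to reproduce. Here it is as a quick Python calculation; the P30/P60 IOPS and size figures are the published disk specs, while the prices are the approximate monthly figures quoted above and will vary by region and over time.

```python
# M32ts limits as quoted in the example above.
VM_MAX_DATA_DISKS = 32
VM_MAX_IOPS = 40_000

p30 = {"size_tb": 1, "iops": 5_000, "price": 983 / 8}   # ~$123 each
p60 = {"size_tb": 8, "iops": 16_000, "price": 860}

# Eight P30s hit the VM's IOPS ceiling exactly: 8 x 5,000 = 40,000.
n_p30 = min(VM_MAX_DATA_DISKS, VM_MAX_IOPS // p30["iops"])
print(n_p30, "x P30 ->", n_p30 * p30["size_tb"], "TB,",
      n_p30 * p30["iops"], "IOPS, $", round(n_p30 * p30["price"]))

# One P60 gives the same 8 TB of capacity but only 16,000 IOPS.
print("1 x P60 ->", p60["size_tb"], "TB,", p60["iops"], "IOPS, $", p60["price"])

# ~2.5x the IOPS for ~14% more cost.
print(round((n_p30 * p30["iops"]) / p60["iops"], 1), "x IOPS,",
      round((n_p30 * p30["price"] / p60["price"] - 1) * 100), "% more cost")
```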
Storage requirements for a VM running SAP HANA are pretty simple. You can use Premium or Ultra SSDs for the log, data, and shared volumes, with the caveat that Write Accelerator must be enabled when using Premium for /hana/log. You can also use NetApp Files volumes with Network File System (NFS) version 4.1 or greater for log, data, and shared, although NFS version 3 is sufficient for /hana/shared. You cannot mix and match Azure managed disks with NetApp volumes, so you can't have /hana/data on a NetApp volume and /hana/log on an Ultra disk. For recommended HANA disk configurations by VM type, see the Microsoft documentation hana-vm-operations-storage.
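As a small illustration, the rules in that paragraph can be encoded as checks. The following sketch is purely illustrative: the rule set and volume paths come from the text above, while the function and data structures are hypothetical, not part of any Azure API.

```python
def check_hana_layout(volumes: dict) -> list:
    """Validate a proposed /hana volume layout against the rules above.

    volumes maps a path like '/hana/log' to a dict with keys
    'backend' ('managed_disk' or 'netapp'), 'type', and
    optionally 'write_accelerator' (bool).
    """
    issues = []

    # Rule: don't mix managed disks and NetApp volumes across /hana paths.
    backends = {v["backend"] for v in volumes.values()}
    if len(backends) > 1:
        issues.append("Do not mix managed disks and NetApp volumes.")

    # Rule: Premium SSD for /hana/log requires Write Accelerator.
    log = volumes.get("/hana/log", {})
    if log.get("type") == "Premium SSD" and not log.get("write_accelerator"):
        issues.append("Premium SSD for /hana/log requires Write Accelerator.")

    return issues

layout = {
    "/hana/data": {"backend": "managed_disk", "type": "Premium SSD"},
    "/hana/log":  {"backend": "managed_disk", "type": "Premium SSD",
                   "write_accelerator": True},
}
print(check_hana_layout(layout) or "Layout OK")
```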
Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good-quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.