Designing the Azure infrastructure for SAP workloads is a crucial and fundamental step in the migration process. SAP systems are complex and demanding, relying on high-speed and high-capacity hardware and networks.
In this course, we look at SAP-specific Azure products and features, as well as how general-purpose Azure services can be used to architect a high-performance, resilient, and secure environment for hosting SAP workloads. Microsoft has been a provider of SAP hosting infrastructure for many years and, as such, offers solutions for hosting everything from very modest landscapes to the largest in the cloud.
- Understand the elements of a migration project plan
- Learn about the SAP-certified compute options available on Azure
- Learn about the Azure storage options available and which ones to use for various scenarios
- Understand how to create a performant SAP network using Azure
- Learn about different SAP system deployment models in the context of infrastructure design
This course is intended for anyone looking to understand the Azure infrastructure options that are available, certified, and suitable for various SAP deployments.
To get the most out of this course, you should have a basic understanding of Azure and SAP.
One good aspect of SAP's hardware certification requirement is that it makes selecting the correct compute resource that little bit easier. As Microsoft has been hosting SAP systems for some time, there is an extensive selection of certified virtual machines to choose from. While the advent of SAP HANA, the highly optimized in-memory DBMS, initially emphasized very powerful "bare metal" dedicated servers, these are really an edge case used by a small number of customers. As Azure evolves and matures, more and increasingly powerful VMs become available, enabling some customers to move their databases from physical servers to virtual ones.
Before choosing application and database server SKUs, you will have sized your current environment so you can select the appropriate machines. Let's start with virtual machines as they will cater to most scenarios.
A list of SAP-certified virtual machines can be found on both the Microsoft and SAP sites. Most of the VMs certified for HANA-related applications are from the E and M series, with one of each from the D and G series. The M series and E64s_v3 are specifically designed for use with SAP HANA, while other E series are suitable as application servers, as can be seen by the use case description from Microsoft.
From this VM specification matrix, we can see that there is more than just CPUs and RAM at stake. The maximum number of supported disks, IOPS, network interfaces, and possible network bandwidth all play an essential role in overall system performance. 12 TB is currently the most memory available on an Azure virtual machine, and the same VM offers up to 416 virtual CPUs.
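As a quick way to explore what is on offer, the Azure CLI can list available VM sizes and SKUs in a region. This is a minimal sketch: it assumes the Azure CLI is installed and you are signed in with `az login`, the region name is only an example, and certification status must still be cross-checked against the SAP and Microsoft certified-VM lists.

```shell
# List M-series SKUs (home of the largest SAP HANA-certified VMs)
# available in an example region. The region is illustrative.
az vm list-skus \
  --location westeurope \
  --resource-type virtualMachines \
  --size Standard_M \
  --output table

# Show vCPU, memory, and max data disk counts for all sizes in the region,
# useful when comparing candidates against your sizing results.
az vm list-sizes --location westeurope --output table
```

Remember that a size appearing in the CLI output only means it is available in that region, not that it is SAP-certified; the certification lists remain the source of truth.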
NetWeaver-based systems running AnyDB are less demanding. Far more VM options, right down to very cost-effective entry-level A series machines, are available. The lower-level machines are suitable as development servers, and unless you have a very small SAP installation, correctly sizing your existing landscape should preclude these machines from production.
To maximize your cloud investment, you need to minimize VM usage wherever you can. That is, when machines in non-production environments aren't in use, put them to sleep. VM snoozing could also be applied to some production application servers if your organization has erratic workloads, such as high demand peaks on particular weekdays or at specific times of the month. You will also want to investigate Azure Reserved VM Instances as another cost-saving measure. A VM can be reserved for a 1- or 3-year term, with Microsoft indicating savings of up to 72% over pay-as-you-go.
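Putting a machine to sleep can be as simple as deallocating it so compute billing stops. The sketch below assumes the Azure CLI, an existing VM, and illustrative resource group and VM names of my own choosing.

```shell
# Deallocating a VM releases its compute resources and stops compute
# billing, unlike a plain OS-level shutdown which keeps billing running.
az vm deallocate --resource-group sap-dev-rg --name sap-app-dev-01

# Start it again when the environment is needed.
az vm start --resource-group sap-dev-rg --name sap-app-dev-01

# Or schedule a recurring nightly auto-shutdown (time is UTC, HHMM).
az vm auto-shutdown --resource-group sap-dev-rg --name sap-app-dev-01 --time 1900
```

For fleets of non-production machines, the same deallocate/start pattern is usually wrapped in an automation schedule rather than run by hand.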
More than most, SAP systems rely on good networking with low latency. One way to improve latency is to have virtual machines physically close to each other – in proximity. Under normal circumstances, the precise location of a deployed VM is mostly irrelevant and outside your control beyond the regional level. A proximity placement group allows you to specify that a set of virtual machines should be deployed and maintained within a single data center, preserving a close physical relationship and thereby reducing network latency.
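In practice, you create the proximity placement group first and then deploy VMs into it. This is a hedged sketch using the Azure CLI: the resource group, group name, region, and VM names are illustrative, and the image reference is a placeholder you would substitute with a certified image URN.

```shell
# Create a proximity placement group in the target region.
az ppg create \
  --resource-group sap-prod-rg \
  --name sap-prod-ppg \
  --location westeurope \
  --type Standard

# Deploy a database VM into the group so it lands physically close to
# the application servers deployed with the same --ppg reference.
# The image name below is a placeholder, not a real URN.
az vm create \
  --resource-group sap-prod-rg \
  --name sap-db-01 \
  --image SlesForSapImagePlaceholder \
  --size Standard_M64s \
  --ppg sap-prod-ppg
```

Deploying every tier of an SAP system with the same `--ppg` reference is what keeps the database and application servers co-located.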
Suppose your sizing exercise indicates that a virtual machine will not be powerful enough to meet your HANA requirements. In that case, you can request an S-series physical server, or bare-metal machine, referred to as a HANA large instance (HLI) in the context of SAP. A HANA large instance is a physical server dedicated to one customer that resides within an Azure data center. As of February 2021, S896m is the largest HLI SKU, boasting 448 CPU cores and 896 threads with 24 TB of RAM. All Type I HLIs run either SUSE Linux Enterprise Server for SAP Applications or Red Hat Enterprise Linux for SAP HANA, whereas Type II units only run SUSE Linux at this time.
The server sits within an infrastructure stamp that includes storage and network components. Each large instance belongs to a tenant, and a virtual network prevents communication between tenants at the infrastructure stamp level, even when a customer has multiple tenants. The HLI accesses storage via dedicated virtual machines that have storage volumes assigned to them. A storage volume can only be assigned to one VM, and that VM is assigned to a single tenant.
To be clear, "your" server isn't a box sitting in the corner of a data center with a Post-it note on top reading "Bob's server" – assuming your name is Bob. An HLI infrastructure stamp can host multiple tenants, and those tenants are isolated using the aforementioned network and storage mechanisms.
To go down the HLI path, you will need at least a Microsoft Premier support contract, and if the HLI has 384 or more CPU cores, you will also have to sign up for Azure Rapid Response. Azure Rapid Response is a bit like a personalized concierge service where you are assigned a support team familiar with your environment. They can respond faster, in 15 minutes or less for critical issues, and generally provide better and more timely support.
Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.