Design an Azure Infrastructure for SAP Workloads

System Deployment

Contents

Azure Infrastructure for SAP Workloads
  • Introduction
  • Project Plan
  • Storage
  • Networking
  • Summary
Overview

Difficulty: Intermediate
Duration: 58m
Students: 85
Rating: 5/5
Description

Designing the Azure infrastructure for SAP workloads is a crucial and fundamental step in the migration process. SAP systems are complex and demanding, relying on high-speed and high-capacity hardware and networks.

In this course, we look at the SAP-specific Azure products and features, as well as how generic Azure services can be utilized to architect a high-performance, resilient, and secure environment to host SAP workloads. Microsoft has been a provider of SAP hosting infrastructure for many years and, as such, offers a range of solutions for hosting everything from very modest landscapes to the biggest in the cloud.

Learning Objectives

  • Understand the elements of a migration project plan
  • Learn about the SAP-certified compute options available on Azure
  • Learn about the Azure storage options available and which ones to use for various scenarios
  • Understand how to create a performant SAP network using Azure
  • Learn about different SAP system deployment models in the context of infrastructure design

Intended Audience

This course is intended for anyone looking to understand the Azure infrastructure options available, certified, and suitable for various SAP deployments.

Prerequisites

To get the most out of this course, you should have a basic understanding of Azure and SAP.

https://cloudacademy.com/course/assessing-your-current-sap-landscape-1567/

 

Transcript

System deployment is no different in the cloud than on-prem. You can deploy the whole system, database, central services, and dialog instances on the same virtual machine as a standalone installation. You can go with a more traditional distributed structure where each component is deployed to its own VM. Or you can opt for a highly available solution, which is essentially a distributed installation with replicated redundancy spread across multiple geographic regions.

On the face of it, each of these deployments can vary widely in terms of performance, cost, and manageability.

Deploying an entire system to a single VM is the easiest installation, but database and application components will compete for access to the machine's resources, like memory. Scaling a VM with a complete system installed will be problematic and could lead to excessive downtime. There is also the issue of having multiple points of failure on one box. Any one of the SAP components having a bad hair day could have more of an impact on overall system performance than if they were on separate machines. Apart from not being highly available, this setup is limited to a single VM's SLA. Standalone is not recommended for large or production systems. However, it would be an interesting exercise to see how a powerful VM with a highly optimized hard drive architecture would perform against a more modestly spec'd distributed system. My intuition says the cost wouldn't be justified.

A distributed deployment where each component is deployed to a separate VM is typical of most installations. Having said that, components that use relatively minimal resources can share a VM. While the installation is a bit more involved due to provisioning more virtual machines, the real complexity comes down to tying the VMs together in a performant and secure network design. That means no network virtual appliances between the database and application servers, although network security groups are permitted in this communication path as long as legitimate traffic is not impeded or redirected.

There are several variations when it comes to high availability, and they mainly differ in the level of availability and associated costs. As I said earlier, this is based on a distributed system, so it is each component that is replicated rather than a mirror of the primary system, although you could argue that the net result is the same when talking about geo-redundancy. Database servers replicate to mirrors, while application server VMs are deployed in a failover cluster, or workloads are distributed amongst multiple servers through a load balancer. The specifics of high availability depend on the components and the deployed operating system.

SAP does support two methods of running multiple databases on one virtual machine, although this isn't recommended in a production environment. Multiple Components on One System (MCOS) is analogous to running multiple instances of MS SQL Server on one machine, where each instance hosts one database. Multiple Components in One Database (MCOD) hosts multiple databases on one database server instance – a common scenario for those familiar with most relational database systems.

In contrast to databases, having central services instances share a virtual machine isn't such an issue from a resource use perspective, as central services aren't so resource hungry. A shared or stacked central services configuration is called multi-SID. There are several ways this scenario can be implemented, and it very much depends on the VM's operating system. Windows servers use failover cluster functionality, and Red Hat and SUSE Linux distributions enable high availability with the Pacemaker HA extension out of the box, while this feature needs to be installed separately for Oracle Linux. All of the central services clustering solutions share a similar architecture. There are multiple VMs in a cluster, whether that is a Windows failover or Pacemaker cluster, with each VM having access to high-speed shared persistent storage. Depending on the scenario, that storage can be Azure shared disk, shared disk with replication, or NFS shared volumes. Each of the clusters is fronted by an Azure internal load balancer.

S/4HANA is the latest iteration of SAP's business solutions, and as the name suggests, is a redesign to work with SAP HANA. One of the key changes is the user interface, which has gone from an SAP GUI client to SAP Fiori, a browser-based interface. While the dedicated SAP GUI client can still be used with S/4HANA, deploying the newer SAP Fiori system will significantly impact system architecture from the database through to the frontend. When used exclusively with S/4HANA, in what's referred to as an embedded deployment, the HANA database server can host the Fiori database using SAP HANA Multitenant Database Containers. Making further use of the already provisioned SAP HANA infrastructure will decrease overall costs. This will have minimal impact on the SAP HANA main database as the Fiori system mainly functions as a proxy between the UI and the back end, with little database IO. The Fiori system can be deployed as a standalone component with its own database and application servers, connecting to multiple SAP back-end systems in a hub deployment. An SAP Fiori deployment must also support high availability to ensure overall operational resilience. There's no point in having a redundant back-end system that survives an outage if the users can't access it. 

As a browser-based UI, Fiori adds another layer to the system front end in the form of the SAP Web Dispatcher. The virtual machine or machines hosting the SAP Web Dispatcher need to be situated in the DMZ and have two network interfaces: one NIC for user traffic and one for interacting with the Fiori server. Network security groups should be associated with the subnets on either side of the web dispatchers to provide additional protection against malicious connections and traffic. As with the Fiori servers, the web dispatchers should be deployed in a highly available configuration to ensure end-to-end resilience, utilizing a load balancer between them and the Internet. In this situation, Azure Application Gateway is the ideal public-facing solution as it incorporates a multitude of security and routing features.

Use the SAP router application to enable TCP/IP access to an SAP instance hosted in Azure from on-premises when there isn't a direct IP network connection. The SAP router is a proxy, usually installed on the firewall host, that listens on port 3299, making connections to the remote system with a user-supplied connection string that specifies the host.
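As a rough sketch of how that looks in practice – the hostnames and the saprouttab path below are hypothetical placeholders, while port 3299 and the /H/.../S/... route string notation are the SAP router's standard conventions:

```shell
# Start the SAP router on the firewall host. The -R option points at the
# route permission table; /usr/sap/saprouter/saprouttab is a placeholder path.
saprouter -r -R /usr/sap/saprouter/saprouttab

# A client on-premises would then reach the Azure-hosted instance with a
# router string such as (saprouter.contoso.com and azsapapp01 are made up):
#   /H/saprouter.contoso.com/S/3299/H/azsapapp01
```

Each /H/ hop names a host to route through, and /S/ overrides the port for that hop, so longer chains of routers can be expressed by appending further /H/ segments.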

When it comes to licensing an SAP NetWeaver product on Azure, the process is very similar to an on-premises installation, except there are, as you can imagine, some differences in how the hardware key is derived and validated. The SAP hardware key is that of the virtual machine hosting the message server.

In an on-premises situation, the Windows hardware key is derived from the computer security identifier (SID) and local hostname, while on a Linux box, it's the MAC address of the first valid network card. For both OSs, this is the same for physical and virtual machines. If you change a Windows server's hostname or SID, or the primary NIC on a Linux box, the SAP license will become invalid. On Azure, the VM Unique ID is added to the parameters used to calculate a new SAP hardware key for a Windows server, while the Azure VM Unique ID replaces the MAC address parameter for Linux. The Azure VM Unique ID can be retrieved from within a VM or via Azure PowerShell or the CLI. Each newly created VM will have a new unique ID, no matter what method is used to create it. This has ramifications for VM backups, as they don't store or keep the VM ID. When you restore a VM, you are essentially creating a new instance with a new ID, thereby invalidating the SAP license. The VM ID won't change when the virtual machine is rebooted, shut down, started or stopped, deallocated, undergoes service healing, or is redeployed to a new host.
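For reference, these are the documented ways to read that VM Unique ID – the resource group and VM names here are placeholders:

```shell
# Azure CLI, from any machine with access to the subscription
# ("sap-prod-rg" and "azsapapp01" are hypothetical names):
az vm show --resource-group sap-prod-rg --name azsapapp01 \
  --query vmId --output tsv

# Azure PowerShell equivalent:
#   (Get-AzVM -ResourceGroupName "sap-prod-rg" -Name "azsapapp01").VmId

# From inside the VM itself, the Azure Instance Metadata Service
# returns the same value over the link-local endpoint:
curl -s -H Metadata:true \
  "http://169.254.169.254/metadata/instance/compute/vmId?api-version=2021-02-01&format=text"
```

All three return the same GUID, which is what SAP folds into the hardware key calculation described above.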

To get the current hardware key, run the saplicense program with the -get parameter from the command prompt as the <sid>adm user. Here we can see the hardware key returned, but if I add the TRACE=2 option, a file is generated, dev_slic, as in dev SAP license. Looking inside the trace file, you will find the VM ID. This is not the whole file, just an extract with the ID. The same commands are used on a Windows machine. In this dev_slic file, we can see the VM instance ID, and below that, the long identifier starting with S contains the parameters that go into calculating the hardware key. It begins with the Windows computer SID, followed by the hostname and then the VM ID. You can also use the Get-AzVM PowerShell command, specifying the resource group and VM name, to get the VM ID. Again, this is just an extract of the output.
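The commands just described look like this on the message server host, run as the <sid>adm user:

```shell
# Print the current SAP hardware key:
saplicense -get

# Add the trace option to also write the dev_slic trace file, which
# contains the parameters behind the key, including the Azure VM ID:
saplicense -get TRACE=2
```

The same invocations work on Windows; only the location of the generated dev_slic file differs with the instance's work directory.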

 

About the Author
Hallam Webber
Software Architect

Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.