Deploy and Migrate an SAP Landscape to Azure
After planning and researching the migration of an SAP landscape to Azure, words must become action. In Deploy and Migrate an SAP Landscape to Azure, we look at how crucial infrastructure components can be deployed and configured in preparation for migrating servers and data from on-premises to the Azure cloud.
This course looks at deployment and migration options, and the tools and services available within the Azure and broader Microsoft ecosystem that will save you time and effort. We touch on SAP-specific issues you need to be aware of and general best practices for Azure resource deployment. This course builds on the Designing a Migration Strategy for SAP and Designing an Azure Infrastructure for SAP courses.
- Understand the methods for deploying VMs and prerequisites for hosting SAP
- Learn about ExpressRoute, Azure Load Balancer, and Accelerated networking
- Understand how to deploy Azure resources
- Learn about Desired State Configuration and policy compliance
- Learn about general database and version-specific storage configuration in Azure
- Learn about the SQL Server Migration Assistant and Azure Migration tools
This course is intended for anyone looking to migrate their SAP infrastructure to Azure.
Settling on the required servers for your SAP environment, whether virtual or bare metal, is just one ingredient. Network design for both the finished landscapes and the migration process is also of critical importance. Networking covers application server access to database servers, user access to the SAP system, load balancing, high availability configuration, and any one-time or temporary configuration for migration.
There are lessons learned from many past SAP deployments that are considered best practices.
- Virtual networks that host SAP systems should not be exposed to the internet.
- VMs hosting SAP databases should be in the same virtual network as SAP application servers but in different subnets. If you choose to place the application servers in a separate virtual network from the database server, which isn't recommended, then those virtual networks must be peered. As a side note, traffic between peered networks is not free and has the potential to incur significant costs with large volumes of transferred data.
- Within a virtual network, VMs should have a static private IP address. The address should be assigned to the NICs through Azure and NOT via the VM's operating system.
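As a sketch of assigning the address through Azure rather than the guest OS, the Azure CLI can update a NIC's IP configuration directly. All resource names and addresses below are hypothetical; specifying an explicit private IP switches the allocation method to static.

```shell
# Assign a static private IP to a NIC through Azure (names hypothetical).
# The address must fall within the subnet's address range.
az network nic ip-config update \
  --resource-group sap-prod-rg \
  --nic-name sap-db-nic \
  --name ipconfig1 \
  --private-ip-address 10.0.2.10
```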
- Assign multiple NICs to database VMs to isolate traffic. Each NIC has its own IP address and is assigned to a different subnet. Then the subnet's network security group rules handle the routing.
- Assigning multiple NICs to a VM also means you can set one of them to DHCP. The Azure Backup service needs the primary NIC to be set to DHCP.
- Firewall rules should NOT be set up on the database VMs to enforce routing. Network security groups should be used to implement traffic routing.
- Do not place a network virtual appliance between the database and application servers. The communication path between the DB and application layers must be as direct as possible to ensure overall system performance isn't compromised. Network security group and application security group rules can be placed between these two layers as long as they don't impinge on direct communication.
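The subnet-level traffic rules described above can be sketched with a network security group rule allowing the application subnet to reach the database subnet. This is a minimal, hypothetical example; the subnet ranges and the HANA ports (30013/30015, which assume instance number 00) would depend on your landscape.

```shell
# Allow SAP application servers (10.0.1.0/24) to reach HANA ports on the
# database subnet (10.0.2.0/24). All names and ranges are hypothetical.
az network nsg rule create \
  --resource-group sap-prod-rg \
  --nsg-name sap-db-nsg \
  --name allow-app-to-db \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.0.1.0/24 \
  --destination-address-prefixes 10.0.2.0/24 \
  --destination-port-ranges 30013 30015
```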
There are four main ExpressRoute connection methods. The first three are of the service provider model type, meaning you connect through a third party like a telecommunications provider.
- Cloud exchange co-location is where your current hosting provider has a cloud exchange service that enables them to connect directly to the cloud provider – in this case, Microsoft. A cloud exchange cross-connection to Azure can be either layer 2 or managed layer 3.
- Point-to-point Ethernet connection allows you to connect to Azure via a layer 2 or managed layer 3 point-to-point ethernet link.
- Any-to-any (IPVPN) connection is when you integrate your wide area network with Azure by establishing a Multiprotocol Label Switching VPN connection to an ExpressRoute peering partner.
- Direct from ExpressRoute allows you to connect directly to Microsoft's global network via specific peering locations. This relatively new model offers the largest potential bandwidth – up to 100Gbps.
The on-premises connection to the ExpressRoute circuit should have at least a 1Gbps bandwidth, and preferably higher. The ExpressRoute circuit is connected to the hub of the virtual network through an ExpressRoute Gateway.
The ExpressRoute virtual network gateway can be provisioned using one of three SKUs: Standard, High Performance, and Ultra Performance. Here is a table of those SKUs with their product codes and performance metrics. As you can see, there is a significant performance step between each gateway variant. Most notably, only the Ultra Performance SKU supports FastPath. The FastPath feature, when activated, bypasses the gateway, sending network traffic straight to VMs in the virtual network, improving data throughput performance. The performance numbers presented here represent what is possible under ideal conditions.
You can change a gateway SKU using the Resize-AzVirtualNetworkGateway cmdlet, specifying the gateway and the new size with either the SKU name or product code. The basic gateway SKU has been deprecated.
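Alongside the PowerShell cmdlet mentioned above, the Azure CLI offers an equivalent way to resize a gateway. This is a sketch with hypothetical resource names; `--sku` takes the SKU name.

```shell
# Resize an existing ExpressRoute gateway to a larger SKU (names hypothetical).
az network vnet-gateway update \
  --resource-group sap-prod-rg \
  --name sap-er-gateway \
  --sku UltraPerformance
```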
Prior to creating an ExpressRoute gateway, you need to deploy a subnet containing the IP addresses to be used by the gateway VMs and services. This subnet must be called "GatewaySubnet" so that Azure deploys those VMs and services to the correct subnet. It is recommended not to associate any network security groups with the gateway subnet. Doing so may cause the gateway to exhibit unintended behavior, that is, stop working correctly. When you create the gateway subnet, you need to specify how many IP addresses it contains by using the CIDR or "forward slash" notation, like 10.0.3.0/27.
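The subnet and gateway creation steps above might look like the following with the Azure CLI. Only the subnet name GatewaySubnet is mandatory; every other name and address here is hypothetical, and an ExpressRoute gateway also needs a public IP resource.

```shell
# Create the required GatewaySubnet (the name is mandatory) with a /27 range.
az network vnet subnet create \
  --resource-group sap-prod-rg \
  --vnet-name sap-hub-vnet \
  --name GatewaySubnet \
  --address-prefixes 10.0.3.0/27

# Deploy the ExpressRoute gateway into that subnet (names hypothetical).
az network vnet-gateway create \
  --resource-group sap-prod-rg \
  --name sap-er-gateway \
  --vnet sap-hub-vnet \
  --gateway-type ExpressRoute \
  --sku HighPerformance \
  --public-ip-address sap-er-gw-pip
```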
When it comes to using ExpressRoute Global Reach for HANA System Replication without additional firewalls or proxies, or copying or refreshing data between HANA Large Instances in different Azure regions, there are a couple of network factors to keep in mind. Firstly, you must provide a /29 address range (six usable IP addresses) that doesn't overlap with any address ranges in your Azure or on-premises network. You cannot advertise any on-premises routes with private Autonomous System Numbers between 65000 and 65020 inclusive, or 65515. Connecting directly to a HANA Large Instance from on-premises does incur significant charges, and you should consult the latest Global Reach Add-On pricing. Setting Global Reach up for either of these scenarios will require opening a HANA Large Instance support request.
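For the circuit-to-circuit case, linking two ExpressRoute circuits with Global Reach can be sketched with the Azure CLI as follows. The circuit names, resource IDs, and the non-overlapping /29 range are all hypothetical.

```shell
# Link two ExpressRoute circuits via Global Reach (all names hypothetical).
# --address-prefix must be a /29 that overlaps nothing in your networks.
az network express-route peering connection create \
  --resource-group sap-prod-rg \
  --circuit-name circuit-region1 \
  --peering-name AzurePrivatePeering \
  --name global-reach-conn \
  --peer-circuit "/subscriptions/<sub-id>/resourceGroups/rg2/providers/Microsoft.Network/expressRouteCircuits/circuit-region2" \
  --address-prefix 172.16.0.0/29
```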
An ExpressRoute circuit is highly available with the built-in redundancy of dual connections from the provider partner. It is up to the customer to make sure that their connection to the ExpressRoute provider doesn't become the single point of failure.
When you use private virtual IP addresses, you will also need to configure an Azure Load Balancer to enable database high availability with replication using HANA System Replication or SQL Server Always On features. While there are two load balancer products available in Azure, Basic and Standard, it is recommended to use the standard load balancer. The basic load balancer is quite restrictive and is unsuitable for production deployments. Azure standard load balancer supports up to a thousand instances, over three times more than the basic load balancer. TCP connections stay alive even when all the health probes are down. The standard balancer supports availability zones and high availability ports for internal load balancing. Most management operations will complete in under 30 seconds, which is about twice as fast as the basic balancer. There is no SLA with the basic balancer but a 99.99% SLA for the standard balancer. While the basic load balancer is free and the standard one isn't, it's really a case of you get what you pay for. In the context of SAP and performance in general, the main advantage of the standard balancer is that traffic is not routed through the balancer itself.
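An internal Standard load balancer for a HANA System Replication pair might be set up along these lines with the Azure CLI. This is a sketch: all names and addresses are hypothetical, the probe port follows the common 625<instance number> Pacemaker convention, and the HA ports rule (protocol All, ports 0) is a Standard-SKU-only feature mentioned above.

```shell
# Internal Standard load balancer for a HANA HSR pair (names hypothetical).
az network lb create \
  --resource-group sap-prod-rg \
  --name hana-ilb \
  --sku Standard \
  --vnet-name sap-prod-vnet \
  --subnet sap-db-subnet \
  --frontend-ip-name hana-frontend \
  --private-ip-address 10.0.2.20 \
  --backend-pool-name hana-backend

# Health probe on the cluster's probe port (625<nr> convention, instance 03).
az network lb probe create \
  --resource-group sap-prod-rg \
  --lb-name hana-ilb \
  --name hana-health-probe \
  --protocol Tcp \
  --port 62503

# HA ports rule: protocol All with ports 0 balances every port, with
# floating IP enabled for the virtual IP used by the cluster.
az network lb rule create \
  --resource-group sap-prod-rg \
  --lb-name hana-ilb \
  --name hana-ha-ports \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name hana-frontend \
  --backend-pool-name hana-backend \
  --probe-name hana-health-probe \
  --floating-ip true \
  --idle-timeout 30
```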
Under most circumstances, VMs hosting an SAP system should not be exposed to the internet with a public IP address. They sit in an internal backend pool with no outbound connectivity. Having said that, there are some situations where outbound connectivity is required: for example, Azure Backup and Azure Site Recovery connectivity, or patching the OS from a public repository, although best practice says to use an OS patching server. There are several options for enabling outbound-only traffic to a public endpoint.
You can use an additional load balancer for public traffic with outbound-only rules. You can connect multiple virtual machines belonging to one subnet to enable outbound connectivity. Network security groups can then be used to enforce which public endpoints the VMs can access. This can be done at either the subnet level or for each VM. If the public endpoint is an Azure service, you should use the appropriate service tag in the network security group rather than specific IP addresses to simplify the rules. Service tags are constants that represent groups of IP address prefixes for specific Azure services, for example, AzureBackup, AzureActiveDirectory, and Storage.
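Using a service tag in place of IP prefixes can be sketched as an outbound NSG rule like the one below. Resource names, priority, and the choice of the AzureBackup tag are illustrative.

```shell
# Allow outbound traffic to the Azure Backup service via its service tag,
# avoiding hard-coded IP ranges (names hypothetical).
az network nsg rule create \
  --resource-group sap-prod-rg \
  --nsg-name sap-app-nsg \
  --name allow-azure-backup \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureBackup \
  --destination-port-ranges 443
```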
The network security group will manage inbound traffic while rules on the load balancer take care of outbound traffic. At this time, it's not possible to set an outbound rule on an Azure load balancer through the portal. However, you can set one using the Azure CLI. This is what an outbound rule would look like. One important caveat with the public load balancer solution is that logging outbound traffic through a third-party auditing tool may not be possible.
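An outbound rule created through the Azure CLI, as mentioned above, might look like this. The load balancer, frontend, and backend pool names are hypothetical, and the port allocation is illustrative.

```shell
# Outbound-only rule on a public Standard load balancer (names hypothetical).
az network lb outbound-rule create \
  --resource-group sap-prod-rg \
  --lb-name public-outbound-lb \
  --name sap-outbound \
  --frontend-ip-configs public-frontend \
  --address-pool sap-outbound-pool \
  --protocol All \
  --idle-timeout 5 \
  --outbound-ports 10000
```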
Another option is using an Azure firewall to enforce outbound rules while restricting inbound internet traffic. This solution also has a number of quirks. The firewall must reside in a subnet called AzureFirewallSubnet. Cost may be an issue when it comes to transferring large amounts of data, so it's best to consult firewall pricing to see if this will be a showstopper. Like the public load balancing solution, implementing a centralized solution for auditing outbound traffic might not be possible.
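Deploying the firewall might be sketched as below. The subnet name AzureFirewallSubnet is mandatory and requires at least a /26 range; the other names are hypothetical, and the `az network firewall` commands assume the azure-firewall CLI extension is installed.

```shell
# The firewall subnet must be named AzureFirewallSubnet (/26 minimum).
az network vnet subnet create \
  --resource-group sap-prod-rg \
  --vnet-name sap-hub-vnet \
  --name AzureFirewallSubnet \
  --address-prefixes 10.0.4.0/26

# Deploy the firewall into the hub network (names hypothetical;
# requires the azure-firewall CLI extension).
az network firewall create \
  --resource-group sap-prod-rg \
  --name sap-hub-firewall \
  --vnet-name sap-hub-vnet
```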
A third option that can be implemented in conjunction with Pacemaker clustering is using a proxy to enable outbound calls to the Azure management API public endpoint. There are a bunch of issues to consider with this scenario. First and foremost are latency and availability. Pacemaker is a high availability solution, so any introduced element that detracts from that core functionality will be detrimental and could ultimately be a point of failure, or at least ongoing frustration. A hybrid design of an on-premises proxy and an Azure hosted Pacemaker cluster could introduce too much latency. A proxy is a reasonably complex solution, so unless a corporate proxy is already in place, then implementing a proxy solely for this purpose might not be worth the added complexity and extra cost. The proxy will only handle HTTP and HTTPS calls, so if some outbound traffic requires other protocols, this solution won't work. The proxy must be configured for outbound traffic to management.azure.com and login.microsoftonline.com.
It is highly recommended to enable accelerated networking to reduce network latency and increase transmission speeds, up to 30 Gbps. Accelerated networking reduces jitter and CPU utilization, as there is no processing of network traffic by the virtual switch on the host. Only a subset of VM SKUs support accelerated networking. Those SKUs belong to the following series.
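Enabling accelerated networking on an existing NIC can be sketched with the Azure CLI as follows, assuming the attached VM belongs to a supported series. The resource names are hypothetical.

```shell
# Enable accelerated networking on a NIC (names hypothetical).
# The attached VM must be a supported SKU and should be deallocated first.
az network nic update \
  --resource-group sap-prod-rg \
  --name sap-app-nic \
  --accelerated-networking true
```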
Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good quality reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.