Networking

Contents

Azure Infrastructure for SAP Workloads
1. Introduction (Preview, 2m 44s)
2. Project Plan (Preview, 6m 51s)
4. Storage (11m 3s)
5. Networking (16m 17s)
7. Summary (3m 8s)
Difficulty: Intermediate
Duration: 58m
Students: 264
Rating: 5/5
Description

Designing the Azure infrastructure for SAP workloads is a crucial and fundamental step in the migration process. SAP systems are complex and demanding, relying on high-speed and high-capacity hardware and networks.

In this course, we look at the SAP-specific Azure products and features, as well as how generic Azure services can be utilized to architect a high-performance, resilient, and secure environment to host SAP workloads. Microsoft has been a provider of SAP hosting infrastructure for many years and, as such, offers a range of solutions for hosting everything from very modest landscapes to the biggest in the cloud.

Learning Objectives

  • Understand the elements of a migration project plan
  • Learn about the SAP-certified compute options available on Azure
  • Learn about the Azure storage options available and which ones to use for various scenarios
  • Understand how to create a performant SAP network using Azure
  • Learn about different SAP system deployment models in the context of infrastructure design

Intended Audience

This course is intended for anyone looking to understand which Azure infrastructure options are available, certified, and suitable for various SAP deployments.

Prerequisites

To get the most out of this course, you should have a basic understanding of Azure and SAP.

https://cloudacademy.com/course/assessing-your-current-sap-landscape-1567/

 

Transcript

Before we dive into SAP-specific networking topics, I want to do the briefest of brief overviews of networking in Azure. For the most part, Azure networking is based on the virtual network, often abbreviated to vNet. Explicitly or not, most Azure resources, like virtual machines, reside inside a virtual network. The default behavior of a vNet is to allow outbound traffic from the resources within it. Inbound traffic can be directed to a resource if it has been assigned a public IP address. Apart from virtual machines, a public-facing resource can be a load balancer or some other network-related service.

When you provision a virtual machine, you also have to provision a virtual NIC, or network interface, to connect the VM to the vNet. NICs are independent resources that can be assigned to virtual devices other than VMs, like gateways. A virtual machine can have more than one network interface, but if it does, one of the NICs is designated as the primary one. When I said the NIC allows a VM to connect to a vNet, that wasn't strictly correct: a VM actually connects to a subnet within the vNet. One subnet is automatically created when you create a vNet, and it defines a smaller range of addresses within the vNet's address space. The concept of dividing a virtual network into smaller segments known as subnets is crucial from a network performance and management perspective. Each subnet in a vNet has a routing table that is automatically created and populated with default system routes. You can't delete system routes, but you can override them with custom routes.
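To make the vNet and subnet concepts concrete, here is a minimal sketch using the azure-mgmt-network Python SDK. The subscription ID, resource group, region, and address ranges are all hypothetical placeholders, not values from the course.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# hypothetical subscription and resource group for illustration
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "sap-demo-rg"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# create a vNet with a /16 address space and a single /24 subnet inside it
vnet = client.virtual_networks.begin_create_or_update(
    RESOURCE_GROUP,
    "sap-vnet",
    {
        "location": "westeurope",
        "address_space": {"address_prefixes": ["10.1.0.0/16"]},
        "subnets": [{"name": "sap-app-subnet", "address_prefix": "10.1.1.0/24"}],
    },
).result()

print(f"Created vNet {vnet.name} with subnet {vnet.subnets[0].name}")
```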

When a vNet is connected to another vNet or to an on-premises network, its address range becomes part of the combined multi-network address space. This means you need to make sure the address ranges of the interconnected networks don't overlap.

There is no security boundary between subnets within a virtual network, so VMs on different subnets within the same vNet can talk to each other. Security boundaries between subnets can be implemented using network security groups (NSGs). A network security group has inbound and outbound security rules, not dissimilar to firewall rules, and each rule has a priority number that determines the order in which it is applied, with lower numbers evaluated first. An NSG comes with a set of undeletable default rules that sit at the bottom of the priority order, so they can be overridden by custom rules. A network security group can be associated with subnets and network interfaces.
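As a sketch of what rules and priorities look like in practice, here is a hypothetical NSG that allows only the SAP dispatcher port from an assumed on-premises range, then gets associated with a subnet. All names and ranges are placeholders; priority 100 is evaluated before the default rules, which sit at 65000 and above.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "sap-demo-rg"                            # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# an NSG with one custom inbound rule; priority 100 beats the default rules
nsg = client.network_security_groups.begin_create_or_update(
    RESOURCE_GROUP,
    "sap-app-nsg",
    {
        "location": "westeurope",
        "security_rules": [{
            "name": "allow-sap-dispatcher",
            "priority": 100,
            "direction": "Inbound",
            "access": "Allow",
            "protocol": "Tcp",
            "source_address_prefix": "192.168.0.0/24",   # hypothetical on-prem range
            "source_port_range": "*",
            "destination_address_prefix": "10.1.1.0/24",
            "destination_port_range": "3200",            # SAP dispatcher, instance 00
        }],
    },
).result()

# associate the NSG with the subnet so the rules apply to every VM in it
client.subnets.begin_create_or_update(
    RESOURCE_GROUP,
    "sap-vnet",
    "sap-app-subnet",
    {"address_prefix": "10.1.1.0/24", "network_security_group": {"id": nsg.id}},
).result()
```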

This brings us to the why of virtual networks and subnetting. Setting up different vNets with their own address spaces is ideal for separating dev, test, and production environments. The ability to divide a vNet into subnets and then apply security rules to all VMs within a subnet using network security groups gives you much finer control over access and makes management easier. Keeping subnet address ranges small also makes more efficient use of the address space.

From this exceptionally brief overview of virtual networking, you get the idea that cloud hosting is more than just VMs sitting in a refrigerated datacenter. In fact, Azure has all the components you need to build a datacenter of your own, a literal virtual datacenter. Along with the VMs and virtual networks, we have gateways, load balancers, firewalls, DNS servers, and more. In theory, you can mix and match Azure networking, computing, and storage resources to architect any kind of virtual datacenter. There are some basic guidelines that you should follow, so you don't end up with a malformed Frankenstein design.

Let's start with the component categories or functions of a virtual datacenter. First and foremost, our raison d'être or reason for being, is the workload, the work or job we want to accomplish. Next, we have the infrastructure that facilitates a workload and ties multiple workloads together to operate as a unified system. Because not all systems are open source and not everyone is honest, we need a way to segregate our virtual datacenter from other cloud users and protect and secure it from the wider Internet. Perimeter networks and security components such as firewalls, gateways, and network virtual appliances help us achieve that. Once the virtual datacenter is up and running, like any complex system, it needs to be monitored to ensure it is operating efficiently and that the appropriate maintenance and remedial actions can be applied in a timely fashion.

With these four basic precepts in mind, let's now turn our attention to how infrastructure components should and could be combined. While a virtual datacenter could consist of one virtual network, typically, they are made up of multiple vNets that segregate functionality and are based on a standardized design pattern called a topology.

In a mesh design, all virtual networks are connected to each other. While this design is flexible and open in terms of access from one vNet to another, implementing permissions and security is more complex and maintenance-intensive because there is no central location or hub to act as a common access point.

The hub-and-spoke network design has become the de facto topology for SAP deployments. The hub is a virtual network that serves as the connection point between the SAP workload spokes and the on-premises network. As such, it contains much of the ancillary infrastructure for securing and managing the network: the hub is where you'll place resources like DNS and NTP servers, security devices such as firewalls or network virtual appliances, intrusion detection and prevention services, and Azure Active Directory services. The SAP workloads are hosted in virtual network spokes; you can use different spokes for production, QA, and dev workloads, each connecting to the hub vNet using vNet peering, as shown in the sketch below. Microsoft's preferred and recommended method for connecting to the on-premises network is an ExpressRoute circuit. However, it is not uncommon to start with a site-to-site VPN, as it's faster to implement; ExpressRoute offers better reliability and security, higher bandwidth and throughput, and possibly lower latency. It is beneficial to use a jumpbox server as the point of ingress for the hub and workload spokes.
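In Azure terms, each spoke relationship is a pair of vNet peerings, one in each direction. Below is a hedged sketch using the same Python SDK as the earlier examples; the vNet names, resource IDs, and gateway-transit settings (the hub lending its ExpressRoute or VPN gateway to the spoke) are illustrative assumptions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "sap-demo-rg"                            # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# placeholder resource IDs for the two vNets being peered
HUB_VNET_ID = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
               "/providers/Microsoft.Network/virtualNetworks/hub-vnet")
SPOKE_VNET_ID = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
                 "/providers/Microsoft.Network/virtualNetworks/prod-spoke-vnet")

# peering from the hub to the production spoke
client.virtual_network_peerings.begin_create_or_update(
    RESOURCE_GROUP, "hub-vnet", "hub-to-prod-spoke",
    {
        "remote_virtual_network": {"id": SPOKE_VNET_ID},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "allow_gateway_transit": True,   # hub shares its gateway with the spoke
    },
).result()

# and the reverse peering from the spoke back to the hub
client.virtual_network_peerings.begin_create_or_update(
    RESOURCE_GROUP, "prod-spoke-vnet", "prod-spoke-to-hub",
    {
        "remote_virtual_network": {"id": HUB_VNET_ID},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "use_remote_gateways": True,     # spoke reaches on-premises via the hub's gateway
    },
).result()
```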

Where things start to get interesting is when we look at a HANA Large Instance spoke. An HLI is connected to your application servers via another ExpressRoute circuit. In essence, the bare-metal server running the HANA instance is treated like an on-premises environment, with a twist: for complete network isolation, the ExpressRoute circuit internal to Azure and the external circuit do not share routers. To achieve the required low-latency network performance, you must use the top-end UltraPerformance ExpressRoute gateway in conjunction with proximity placement groups. The UltraPerformance gateway has a feature called FastPath that allows network traffic to bypass the gateway and be sent directly to the application server VMs. FastPath is the gateway you have when you don't want a gateway.
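At the API level, FastPath is not a separate product but a flag on the ExpressRoute connection. The sketch below shows the general shape with the Python SDK; the subnet, public IP, and circuit IDs are placeholders, and the exact set of required fields may vary between API versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "sap-demo-rg"                            # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

GATEWAY_SUBNET_ID = "...id of the vNet's GatewaySubnet..."   # placeholder
GATEWAY_PIP_ID = "...id of a public IP for the gateway..."   # placeholder
CIRCUIT_ID = "...id of the ExpressRoute circuit..."          # placeholder

# an UltraPerformance ExpressRoute gateway in the workload vNet
gateway = client.virtual_network_gateways.begin_create_or_update(
    RESOURCE_GROUP, "sap-er-gateway",
    {
        "location": "westeurope",
        "gateway_type": "ExpressRoute",
        "sku": {"name": "UltraPerformance", "tier": "UltraPerformance"},
        "ip_configurations": [{
            "name": "gw-ipconfig",
            "subnet": {"id": GATEWAY_SUBNET_ID},
            "public_ip_address": {"id": GATEWAY_PIP_ID},
        }],
    },
).result()

# connect the gateway to the circuit with FastPath enabled, so data-plane
# traffic bypasses the gateway and goes straight to the VMs
client.virtual_network_gateway_connections.begin_create_or_update(
    RESOURCE_GROUP, "sap-er-connection",
    {
        "location": "westeurope",
        "connection_type": "ExpressRoute",
        "virtual_network_gateway1": {"id": gateway.id},
        "peer": {"id": CIRCUIT_ID},
        "express_route_gateway_bypass": True,   # this flag is FastPath
    },
).result()
```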

As you can see, the internal HANA traffic, represented by the red line, is being routed from the enterprise edge routers directly to the application subnet within the workload spoke vNet. It's important to note here that the HANA Large Instance is not connecting to the hub but to the workload spokes. Bypassing the central hub is necessary to deliver the high-performance functionality, as user-defined routes are not currently supported by FastPath. In a single SAP system scenario, the ExpressRoute gateway should not be a network bottleneck.

However, putting multiple systems like dev, QA, and production in one virtual network, separated by subnets, could result in the gateway becoming a bottleneck, especially when doing large data transfers from the HANA Large Instance. A divide-and-conquer strategy of setting up additional vNets, distributing the SAP systems among them, and dedicating a vNet to backups will mitigate bottleneck issues.

As we can see, the HANA Large Instance is not directly accessible from an on-premises computer, say one running SAP Solution Manager, because of transitive routing restrictions: traffic cannot be routed from one virtual network to another via a third vNet. A similar routing issue also applies to HANA Large Instances running in different Azure regions, which has implications for HANA system replication. However, we will look at how to circumvent this infrastructure hurdle using ExpressRoute Global Reach.

Transitive routing restrictions prevent direct access to a HANA Large Instance from an on-premises network or from a large instance deployed in a different Azure region. However, there are workarounds to enable remote access. One solution is a reverse proxy using a third-party product such as F5's BIG-IP Virtual Edition traffic manager or NGINX, both available from the Azure Marketplace. It can be deployed as a virtual firewall/traffic-routing solution in the virtual network between the HLI and on-premises networks.

Azure Firewall can also be used to route traffic between on-premises networks and an HLI.

Alternatively, you can deploy a Linux VM to the virtual network between the HANA Large Instance and the on-premises networks and, within the VM, define iptables rules to route traffic between the two networks. It's important to size the routing VM correctly so it can cope with the network traffic and not become a bottleneck.
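A minimal sketch of the kind of iptables configuration such a routing VM would need, wrapped in Python for consistency with the other examples. The two address ranges are hypothetical, the script must run as root on the Linux VM, and in practice you would also persist the rules across reboots.

```python
import subprocess

ONPREM_RANGE = "192.168.0.0/24"   # hypothetical on-premises address range
HLI_RANGE = "10.250.0.0/24"       # hypothetical HANA Large Instance range

def run(cmd):
    """Echo and execute a command, raising on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# enable IPv4 forwarding so the VM will route packets between its interfaces
run(["sysctl", "-w", "net.ipv4.ip_forward=1"])

# allow forwarded traffic in both directions between the two networks
run(["iptables", "-A", "FORWARD", "-s", ONPREM_RANGE, "-d", HLI_RANGE, "-j", "ACCEPT"])
run(["iptables", "-A", "FORWARD", "-s", HLI_RANGE, "-d", ONPREM_RANGE, "-j", "ACCEPT"])
```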

ExpressRoute Global Reach addresses three different scenarios. You can use it to connect multiple ExpressRoute circuits so that the on-premises endpoints can use the high-speed circuits to communicate with each other over a private network. It can be used to connect an on-premises network with a HANA Large Instance in another region. And you can use it to set up communication between HLIs deployed in different Azure regions. Before attempting any of these deployments, you need to request that Global Reach functionality be enabled on your ExpressRoute circuit, and keep in mind that the Global Reach service isn't free.
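Under the hood, enabling Global Reach means creating a connection between the private peerings of two ExpressRoute circuits. A hedged sketch with the Python SDK follows; the circuit and connection names are placeholders, and the /29 address prefix stands in for the spare range Global Reach requires for the connection.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "sap-demo-rg"                            # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

CIRCUIT1_PEERING_ID = "...id of circuit 1's AzurePrivatePeering..."  # placeholder
CIRCUIT2_PEERING_ID = "...id of circuit 2's AzurePrivatePeering..."  # placeholder

# connect the private peerings of the two circuits with Global Reach
client.express_route_circuit_connections.begin_create_or_update(
    RESOURCE_GROUP,
    "circuit-region-1",            # the circuit the connection hangs off
    "AzurePrivatePeering",         # Global Reach works over private peering
    "global-reach-to-region-2",    # name for the new connection
    {
        "express_route_circuit_peering": {"id": CIRCUIT1_PEERING_ID},
        "peer_express_route_circuit_peering": {"id": CIRCUIT2_PEERING_ID},
        "address_prefix": "172.16.0.0/29",   # spare /29 required by Global Reach
    },
).result()
```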

There is one important implication regarding security rules for access from on-premises. ExpressRoute utilizes the Microsoft backbone and enterprise edge routers for speed, so traffic doesn't go through your virtual networks. The implication is that security and network rules defined in network or application security groups within virtual networks will have no effect. You will need to define rules within on-premises firewalls to enforce access and permissions to the HLI.

Connecting two HANA Large Instances in different Azure regions is the same as connecting to on-premises assets. We can clearly see in this network diagram that ExpressRoute Global Reach does not go over the Internet, so communication between the two HLI tenants is isolated. As in the on-premises scenario, traffic doesn't go over a virtual network, so there is no opportunity to deploy NVAs or other Azure functionality to enforce security rules. The enterprise edge routers that connect each HANA Large Instance to its local region's application virtual network are themselves connected to each other with an ExpressRoute Global Reach circuit.

You can use Azure Firewall's threat intelligence-based filtering to dynamically block traffic from known bad sources. The Microsoft threat intelligence feed is a continuously updated blacklist of malicious IP addresses and domains, utilized by Azure Firewall, Security Center, and other services to provide dynamic and intelligent protection from external threats. If a list of known bad actors isn't sufficient, the Azure Marketplace has a range of products and services that allow you to inspect incoming payloads and take appropriate action. It's vital that any solution can integrate with your security information and event management (SIEM) system, whether that is Azure Sentinel or a third-party solution.
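Threat-intelligence filtering is a property of the firewall resource itself and can either just alert on or actively deny flagged traffic. Here is a sketch under the same placeholder setup; a real deployment also needs a dedicated AzureFirewallSubnet and a public IP, which are stubbed out below.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "sap-demo-rg"                            # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

FIREWALL_SUBNET_ID = "...id of the hub vNet's AzureFirewallSubnet..."  # placeholder
FIREWALL_PIP_ID = "...id of a public IP for the firewall..."           # placeholder

# deploy Azure Firewall with threat-intelligence filtering set to Deny, so
# traffic to or from known-bad IPs and domains is blocked, not just logged
client.azure_firewalls.begin_create_or_update(
    RESOURCE_GROUP, "hub-firewall",
    {
        "location": "westeurope",
        "threat_intel_mode": "Deny",   # "Alert" logs only; "Off" disables the feed
        "ip_configurations": [{
            "name": "fw-ipconfig",
            "subnet": {"id": FIREWALL_SUBNET_ID},
            "public_ip_address": {"id": FIREWALL_PIP_ID},
        }],
    },
).result()
```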

Azure Application Gateway is a versatile solution for public-facing endpoints, combining security, load balancing, and dynamic scaling with a range of other features that simplify frontend infrastructure. From a protection perspective, its Web Application Firewall implements the core OWASP rules, and the gateway supports SSL/TLS termination, allowing traffic to flow unencrypted to the hosted services behind it. In terms of routing, it supports redirection, multiple-site hosting, and URL-based routing. Connection performance is enhanced with session affinity, which uses gateway-managed cookies to keep a user connected to the specific server where their session state is held. It supports WebSocket and HTTP/2 traffic, allowing better interactive bidirectional communication between web server and client without the need for polling, as is the case with plain HTTP. Application Gateway Standard_v2 can scale out or in according to changing traffic loads, and a Standard_v2 gateway can span multiple Availability Zones, meaning you don't need to deploy a gateway per zone.
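The features just described map onto a handful of fields on the Application Gateway resource. The fragment below shows only those fields as a plain Python dict; a deployable gateway also needs frontend IP configurations, listeners, backend pools, and routing rules, which are omitted here, and the capacity numbers are illustrative.

```python
# the parts of an Application Gateway definition relevant to this discussion;
# not a complete, deployable resource on its own
app_gateway_properties = {
    "location": "westeurope",
    # WAF_v2 tier gives both the Web Application Firewall and v2 autoscaling
    "sku": {"name": "WAF_v2", "tier": "WAF_v2"},
    # scale out/in between 2 and 10 instances as traffic changes
    "autoscale_configuration": {"min_capacity": 2, "max_capacity": 10},
    # WAF in Prevention mode using the OWASP core rule set
    "web_application_firewall_configuration": {
        "enabled": True,
        "firewall_mode": "Prevention",
        "rule_set_type": "OWASP",
        "rule_set_version": "3.2",
    },
    # a single v2 gateway can span multiple availability zones
    "zones": ["1", "2", "3"],
}
```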

 

About the Author

Students: 19600
Courses: 65
Learning Paths: 13

Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software using web, mobile, and desktop technologies, he believes good-quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.