Deploy and Migrate an SAP Landscape to Azure
After planning and researching the migration of an SAP landscape to Azure, words must become action. In Deploy and Migrate an SAP Landscape to Azure, we look at how crucial infrastructure components can be deployed and configured in preparation for migrating servers and data from on-premises to the Azure cloud.
This course looks at deployment and migration options, along with tools and services available within Azure and the broader Microsoft ecosystem that will save you time and effort. We touch on SAP-specific issues you need to be aware of and general best practices for Azure resource deployment. This course builds on the Designing a Migration Strategy for SAP and Designing an Azure Infrastructure for SAP courses.
- Understand the methods for deploying VMs and prerequisites for hosting SAP
- Learn about ExpressRoute, Azure Load Balancer, and accelerated networking
- Understand how to deploy Azure resources
- Learn about Desired State Configuration and policy compliance
- Learn about general database and version-specific storage configuration in Azure
- Learn about the SQL Server Migration Assistant and Azure Migration tools
This course is intended for anyone looking to migrate their SAP infrastructure to Azure.
I want to use accelerated networking as an example of how you can use a PowerShell script to create resources in Azure. In this demonstration, we will create a virtual machine with accelerated networking deployed to a virtual network. I've opened a PowerShell terminal and connected it to my Azure subscription. As we are starting from scratch, the first thing I need to do is create a resource group. That's the New-AzResourceGroup command with the name accelnetRG, located in the West US 2 region. Next, I'll create a subnet configuration and store it in a subnet variable. OK, that's a little disconcerting, but not to worry: it's just a warning saying that this command will change in the next breaking-change release but is still supported in the format I'm using. If you want to suppress those messages, you can use Set-Item to set the SuppressAzurePowerShellBreakingChangeWarnings environment variable to true. Now I'll create the virtual network with the name accelVnet, specifying my resource group name and the subnet via the subnet variable. I'm also going to need a security rule that allows me to connect to the virtual machine through the virtual network via Remote Desktop. I'll create that rule and store it in an RDP variable. Next, I'll create the network security group, specifying the rules with my RDP variable and storing the group in an appropriately named NSG variable. Once again, we see the warning messages as we associate the network security group with the virtual network using the Set-AzVirtualNetworkSubnetConfig command. That command gives us a JSON summary of the virtual network. We can see the subnet that's been created, and at the bottom, we can see that the virtual network has not been peered with another one. It would be quite concerning if it had been.
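A minimal sketch of the network setup narrated above might look like the following. The resource group, virtual network name, and region come from the demo (accelnetRG, accelVnet, West US 2); the subnet name, NSG name, address ranges, and rule details are my own assumptions, as the demo doesn't show them on screen.

```powershell
# Resource group in the demo's region
New-AzResourceGroup -Name "accelnetRG" -Location "westus2"

# Optional: suppress the breaking-change warnings mentioned above
Set-Item -Path Env:\SuppressAzurePowerShellBreakingChangeWarnings -Value "true"

# Subnet configuration stored in a variable (name and prefix assumed)
$subnet = New-AzVirtualNetworkSubnetConfig -Name "accelSubnet" -AddressPrefix "10.0.0.0/24"

# Virtual network using the subnet variable
$vnet = New-AzVirtualNetwork -Name "accelVnet" -ResourceGroupName "accelnetRG" `
    -Location "westus2" -AddressPrefix "10.0.0.0/16" -Subnet $subnet

# Security rule allowing inbound Remote Desktop (TCP 3389)
$rdp = New-AzNetworkSecurityRuleConfig -Name "Allow-RDP" -Protocol Tcp `
    -Direction Inbound -Priority 100 -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389 -Access Allow

# Network security group containing the RDP rule
$nsg = New-AzNetworkSecurityGroup -Name "accelNSG" -ResourceGroupName "accelnetRG" `
    -Location "westus2" -SecurityRules $rdp

# Associate the NSG with the subnet, then push the change to Azure
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "accelSubnet" `
    -AddressPrefix "10.0.0.0/24" -NetworkSecurityGroup $nsg
$vnet | Set-AzVirtualNetwork
```

Note that Set-AzVirtualNetworkSubnetConfig only modifies the in-memory object; piping the virtual network to Set-AzVirtualNetwork is what commits the association.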
To access the virtual machine, I will need a public IP address, which I will create, letting Azure decide what it will be with the Dynamic allocation method parameter. We set up accelerated networking when we create the network interface, using the EnableAcceleratedNetworking parameter. The subnet and public IP address are both referenced through the variables those resources were assigned to when they were created. Now that the virtual network has been set up, I'll create the virtual machine. First off, I need a credential to log into the virtual machine, and I'll get one with Get-Credential, which pops up a username and password dialog for me to enter the details. The next few commands are all about configuring the virtual machine. We create the configuration with the New-AzVMConfig command, specifying the machine's name and its SKU with the VMSize parameter. The following commands add details to the configuration variable. Here I'm setting the operating system and the credential with the cred variable. I'll use the Set-AzVMSourceImage command to tell the configuration that I want a Windows Server 2016 VM. Finally, I'll add the network interface to the configuration using the ID from the NIC variable that the network interface was assigned to.
And last but not least, let's create the virtual machine with the New-AzVM command, specifying the configuration, the resource group, and the location. That will take a little while to deploy, but when it's finished, we'll head over to the portal so we can log into the virtual machine and confirm that accelerated networking has been installed. It's just a case of going to the virtual machine, clicking connect, and selecting RDP for Remote Desktop. I'll enter the same credentials I used when creating the VM. Once I'm in, I'll open Device Manager and check for the existence of a Mellanox ConnectX-3 Virtual Function Ethernet Adapter. And there it is, sitting under network adapters. There is not much automation involved in running all these commands one by one. You could put them in one script file and run that, but Azure has even smarter ways of automating these types of tasks, and that's what we will look at shortly.
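The VM-creation steps narrated above can be sketched as follows. This assumes `$vnet` holds the virtual network created earlier; the public IP, NIC, and VM names, the image details, and the VM size are illustrative stand-ins for whatever the demo actually uses.

```powershell
# Public IP with a dynamically assigned address
$pip = New-AzPublicIpAddress -Name "accelPIP" -ResourceGroupName "accelnetRG" `
    -Location "westus2" -AllocationMethod Dynamic

# Accelerated networking is enabled on the network interface, not the VM itself
$nic = New-AzNetworkInterface -Name "accelNIC" -ResourceGroupName "accelnetRG" `
    -Location "westus2" -SubnetId $vnet.Subnets[0].Id `
    -PublicIpAddressId $pip.Id -EnableAcceleratedNetworking

# Credential dialog for the VM's admin account
$cred = Get-Credential

# Build up the VM configuration: size, OS, source image, and NIC.
# The chosen size must support accelerated networking (e.g. Standard_DS4_v2).
$vmConfig = New-AzVMConfig -VMName "accelVM" -VMSize "Standard_DS4_v2" |
    Set-AzVMOperatingSystem -Windows -ComputerName "accelVM" -Credential $cred |
    Set-AzVMSourceImage -PublisherName "MicrosoftWindowsServer" `
        -Offer "WindowsServer" -Skus "2016-Datacenter" -Version "latest" |
    Add-AzVMNetworkInterface -Id $nic.Id

# Create the virtual machine from the configuration
New-AzVM -ResourceGroupName "accelnetRG" -Location "westus2" -VM $vmConfig
```

Because each Set-/Add- cmdlet returns the updated configuration object, the whole configuration can be built as a single pipeline before New-AzVM submits it.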
As I've said, Azure resources can be created and deployed using JSON formatted definition files called ARM templates. In keeping with the theme of infrastructure as code, these templates can be versioned and stored in code repositories. Alternatively, Azure Automation, which can be likened to a specialist repository for resource configuration, can be used to manage and deploy resources. When a new resource deployment is required, a template or templates can be run as part of an automated resource creation process. In the case of Azure Automation, this is via Runbooks that typically execute PowerShell scripts, although there are several ways to define how a Runbook should execute tasks. You can create a Runbook with the graphical editor in the portal or use Python scripts.
A detailed explanation of Azure Resource Manager and associated templates is beyond the scope of this course, but we do have an Introduction to Azure Resource Manager course, to which there is a link in the description below. Here I want to give you a very brief overview of ARM templates and how to use them to deploy resources in the context of the accelerated networking scenario we just looked at. In a nutshell, Azure Resource Manager, as the name implies, is the underlying mechanism for managing the lifecycle of resources and services in the Azure cloud. The Az commands we saw in the PowerShell scripts can be thought of as one interface into Azure Resource Manager. An ARM template, which is a JSON description of a resource, is another. One of the advantages of using ARM templates is that they enable consistent and repeatable deployments with little or no manual input after their initial creation. Another advantage is that Azure Resource Manager takes care of all the orchestration dependencies within a template, meaning you only have to define what you want and not worry about the resource creation order.
OK, let's dive into creating a Linux VM with accelerated networking using ARM templates. Here we have a template file, AccelNetwork.json, on the right, with the earlier PowerShell script on the left. The template has a resources section where the resources are defined. I know this looks a little daunting, as in "how will I know all the keywords for defining all the different resources and parameters?" The good news is you don't have to. The Azure Marketplace is full of pre-defined templates you can modify, or you can create a resource within the portal and download the template definition. Within Visual Studio Code, there is a template extension that provides excellent IntelliSense for template creation, which I used to build these templates.
The first resource defined here is the public IP address, and we can see a close relationship with the New-AzPublicIpAddress command on the left. One of the significant differences you will notice is that the resources in the template use parameters for property definitions, as opposed to the hard-coded values we see in the PowerShell script. Next, we have the network security group, with the security rules defined as an array within the group's properties. The parameters, in this case the network security group name, come from the parameters section at the top of the template, where you specify the data type and a default value to use if none is supplied. Also, notice the schema at the very top, which ends in deploymentTemplate.json. You don't have to type in all the parameters when you submit the template; you can provide a parameters file, as we have here, with the values to be injected into the template. Here the schema ends in deploymentParameters.json. Going back to the deployment template, we can see another section called variables, which, from a programming point of view, should really be called constants. At the very bottom is the network interface resource, where we can see the enableAcceleratedNetworking property set to true. I've separated out the virtual machine definition into another template file simply because it makes logical sense. It follows exactly the same format of parameters, variables, and resources. A template can have another section called functions, which holds user-defined functions, but that is beyond the scope of this course. To keep things nice and simple, I've gone with password authentication for logging into the VM. The password parameter is defined as a secure string, and in the VM parameters file, instead of specifying the password, I reference a key vault where the password is stored as a secret. Next, we'll see how to deploy these resources using an automation account.
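To make the structure concrete, here is an illustrative sketch of a template defining a network interface with accelerated networking enabled; the parameter names, resource names, API version, and key vault path are my own assumptions, not necessarily what the demo's AccelNetwork.json contains.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "nicName": { "type": "string", "defaultValue": "accelNIC" },
    "subnetId": { "type": "string" }
  },
  "variables": {
    "location": "[resourceGroup().location]"
  },
  "resources": [
    {
      "type": "Microsoft.Network/networkInterfaces",
      "apiVersion": "2021-05-01",
      "name": "[parameters('nicName')]",
      "location": "[variables('location')]",
      "properties": {
        "enableAcceleratedNetworking": true,
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "subnet": { "id": "[parameters('subnetId')]" }
            }
          }
        ]
      }
    }
  ]
}
```

A parameters file can supply a secure-string value by referencing a key vault secret instead of embedding the password, along these lines:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "VMpassword"
      }
    }
  }
}
```

The `<...>` segments are placeholders for your own subscription, resource group, and vault names; the vault must have template deployment access enabled for the reference to resolve.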
The next task is turning the templates into the resources they define. If you are from a DevOps background, it's perfectly OK to use CI/CD pipelines connected to a template repository as a way of tracking and deploying infrastructure as code. However, Azure Automation is a service expressly set up to build and deploy resources, configure virtual machines, and monitor and track changes to infrastructure as code, in the form of Azure Automation State Configuration.
The first thing I need to do is create the key vault for storing the password used to log into the VM. In the portal, I will create a new key vault with the imaginative name of SAP deploy key vault and set the region to West US 2. Make sure to enable access for virtual machine deployment and Resource Manager template deployment. I'll leave networking as is and create the vault. Now that the key vault is created, I'll go into secrets and create a new secret named VMpassword to hold the password. Next, I'll create a storage account that I can upload my templates to. Clicking premium performance gives me access to a file storage account. I'll leave the rest of the settings as defaults and create the account. I'm going to need a file share within the storage account, but I will create it through PowerShell, so I'll head back to my PowerShell command prompt. First of all, I'll get a key to my storage account with Get-AzStorageAccountKey, then the storage context, which I will use in the New-AzStorageShare command to create the file share. The template file variable holds my VMtemplate.json file, and the Set-AzStorageFileContent command uploads that file to the newly created file share, and there it is. I'll repeat that process for VMparams.json, AccelNetwork.json, and networkparams.json. With the key vault and file storage set up, I can now create the automation account. This involves going into automation accounts and clicking the create button. I'll just give it a name and click create. For creating resources, I want to use Runbooks under process automation. As you can see, there are a variety of runbook types, but I'm going to stick with PowerShell and upload the runbooks from the PowerShell command prompt. I could create a runbook from scratch with the built-in editor or use the import-a-runbook function to upload the file, but where's the fun in that.
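The share-creation and upload steps might be sketched like this; the storage account name (sapdeploystore) and share name (templateshare) are assumptions, since the demo's exact names aren't spelled out.

```powershell
# Get a storage account key and build a context for the file-share commands
$key = (Get-AzStorageAccountKey -ResourceGroupName "accelnetRG" `
    -Name "sapdeploystore")[0].Value
$ctx = New-AzStorageContext -StorageAccountName "sapdeploystore" -StorageAccountKey $key

# Create the file share that will hold the templates
New-AzStorageShare -Name "templateshare" -Context $ctx

# Upload each template and parameters file to the share
foreach ($file in "VMtemplate.json", "VMparams.json", "AccelNetwork.json", "networkparams.json") {
    Set-AzStorageFileContent -ShareName "templateshare" -Source $file -Context $ctx
}
```

Looping over the file names avoids the rinse-and-repeat of uploading each one individually, but the effect is the same as in the demo.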
Back at the command prompt, we have an import params variable that holds the PowerShell script path, the resource group name, the automation account name, and the type of runbook, which, of course, is PowerShell. First, import the runbook script, then publish the automation runbook with the name VMDeploy. And there it is in my runbook list. It's a case of rinse and repeat for the virtual network runbook. So, what does a PowerShell runbook look like? Going into VirtualNetworkDeploy and clicking edit, we can have a look at the script. At the top, we have three mandatory input parameters that need to be filled in every time the runbook is executed. Then, as the comment says, we need to authenticate with Azure. After that, we get the template files using the storage account context, downloading each of them with Get-AzStorageFileContent from the template share to the temp folder on the C:\ drive, with the force parameter to overwrite the files if they already exist. We then create a variable with the fully qualified path for each file and execute the New-AzResourceGroupDeployment command with the resource template and parameter files to create the resources. Looking at the VMDeploy runbook script, we can see that it is exactly the same except for the template file names. Obviously, I could have used a single runbook with two extra parameters to specify the file names, but I was lazy and couldn't be bothered typing extra parameters when executing the runbooks. The third parameter is the storage account key, which I'll need to get before executing the runbooks. Let's head over to SAPdeployshare and grab key1 from the access keys. OK, now that I have that, let's create the virtual network with the runbook. It's just a case of going into the runbook, clicking the start button, filling in the parameters, and clicking OK.
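Pieced together from the description above, a runbook script along these lines would do the job. The share name, temp path, and authentication method are assumptions; in particular, Connect-AzAccount -Identity assumes the automation account has a managed identity, which is the current recommended approach.

```powershell
param(
    [Parameter(Mandatory = $true)] [string] $ResourceGroupName,
    [Parameter(Mandatory = $true)] [string] $StorageAccountName,
    [Parameter(Mandatory = $true)] [string] $StorageAccountKey
)

# Authenticate with Azure (assumes a managed identity on the automation account)
Connect-AzAccount -Identity

# Context for reading the template files from the file share
$ctx = New-AzStorageContext -StorageAccountName $StorageAccountName `
    -StorageAccountKey $StorageAccountKey

# Download each file to C:\temp, overwriting any existing copy
foreach ($file in "AccelNetwork.json", "networkparams.json") {
    Get-AzStorageFileContent -ShareName "templateshare" -Path $file `
        -Destination "C:\temp\$file" -Context $ctx -Force
}

# Deploy the resources defined in the template
New-AzResourceGroupDeployment -ResourceGroupName $ResourceGroupName `
    -TemplateFile "C:\temp\AccelNetwork.json" `
    -TemplateParameterFile "C:\temp\networkparams.json"
```

Importing and publishing such a script from a local prompt is then a matter of Import-AzAutomationRunbook followed by Publish-AzAutomationRunbook, as narrated above.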
That executed quite quickly, which makes me suspicious that something has gone wrong, so let's look at the errors tab. It appears that Azure Automation is not aware of the Resource Manager commands. Azure Automation is incredibly flexible and can be used in hybrid situations with on-premises environments and even other cloud providers, but that does mean not all functionality is automatically present. I need to load the appropriate modules through shared resources, down there at the bottom left. We can see AzureRM modules currently under modules, but what we want are the Az modules that contain the commands in the scripts. I'll go into browse gallery and import the appropriate modules. The first error occurred when trying to authenticate with Azure, so I'll need the Az.Accounts module. Next, I'll need the Az.Storage module for accessing the template files. Finally, I'll need the Az.Resources module to run the actual deployment. Right, let's give that another go. This time it was successful, and the network components have been deployed. We can go over to the resource group and see our virtual network. We've got the NIC, network security group, the public IP address, and the virtual network. Let's now deploy the VM. It's a case of rinse and repeat, and again a successful deployment. Back in the resource group, we can see the VM and its disk. As with the previous Windows PowerShell example, we need to check that the accelerated network interface has been successfully deployed. This involves logging into the VM using a Bash console. Listing the PCI components with lspci shows us the same Mellanox ConnectX-3 Ethernet controller, indicating accelerated networking is functional.
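If you would rather script the module imports than click through the gallery, something like the following sketch should work; the automation account name is an assumption. Note that Az.Accounts should finish importing before the others, as they depend on it.

```powershell
# Import the Az modules the runbooks need from the PowerShell Gallery.
# Az.Accounts must complete before Az.Storage and Az.Resources are imported.
foreach ($module in "Az.Accounts", "Az.Storage", "Az.Resources") {
    New-AzAutomationModule -ResourceGroupName "accelnetRG" `
        -AutomationAccountName "SAPdeployAutomation" -Name $module `
        -ContentLinkUri "https://www.powershellgallery.com/api/v2/package/$module"
}
```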
Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good-quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.