This course is focused on the portion of the Azure 70-534 certification exam that covers designing an advanced application. You will learn how to create compute-intensive and long-running applications, select the appropriate storage option, and integrate Azure services in a solution.
Welcome back. In this lesson we'll be talking about Compute Intensive Applications. There are a lot of tasks that require massive amounts of computational power to complete; to name just a few: engineering simulations, genome analysis, financial modeling, and video rendering. And with tasks such as these, the more computing power that you can throw at them, the faster the process will be completed.
Now, as you know, the compute power of a single server is constrained by its hardware. What I mean by that is that an individual server can only perform as well as the best available hardware allows. So that means you may need to use the compute power of multiple servers, all working on the same problem, to accomplish the task. When it comes to actually doing this, there are different options; however, there are two main categories that we're going to describe: Embarrassingly Parallel and Tightly Coupled.
Embarrassingly Parallel applications consist of separate executables, or distinct services, that can work on their own jobs without needing to communicate with each other. And this method allows you to add and remove instances as needed, which will shorten or lengthen the total computation time accordingly. Tasks such as software testing, media encoding, and image processing are examples of Embarrassingly Parallel workloads.
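To make the idea concrete, here's a minimal sketch in Python, where a worker pool stands in for a pool of compute nodes; the image names and the `process_image` function are placeholders, not part of any Azure API.

```python
from concurrent.futures import ThreadPoolExecutor

def process_image(name):
    # Stand-in for real work (encoding, resizing, analysis).
    # Each job is fully independent, so no worker ever waits on another.
    return f"{name}: processed"

# A batch of independent work items.
images = [f"frame_{i:03}.png" for i in range(8)]

# Raising max_workers shortens total wall-clock time; lowering it lengthens
# it. The jobs themselves never need to communicate, which is exactly what
# makes this workload "embarrassingly parallel".
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_image, images))
```

For genuinely CPU-bound work you'd use processes, or separate machines, rather than threads, but the shape of the solution is the same: independent jobs, an elastic pool of workers.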
Tightly Coupled applications require compute nodes to interact or exchange intermediate results. So unlike Embarrassingly Parallel, the nodes need to be able to communicate. And they tend to communicate via the Message Passing Interface, abbreviated MPI. MPI is the de facto standard for exchanging messages between nodes in parallel computing. That exchange of information between nodes can become a bottleneck, so you can use RDMA, or Remote Direct Memory Access, to improve performance. Tightly Coupled tasks include things such as weather forecasting and engineering design and analysis.
So, the logical question is, how does Azure help with computationally intensive tasks? The reality is that cloud computing platforms offer so much computational power that there are a lot of potential ways to handle HPC needs. HPC stands for High Performance Computing. And while there are a lot of options, the four we're going to cover here are the Hybrid HPC Cluster, the Azure-based HPC Cluster, Azure Batch, and custom solutions using the Competing Consumers pattern.
Let's start with the Hybrid HPC Cluster. Microsoft has long offered the HPC Pack so that you can run your own HPC tasks on-prem. The only problem is that if you need more computing power than your on-prem cluster provides, that means buying new hardware. This would require an upfront purchase, and that can be a blocker, especially if you're only planning on using it for one-off tasks. So, this is where the Hybrid approach helps. It allows us to add in additional resources as needed without buying new servers. The head node will live on-prem, and the cloud nodes will just serve as extra compute. If you already have an on-prem HPC Cluster, this may help you extend it.
The next option is the Azure-based HPC Cluster. This is basically the same thing as the Hybrid option in terms of setup, except we set the HPC Pack up in Azure. We run this inside an Azure virtual network, and everything lives in Azure. And so this allows us to shut down all of the nodes when we no longer need them. And again, with no upfront investment, this is a pretty cost-friendly architecture, except that it does require an investment in setup and some ongoing maintenance.
So, the third option is the platform as a service offering called Azure Batch. And this is an option to run applications in parallel, and it'll scale to meet demand. It takes the management out of setting up an HPC Cluster, which will save us money. Batch works really well with Embarrassingly Parallel tasks; however, it also supports MPI, should we need to share some data between nodes. Batch has APIs for .NET, Java, Python, and Node.js, among others, and these APIs are how you'd programmatically manage pools of nodes as well as schedule jobs.
Okay, the final option is a custom solution based on the Competing Consumers pattern. And the idea here is that you have one or more sources putting some unit of work onto a queue, and then you have a pool of one or more consumers. To give a concrete example, you could take a collection of images and put them into a queue, then you'd have a pool of consumers responsible for grabbing an image off the queue and performing some image processing. If, for some reason, we have a failure, we just wouldn't remove that item from the queue, and it's gonna be picked up by another consumer. And this pattern allows us to scale the consumer pool independently of the producers that are putting that work onto the queue.
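Here's a minimal sketch of the Competing Consumers pattern using Python's thread-safe `queue.Queue`; the image names, the "processing" step, and the sentinel-based shutdown are all illustrative choices, not part of any Azure service.

```python
import queue
import threading

work = queue.Queue()
results = []
results_lock = threading.Lock()

def consumer():
    # Each consumer competes for the next available item on the shared queue.
    while True:
        item = work.get()
        if item is None:          # sentinel value: no more work is coming
            work.task_done()
            return
        # Stand-in for image processing. On a failure, we could simply
        # re-queue the item so another consumer picks it up.
        with results_lock:
            results.append(f"{item}: done")
        work.task_done()

# Producer side: push units of work onto the queue.
for i in range(6):
    work.put(f"image_{i}.jpg")

# The consumer pool scales independently of the producers.
workers = [threading.Thread(target=consumer) for _ in range(3)]
for t in workers:
    t.start()
for _ in workers:
    work.put(None)                # one shutdown sentinel per consumer
for t in workers:
    t.join()
```

In a real system the queue would be a durable service such as an Azure Storage queue, and the consumers would be separate machines, but the producer/queue/consumer shape is the same.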
So, those are just four of the many possible options, and, of those, Azure Batch requires the least management effort.
Okay, now, you may have wondered about the actual virtual machines themselves. Azure has a set of virtual machines that are tailored to HPC. There are four options in the A family that are specifically for HPC: the A8, A9, A10, and A11. The difference is that the A8 and A9 implement RDMA, or Remote Direct Memory Access, which gives you an extremely fast inter-node communication mechanism over the network.
Okay let's check out how to set up a sample Azure Batch application. So, we're gonna start out in the portal, and, as most of these things do, we'll start with the New button. And then we'll click on Virtual Machines, and we're gonna scroll down just a bit to Batch Service. So, we have a form to fill out on this blade. We need to provide an Account name. Okay. And we also need a Resource group, and we'll use one of our existing resource groups. And, we're gonna need a Storage Account. And there we go, perfect. And now, we'll click Create, and it's gonna take just a moment to complete, and then we're gonna have our batch account.
Okay, now that that's done, the first thing we're going to need to do is create an application. Under the Features section, we'll click on Application. And we're gonna click on the Add button. So, we have a sample application here, and this is what we're gonna use for this demo. If you're familiar with C#, you're gonna recognize this as a very simple console application. All it does is write out a string to standard out, and that string contains the date and time it was run.
So, let's go back into Azure and set the Application id, and we're gonna call it HelloBatch. And we'll need a Version, and we can call this one 1.0. And now, we need to upload our actual application package, and all that means is that it's expecting a zip file with all of the files required to run our application. Now, we click on the app, and we set the Default version here. Okay, great, let's save that. And next up, we need some servers to actually run our batch job. So, we select Pools under Features, and click on the Add button. Okay, we'll need to name our Pool, so let's give it a name. And this is where we can change things like the type of OS, the version, and things like that. But we'll leave the defaults here. We do need to select a pricing tier for our nodes, though. Let's go with A1 for this demo, and click OK.
And now we need to determine how many dedicated machines we'll use. This is just a demo, so one is gonna work fine. Okay, notice that we have some parameters here. These are gonna allow for additional things such as scaling. We'll leave these at their defaults. And it's gonna take some time to spin up the server, so while that's happening, let's go create a Job.
And so we'll click Add, and let's name it HelloBatchNow. We're naming it that because when you create a Job, it's executed immediately. If you want to schedule one for later, you can do that, but you're gonna have to use the Job Schedule. So that's an option if you need it; we're just gonna run this one right after we've created it. Okay, we need to select the Pool that this job will execute in. And we only have the one, so this is going to be an easy choice. And we click OK. Alright, our Job is created; however, it needs to actually know what to do. So we need to give it a Task. For this we'll give it a name, and, as you know, naming things is one of the hardest problems in computer science. With that named, let's paste in a Command for this task. We're gonna have it run our application, and we'll need to use this environment variable for the directory that the application lives in.
Okay, so now let's scroll down to the Application package. And this is how our task knows which Application and Version to use. So let's give it just a moment to complete. We'll click on Refresh, and there it is, it's complete. Let's check out the output of this. Remember, we're expecting a string, and at the end of that string we'll see a date and time appended. So we're gonna click on Files on node, and select the stdout.txt file, and there's the output that we were expecting. So, this is a very basic overview of how to use Azure Batch.
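The portal steps we just walked through map onto the Batch APIs as well. The following is rough pseudocode, not any specific SDK; the exact client and method names differ across the .NET, Java, Python, and Node.js libraries.

```
# Pseudocode: the demo's workflow expressed programmatically.
client = connect(batch_account_url, credentials)

# Pool of compute nodes (the portal's Pools blade).
pool = client.create_pool(id="demo-pool", vm_size="A1", dedicated_nodes=1)

# A job that runs immediately in that pool (the portal's Jobs blade).
job = client.create_job(id="HelloBatchNow", pool=pool)

# A task pointing at the uploaded application package.
task = client.add_task(job, command=run_app_package("HelloBatch", "1.0"))

# Poll for completion, then read the task's standard output.
wait_until_complete(task)
output = client.read_file(task, "stdout.txt")
```

The value of the API route over the portal is automation: pools can be created, scaled, and torn down as part of a pipeline rather than by hand.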
In our next lesson we're gonna be talking about Long Running Processes. So if you're ready to keep learning, then let's get started.
About the Author
Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.
When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.