
Deploying a Model


This course is part of the Introduction to Azure Machine Learning learning path.
Overview

  • Difficulty: Beginner
  • Duration: 54m
  • Students: 520
  • Rating: 5/5

Course Description

Machine learning is a notoriously complex subject, which usually requires a great deal of advanced math and software development skills. That’s why it’s so amazing that Azure Machine Learning Studio lets you train and deploy machine learning models without any coding, using a drag-and-drop interface. With this web-based software, you can create applications for predicting everything from customer churn rates, to image classifications, to compelling product recommendations.

In this course, you will learn the basic concepts of machine learning, and then follow hands-on examples of choosing an algorithm, running data through a model, and deploying a trained model as a predictive web service.

Learning Objectives

  • Prepare data for use by an Azure Machine Learning Studio experiment
  • Train a machine learning model in Azure Machine Learning Studio
  • Deploy a trained model to make predictions

Intended Audience

  • Anyone who is interested in machine learning

Prerequisites

  • No mandatory prerequisites
  • Azure account recommended (sign up for free trial at https://azure.microsoft.com/free if you don’t have an account)

This Course Includes

  • 54 minutes of high-definition video
  • Many hands-on demos


Transcript

So far we’ve only been training models. Now we’re going to deploy a model so you can use it to make predictions on new data that comes in. We’ll use the iris example from the second lesson, so go to your list of experiments and choose it.

To deploy a model, click on “Set Up Web Service”. It has two options. The one we want is “Predictive Web Service”. It automatically makes some changes to your graph. It adds a “Web service input” module at the top because we need to allow data to be input from the web. It also adds a “Web service output” module at the bottom because it needs to return its predictions to the person or program that requested them.

It also removes the modules that were needed for training but aren’t needed for a predictive service. For example, it doesn’t need the Evaluate Model module because we only need it to make predictions, not evaluate its accuracy. Another example is that it doesn’t need to split the data into training and test datasets anymore. It also replaces the algorithm and model with the trained model we got from running the experiment. Really, all that’s left of the original graph is the trained model and the scoring module, which is needed to make the predictions.

Now that the new graph is ready, we have to click Run again because the graph is different from what it was before. Now click “Deploy Web Service” again. This time it takes you to a new page.

You can do two types of tests on the web service. The first is a test of the request/response service, where you give it an individual row of data and ask for a prediction. In this case, it will ask us to manually enter the length and width data for a single iris, and then it will predict which species it is. Click the Test button.

Here are the five columns that were in the original source data. We need to fill them in for a new instance of an iris. Are you wondering why it’s asking us to put in the class? After all, the whole point of this is that it’s supposed to predict the class, so it doesn’t make sense for us to enter it. Yeah, that’s pretty silly, but there doesn’t seem to be a way to get rid of it. You can put any value in there you want and it will just ignore it.

Enter 6, 3, 5, and 2 for the remaining four fields. Then click the checkmark. It comes back really quickly. You’d think that it would just return its prediction for the class, but it sends back a whole vector of numbers. It’s actually the five fields we submitted, followed by a 1 for its class prediction and a 1 for its probability. If the probability is above .5, it predicts a 1, and if it’s below .5, it predicts a 0, so the probability tells you how confident the model is in its prediction. Since it assigned a probability of 1, it’s completely confident that this iris is of type 1.
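If a program were reading that response instead of the tester, it would just pick off the last two values. Here’s a rough sketch in Python (the row values are made up for illustration):

```python
# Hypothetical returned row: the five submitted fields, followed by the
# scored label and scored probability the service appends at the end.
returned_row = ["6", "3", "5", "2", "", "1", "1"]

scored_label = returned_row[-2]               # the class prediction
scored_probability = float(returned_row[-1])  # how confident the model is

# The service already applies the 0.5 cutoff, but this is the same rule:
predicted_class = 1 if scored_probability >= 0.5 else 0
print(predicted_class, scored_probability)
```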

If you only want it to return the prediction (and maybe the probability as well), then you can go back to the graph, save it as a new experiment, and add a “Select Columns in Dataset” module right before the “Web service output” module. Then launch the column selector and specify only the columns you want. Then run it and click “Deploy Web Service” again. Now it only returns the columns you specified.

The second type of test is on the batch service. This is where you submit a CSV file with many rows of data and it returns a CSV file with prediction and probability columns added to it. One way to try it is to download the original source data and then submit that to the batch service. It won’t be a good test of its predictive ability because it used that same data to train the model in the first place, but it will show you how the batch service works.

There’s one complication, though. The batch service tester expects a CSV file, but the sample iris data is in ARFF format, so you have to convert it to a CSV file first. It’s not too hard to do that, but instead of showing you the conversion, I’ll just show you the end result. Here’s the sample iris data in CSV format.
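If you’d rather script the conversion yourself, here’s a minimal sketch in Python. It assumes a simple ARFF file with @attribute lines followed by an @data section of comma-separated rows (the file names are just placeholders):

```python
import csv

def arff_to_csv(arff_path, csv_path):
    """Convert a simple ARFF file to CSV by copying attribute names and data rows."""
    columns, rows, in_data = [], [], False
    with open(arff_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("%"):   # skip blanks and comments
                continue
            lower = line.lower()
            if lower.startswith("@attribute"):
                columns.append(line.split()[1])     # attribute name becomes a column header
            elif lower.startswith("@data"):
                in_data = True                      # everything after this is data
            elif in_data:
                rows.append(line.split(","))
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(columns)
        writer.writerows(rows)

arff_to_csv("iris.arff", "iris.csv")
```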

To submit it, you click “Browse” and select the CSV file. You also have to select an Azure Storage account where it can store the results. Then click “Test”. It’ll take a little while, so I’ll fast forward. Here’s the output file. You can see that it added two columns: Scored Labels and Scored Probabilities.

Unfortunately, since we changed the graph to remove the other columns, we’d have to combine this file with the original one to see the predictions next to the irises. Of course, if you were calling the batch service from an application instead of from the tester, then it would be easy to apply these predictions to the data that it submitted.
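For example, if you were recombining the files in Python, a rough sketch might look like this. It assumes the scored rows come back in the same order they were submitted, and the file names here are placeholders; the two added column names are the ones shown in the tester output:

```python
import pandas as pd

# Original input data and the scored output downloaded from the batch service.
original = pd.read_csv("iris.csv")
scored = pd.read_csv("scored_output.csv")

# Put the predictions next to the rows they were made for.
combined = pd.concat(
    [original.reset_index(drop=True),
     scored[["Scored Labels", "Scored Probabilities"]].reset_index(drop=True)],
    axis=1,
)
combined.to_csv("iris_with_predictions.csv", index=False)
```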

Speaking of which, how would you call the predictive web service from your applications? The easiest way to see how to do that is to go to the “Consume” tab.

As you can see, it’s also possible to call the predictive web service from Excel. To call it from your own application, use this API key and these URLs. There’s some helpful sample code below for both the request/response service and the batch service, in four different languages. Here’s where you put in your API key.
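The exact request format comes from that sample code, but roughly speaking, a call to the request/response service from Python looks something like the sketch below. The URL, API key, and column names are placeholders; copy the real values from your own Consume tab:

```python
import json
import urllib.request

# Placeholders -- replace with the endpoint URL and API key from the Consume tab.
url = "https://<region>.services.azureml.net/workspaces/<ws>/services/<id>/execute?api-version=2.0"
api_key = "<your API key>"

# One row of input data; column names must match the web service input schema.
body = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["sepal_length", "sepal_width",
                            "petal_length", "petal_width", "class"],
            "Values": [["6", "3", "5", "2", ""]],   # class value is ignored by the model
        }
    },
    "GlobalParameters": {},
}

request = urllib.request.Request(
    url,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,      # the API key goes here
    },
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))
```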

There’s also a dashboard that shows how many calls of each type have been made on the web service, along with the average compute time and job latency.

And that’s it for deploying a model.

About the Author

  • Students: 21,624
  • Courses: 43
  • Learning paths: 29

Guy launched his first training website in 1995 and he's been helping people learn IT technologies ever since. He has been a sysadmin, instructor, sales engineer, IT manager, and entrepreneur. In his most recent venture, he founded and led a cloud-based training infrastructure company that provided virtual labs for some of the largest software vendors in the world. Guy’s passion is making complex technology easy to understand. His activities outside of work have included riding an elephant and skydiving (although not at the same time).