Deploying a Model as a Web Service
Difficulty
Intermediate
Duration
1h 23m
Students
1346
Description

Learn how to operate machine learning solutions at cloud scale using the Azure Machine Learning SDK. This course teaches you to leverage your existing knowledge of Python and machine learning to manage data ingestion, data preparation, model training, and model deployment in Microsoft Azure.

If you have any feedback related to this course, please contact us at support@cloudacademy.com.

Learning Objectives

  • Create an Azure Machine Learning workspace using the SDK
  • Run experiments and train models using the SDK
  • Optimize and manage models using the SDK
  • Deploy and consume models using the SDK

Intended Audience

This course is designed for data scientists with existing knowledge of Python and machine learning frameworks, such as Scikit-Learn, PyTorch, and TensorFlow, who want to build and operate machine learning solutions in the cloud.

Prerequisites

  • Fundamental knowledge of Microsoft Azure
  • Experience writing Python code to work with data using libraries such as NumPy, Pandas, and Matplotlib
  • Understanding of data science, including how to prepare data and train machine learning models using common machine learning libraries, such as Scikit-Learn, PyTorch, or TensorFlow

Resources

The GitHub repo for this course, containing the code and datasets used, can be found here: https://github.com/cloudacademy/using-the-azure-machine-learning-sdk 

Transcript

Now that we have trained and registered a machine learning model that classifies patients based on their likelihood of having diabetes, the model could be used in a production environment, such as a doctor's surgery, where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, we will deploy the model as a web service. But first, let's determine what models we have registered in the workspace.

So we can see multiple versions of our diabetes model. If we want to get the model we want to deploy, we can do so by name; by default, if we specify only a model name, the latest version is returned. Next, we're going to create a web service to host this model, and this requires some code and configuration files, so let's create a folder for those.
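The listing and retrieval steps described above can be sketched as follows. This is a hedged sketch using the Azure ML SDK (v1); the model name "diabetes_model" and the function name are assumptions, not taken from the course code.

```python
def get_deployment_model(ws, model_name="diabetes_model"):
    """List registered models in the workspace, then fetch the latest
    version of the one we want to deploy (sketch; model name assumed)."""
    # Deferred import so this sketch can be read without azureml installed.
    from azureml.core import Model

    # Show every registered model and its version.
    for m in Model.list(ws):
        print(m.name, "version:", m.version)

    # With only a name, the latest registered version is returned.
    return Model(ws, name=model_name)
```

Passing a `version=` argument to `Model` would pin a specific version instead of the latest.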

We'll name the folder diabetes_service. With that created, we can go about setting up a script, because the web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an entry script called score_diabetes.py.
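Creating the folder is a one-liner; the exact folder name "diabetes_service" follows the name used in this walkthrough.

```python
import os

# Folder to hold the entry script and environment file for the service.
folder_name = "diabetes_service"
os.makedirs(folder_name, exist_ok=True)  # no error if it already exists
print("Folder created:", folder_name)
```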

So let's import the required modules. We need to define two functions: one, called init, that is invoked when the service is loaded, and another, called run, that is invoked when a request is received. In the init function, we get the path to the deployed model file and load the model. In the run function, we get the input data as a NumPy array, get the predictions from the model, and then look up the corresponding class name for each prediction.

Each prediction is either zero or one, and we return the predictions as JSON. The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires scikit-learn, so we'll create a .yml file that tells the container host to install it into the environment.

So let's import CondaDependencies and then add the dependencies for our model. We need azureml-defaults, which is already included, so here we just add scikit-learn, and then we save the environment configuration as a .yml file. We can then print the .yml file and see its details below. So now we are ready to deploy.
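The environment step above can be sketched like this; the file path diabetes_service/diabetes_env.yml and the function name are assumptions for this sketch.

```python
def write_service_environment(path="diabetes_service/diabetes_env.yml"):
    """Write a conda environment file telling the container host to
    install scikit-learn (azureml-defaults is included automatically)."""
    # Deferred import so the sketch is readable without azureml installed.
    from azureml.core.conda_dependencies import CondaDependencies

    env = CondaDependencies()
    env.add_conda_package("scikit-learn")  # required by the scoring code

    # Save the environment configuration as a .yml file.
    with open(path, "w") as f:
        f.write(env.serialize_to_string())

    # Print the .yml file to review its contents.
    with open(path) as f:
        print(f.read())
```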

We will deploy the container as a service named diabetes-service. The deployment process includes the following steps. First, we need to define an inference configuration, which includes the scoring environment and the files required to load and use the model. For the inference configuration, we specify the runtime, which is Python; the source directory; the entry script, which is the score_diabetes.py file we created earlier; and the .yml file as the conda file we'll be working with.

Next, we need to define a deployment configuration that defines the execution environment in which the service will be hosted, in this case Azure Container Instances. Then we deploy the model as a web service, and finally we verify the status of the deployed service, which is healthy.
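The two configurations and the deployment call described above can be sketched as one function. This is a hedged sketch: the folder and file names match the ones assumed earlier, and the CPU/memory sizes (1 core, 1 GB) are illustrative defaults, not values from the course.

```python
def deploy_diabetes_service(ws, model):
    """Deploy the registered model as a web service hosted on Azure
    Container Instances (sketch)."""
    # Deferred imports so the sketch is readable without azureml installed.
    from azureml.core.model import InferenceConfig, Model
    from azureml.core.webservice import AciWebservice

    # Inference configuration: runtime, source folder, entry script,
    # and conda file (paths relative to the source directory).
    inference_config = InferenceConfig(
        runtime="python",
        source_directory="diabetes_service",
        entry_script="score_diabetes.py",
        conda_file="diabetes_env.yml",
    )

    # Deployment configuration: execution environment for the service.
    deployment_config = AciWebservice.deploy_configuration(
        cpu_cores=1, memory_gb=1
    )

    service = Model.deploy(
        ws, "diabetes-service", [model], inference_config, deployment_config
    )
    service.wait_for_deployment(show_output=True)
    print(service.state)  # "Healthy" once the deployment succeeds
    return service
```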

If for any reason we need to troubleshoot the deployment, we can use the following code, which allows us to check the status and get the service logs to help us troubleshoot. You can also take a look at your workspace in the Azure web interface and view the Endpoints page, which shows the deployed services in your workspace. Finally, you can retrieve the names of the web services in your workspace by running the following code, and we can see the diabetes service is the active one.
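The troubleshooting and listing steps can be sketched as two small helpers; the function names are assumptions for this sketch.

```python
def check_service(service):
    """Check a deployed service's state and logs for troubleshooting."""
    print(service.state)       # "Healthy" when the deployment succeeded
    print(service.get_logs())  # container logs help diagnose failures


def list_webservices(ws):
    """Print the names of the web services deployed in the workspace."""
    for webservice_name in ws.webservices:
        print(webservice_name)
```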

About the Author

Kofi is a digital technology specialist in a variety of business applications. He stays up to date on business trends and technology and is an early adopter of powerful and creative ideas.
His experience covers a wide range of topics including data science, machine learning, deep learning, reinforcement learning, DevOps, software engineering, cloud computing, business & technology strategy, design & delivery of flipped/social learning experiences, blended learning curriculum design and delivery, and training consultancy.