Introduction
Running an Experiment
Running a Training Script
Datastores & Datasets
Compute
Pipelines
Deploying the model
Learn how to operate machine learning solutions at cloud scale using the Azure Machine Learning SDK. This course teaches you to leverage your existing knowledge of Python and machine learning to manage data ingestion, data preparation, model training, and model deployment in Microsoft Azure.
If you have any feedback related to this course, please contact us at support@cloudacademy.com.
Learning Objectives
- Create an Azure Machine Learning workspace using the SDK
- Run experiments and train models using the SDK
- Optimize and manage models using the SDK
- Deploy and consume models using the SDK
Intended Audience
This course is designed for data scientists with existing knowledge of Python and machine learning frameworks, such as Scikit-Learn, PyTorch, and TensorFlow, who want to build and operate machine learning solutions in the cloud.
Prerequisites
- Fundamental knowledge of Microsoft Azure
- Experience writing Python code to work with data using libraries such as NumPy, Pandas, and Matplotlib
- Understanding of data science, including how to prepare data and train machine learning models using common machine learning libraries, such as Scikit-Learn, PyTorch, or TensorFlow
Resources
The GitHub repo for this course, containing the code and datasets used, can be found here: https://github.com/cloudacademy/using-the-azure-machine-learning-sdk
So now that we've created a pipeline and verified that it works, we can publish it as a REST service. We do that by invoking publish on our pipeline object, specifying the name, Diabetes_Training_Pipeline, a description, and version information. We can then get hold of the rest_endpoint and have a look at its details. To use the endpoint, client applications need to make a REST call over HTTP, and this request must be authenticated.
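As a rough sketch of what that publish call looks like (the description text and version string here are placeholders, not necessarily the exact values used in the course notebook):

```python
# Publish the assembled pipeline as a REST endpoint.
# `pipeline` is the Pipeline object built in the earlier steps.
published_pipeline = pipeline.publish(
    name='Diabetes_Training_Pipeline',
    description='Trains the diabetes model',
    version='1.0'
)

# The published pipeline exposes a REST endpoint URL.
rest_endpoint = published_pipeline.endpoint
print(rest_endpoint)
```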
So an authorization header is required; a production application would authenticate with a service principal. But to test this out, we'll use the authorization header from our current connection to our Azure workspace, which we can get using the following code. This puts us in a position where we can call the REST interface.
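A minimal sketch of grabbing that header from the current interactive session (assuming you are already signed in to the workspace from the notebook):

```python
from azureml.core.authentication import InteractiveLoginAuthentication

# Reuse the interactive login from the current session to build
# an Authorization header for the REST call.
interactive_auth = InteractiveLoginAuthentication()
auth_header = interactive_auth.get_authentication_header()
print('Authentication header ready.')
```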
The pipeline runs asynchronously, so we get an identifier back, which we can use to track the pipeline experiment as it runs. For our POST request, we pass in the rest_endpoint and the authentication header information, as well as the experiment_name. And since we have the run ID, we can use the run details widget to view the experiment as it runs.
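The request itself might look something like this; treat the experiment name mslearn-diabetes-pipeline as a placeholder for whatever name the notebook actually uses:

```python
import requests

# Trigger the published pipeline via its REST endpoint.
response = requests.post(
    rest_endpoint,
    headers=auth_header,
    json={'ExperimentName': 'mslearn-diabetes-pipeline'}
)

# The pipeline runs asynchronously; the response body contains the run ID.
run_id = response.json()['Id']
print(run_id)
```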
Now, the pipeline should complete quickly, because in a prior step we specified that each step was allowed to reuse the output of a previous run. We set that up primarily for convenience in this example, but in reality you would likely want the first step to run every time in case the data has changed, and trigger subsequent steps only if the output from the first step changes. So this is how we use the run ID information to get the details of the run as it executes.
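A hedged sketch of the two pieces mentioned here: the allow_reuse flag on a step definition, and using the run ID returned by the REST call to fetch and monitor the run. The script name, source directory, compute target, and experiment name below are illustrative placeholders, not the exact names from the course repo:

```python
from azureml.core import Experiment, Workspace
from azureml.pipeline.core.run import PipelineRun
from azureml.pipeline.steps import PythonScriptStep
from azureml.widgets import RunDetails

# Step reuse: with allow_reuse=True, a step's earlier output is reused
# when its inputs and code are unchanged, so the pipeline completes quickly.
prep_step = PythonScriptStep(
    name='Prepare Data',
    source_directory='diabetes_pipeline',
    script_name='prep_diabetes.py',
    compute_target='aml-cluster',
    allow_reuse=True
)

# Use the run ID from the REST response to track the pipeline run.
ws = Workspace.from_config()
experiment = Experiment(workspace=ws, name='mslearn-diabetes-pipeline')
pipeline_run = PipelineRun(experiment, run_id)
RunDetails(pipeline_run).show()
pipeline_run.wait_for_completion(show_output=True)
```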
Kofi is a digital technology specialist with experience across a variety of business applications. He stays up to date on business trends and technology and is an early adopter of powerful and creative ideas.
His experience covers a wide range of topics including data science, machine learning, deep learning, reinforcement learning, DevOps, software engineering, cloud computing, business & technology strategy, design & delivery of flipped/social learning experiences, blended learning curriculum design and delivery, and training consultancy.