Creating a Training Script

Difficulty: Intermediate
Duration: 1h 23m
Students: 1256
Description

Learn how to operate machine learning solutions at cloud scale using the Azure Machine Learning SDK. This course teaches you to leverage your existing knowledge of Python and machine learning to manage data ingestion, data preparation, model training, and model deployment in Microsoft Azure.

If you have any feedback related to this course, please contact us at support@cloudacademy.com.

Learning Objectives

  • Create an Azure Machine Learning workspace using the SDK
  • Run experiments and train models using the SDK
  • Optimize and manage models using the SDK
  • Deploy and consume models using the SDK

Intended Audience

This course is designed for data scientists with existing knowledge of Python and machine learning frameworks, such as Scikit-Learn, PyTorch, and TensorFlow, who want to build and operate machine learning solutions in the cloud.

Prerequisites

  • Fundamental knowledge of Microsoft Azure
  • Experience writing Python code to work with data using libraries such as NumPy, Pandas, and Matplotlib
  • Understanding of data science, including how to prepare data and train machine learning models using common machine learning libraries, such as Scikit-Learn, PyTorch, or TensorFlow

Resources

The GitHub repo for this course, containing the code and datasets used, can be found here: https://github.com/cloudacademy/using-the-azure-machine-learning-sdk 

Transcript

We're going to use a Python script to train a machine learning model based on the diabetes data. We'll start by creating a folder for the script and data files. We need to import the os and shutil modules, create a folder for the experiment files, and then, using shutil, copy the data file into the experiment folder we've just created. We're now ready to write the training script and save it to the folder.
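
In a notebook, that setup cell might look something like this sketch; the folder name diabetes-training and the source path data/diabetes.csv are assumptions, not taken from the course:

```python
import os
import shutil

# Create a folder for the experiment files (folder name is an assumption)
training_folder = 'diabetes-training'
os.makedirs(training_folder, exist_ok=True)

# Copy the data file into the experiment folder we've just created
# (the source path is also an assumption)
shutil.copy('data/diabetes.csv', os.path.join(training_folder, 'diabetes.csv'))
```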

So, using the magic command %%writefile, we write the script to the folder that we've created, and then we import the following libraries. We need Run, we need pandas, we need numpy, and we need joblib, which will let us save our trained model later. We also need train_test_split, our model, and our metrics.
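
As a sketch, the start of that cell could look like this, assuming the script is named diabetes_training.py; the %%writefile magic writes the cell's contents into the file instead of executing them:

```python
%%writefile $training_folder/diabetes_training.py
# Import libraries
from azureml.core import Run   # gives us access to the experiment run context
import pandas as pd
import numpy as np
import os
import joblib                  # used later to save the trained model
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression  # our model
from sklearn.metrics import roc_auc_score            # our metrics
```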

With our libraries imported, let's get our run context, and let's load the data we'll be using. Next, we separate features and labels. So we put the features into X, and we have our labels in y. We then split our data into a training set and a test set.
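
Continuing the same script, a sketch of those steps; the file name diabetes.csv matches the copy step above, and the label column name Diabetic is an assumption about the dataset:

```python
# Get the experiment run context
run = Run.get_context()

# Load the diabetes data
diabetes = pd.read_csv('diabetes.csv')

# Separate features (X) and labels (y); 'Diabetic' as the label column is an assumption
X = diabetes.drop('Diabetic', axis=1).values
y = diabetes['Diabetic'].values
```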

So 30% goes to the test set, and we keep 70% for our training sample. Next, we set the regularization parameter to 0.01, and then we train our logistic regression model using these two parameters. So we pass our regularization parameter to C, and we've chosen liblinear as the solver.
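
Those steps might look like this, still inside the same script; note that the value 0.01 is passed straight to C here, as described above:

```python
# Split the data: 30% goes to the test set, 70% stays for training
# (fixing random_state for reproducibility is an assumption)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)

# Set the regularization parameter
reg = 0.01

# Train a logistic regression model, passing the regularization
# parameter to C and using the liblinear solver
model = LogisticRegression(C=reg, solver='liblinear').fit(X_train, y_train)
```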

We calculate the accuracy and log it, and we do the same for the area under the curve, logging that as well. Then let's go ahead and save the trained model in the outputs folder. So we create the outputs folder, use dump from joblib to save our trained model, and then we complete our run.
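
And a sketch of the final part of the script; the model file name diabetes_model.pkl is hypothetical:

```python
# Calculate the accuracy and log it
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
run.log('Accuracy', float(acc))

# Do the same for the area under the curve (AUC)
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test, y_scores[:, 1])
run.log('AUC', float(auc))

# Save the trained model in the outputs folder
os.makedirs('outputs', exist_ok=True)
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')

# Complete the run
run.complete()
```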

About the Author

Students: 1257
Courses: 1

Kofi is a digital technology specialist with experience across a variety of business applications. He stays up to date on business trends and technology and is an early adopter of powerful and creative ideas.
His experience covers a wide range of topics including data science, machine learning, deep learning, reinforcement learning, DevOps, software engineering, cloud computing, business & technology strategy, design & delivery of flipped/social learning experiences, blended learning curriculum design and delivery, and training consultancy.