Introduction
Running an Experiment
Running a Training Script
Datastores & Datasets
Compute
Pipelines
Deploying the Model
Learn how to operate machine learning solutions at cloud scale using the Azure Machine Learning SDK. This course teaches you to leverage your existing knowledge of Python and machine learning to manage data ingestion, data preparation, model training, and model deployment in Microsoft Azure.
If you have any feedback related to this course, please contact us at support@cloudacademy.com.
Learning Objectives
- Create an Azure Machine Learning workspace using the SDK
- Run experiments and train models using the SDK
- Optimize and manage models using the SDK
- Deploy and consume models using the SDK
Intended Audience
This course is designed for data scientists with existing knowledge of Python and machine learning frameworks, such as scikit-learn, PyTorch, and TensorFlow, who want to build and operate machine learning solutions in the cloud.
Prerequisites
- Fundamental knowledge of Microsoft Azure
- Experience writing Python code to work with data using libraries such as NumPy, pandas, and Matplotlib
- An understanding of data science, including how to prepare data and train machine learning models using common machine learning libraries such as scikit-learn, PyTorch, or TensorFlow
Resources
The GitHub repo for this course, containing the code and datasets used, can be found here: https://github.com/cloudacademy/using-the-azure-machine-learning-sdk
We've used the generic Estimator class to run the training script, but we can also take advantage of framework-specific estimators that include environment definitions for common machine learning frameworks.
In this case, we're using scikit-learn, so we can use the SKLearn estimator. This means we don't need to specify the scikit-learn package in the configuration. Note that, once again, the training run uses a new environment, which must be created the first time it is run, so it takes a little longer. Using the framework-specific SKLearn estimator, we specify the source directory, our entry script, and the script parameters for our regularization hyperparameter.
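As a rough sketch, the setup might look like this, assuming the Azure ML SDK v1 (azureml.train.sklearn); the folder name, script name, and hyperparameter value below are illustrative, not taken from the course repo:

```python
from azureml.train.sklearn import SKLearn

# Framework-specific estimator: scikit-learn is included in the
# curated environment, so it doesn't need to be listed explicitly.
estimator = SKLearn(
    source_directory='training',        # assumed folder holding the training script
    entry_script='train.py',            # assumed entry script name
    script_params={'--reg_rate': 0.1},  # regularization hyperparameter
    compute_target='local'              # computation runs locally
)
```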
The computation is done locally. We create an experiment, providing our workspace details and the name of our experiment. Next, we submit the experiment. We can show the run details while the experiment is running, and then print the metrics that have been logged and the files sent to the outputs folder.
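A minimal sketch of that flow, again assuming SDK v1, an existing Workspace object ws, and an illustrative experiment name:

```python
from azureml.core import Experiment
from azureml.widgets import RunDetails

# Create an experiment, providing the workspace and an experiment name
experiment = Experiment(workspace=ws, name='sklearn-training')

# Submit the estimator and show the run details while it runs
run = experiment.submit(config=estimator)
RunDetails(run).show()
run.wait_for_completion()

# Print the metrics that were logged during the run
for name, value in run.get_metrics().items():
    print(name, ':', value)

# List the files sent to the outputs folder
for file_name in run.get_file_names():
    print(file_name)
```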
Kofi is a digital technology specialist with expertise in a variety of business applications. He stays up to date on business and technology trends and is an early adopter of powerful and creative ideas.
His experience covers a wide range of topics including data science, machine learning, deep learning, reinforcement learning, DevOps, software engineering, cloud computing, business & technology strategy, design & delivery of flipped/social learning experiences, blended learning curriculum design and delivery, and training consultancy.