Adding Intelligence - Unlocking New Insights with AI & Machine Learning

Description

For many scenarios, the cloud is used as a way to process data and apply business logic with nearly limitless scale. However, processing data in the cloud is not always the optimal way to run computational workloads, either because of connectivity issues, legal concerns, or because you need to respond in near-real-time with processing at the Edge.

This session dives into how Azure IoT Edge can help in this scenario by training a machine learning model in the cloud using the Microsoft AI Platform and deploying this model to an IoT Edge device using Azure IoT Hub.

By the end of this course, you will understand how to develop and deploy AI & Machine Learning workloads at the Edge.

Learning Objectives

  • Learn what machine learning is and how it can be implemented in IoT scenarios
  • Learn how to create a classifier using the Custom Vision Service and run that classifier in an IoT Edge solution

Intended Audience

This course is intended for anyone looking to improve their understanding of Azure IoT and its benefits for organizations.

Prerequisites

To get the most out of this course, you should already have a working knowledge of Microsoft Azure.

 

 

Transcript

Welcome to the presentation, Adding Intelligence, Unlocking New Insights with AI and Machine Learning. My name is Henk Boelman, Senior Cloud Advocate at Microsoft focusing on applied AI. In this session, we're going to cover what machine learning is and the ways we can implement it in IoT scenarios, create a classifier using the Custom Vision Service, and run that classifier in an IoT Edge solution. We will begin with an introduction scenario, which we'll use as a reference throughout this presentation.

In our world today, there are many different creatures that are on a mission to make our lives miserable. I believe every country has its own little enemies. But in America, there is this big problem with raccoons messing up the trash in the streets. I think we can all sympathize with the American nation that this is a real problem. One day, Karesa thought enough is enough. Raccoons, no more trashing my garbage. And she started building a Trash Panda Defense System so she could sleep again at night.

She started drawing and came up with the following design. When the camera detects a raccoon on the trash, the alarm should go off, scaring away the raccoon. So she went to the store and bought a Raspberry Pi, a camera, a lamp, because she didn't want to make noise for the neighbors, and she got an Azure subscription. Now that she had bought all the components, she had to start thinking about how to build the solution. To solve this problem with a camera, she needs to build something that can look at an image and detect whether there is a raccoon in the image or not. And this is a problem that is hard to solve with traditional programming, but fairly easy with machine learning.

First, it is important to understand what machine learning actually is. The most common definition is that it is giving your computer the ability to learn without explicitly programming it. When I'm programming, I would write an algorithm, run data through that algorithm, and get an answer. But with machine learning, this is switched. Instead of creating the algorithm, I let a computer create this algorithm for me by providing data with answers as input. In the machine learning world, the algorithm is called a model. And what we can do now is run new data through this model and get a prediction about the input.
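
To make the idea of "data plus answers in, model out" concrete, here is a minimal, illustrative Python sketch using scikit-learn. It is not the course's raccoon model; the features, numbers, and library choice are assumptions purely for illustration.

```python
# Minimal, illustrative sketch of "data + answers in, model out" (not the course's
# raccoon classifier). Assumes scikit-learn is installed; the features are made up.
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: we write the algorithm ourselves.
def looks_like_raccoon(weight_kg: float, has_ringed_tail: bool) -> bool:
    return 3.0 <= weight_kg <= 12.0 and has_ringed_tail

print(looks_like_raccoon(6.5, True))  # True, because we hard-coded the rules

# Machine learning: we provide data with answers and let the computer build the model.
training_data = [[5.0, 1], [8.0, 1], [2.0, 0], [60.0, 0]]  # [weight_kg, ringed_tail]
answers = [1, 1, 0, 0]                                     # 1 = raccoon, 0 = not

model = DecisionTreeClassifier().fit(training_data, answers)

# Run new data through the model to get a prediction.
print(model.predict([[6.5, 1]]))  # e.g. [1] -> likely a raccoon
```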

For our Trash Panda Defense System, we would have to create a classification model. You can see this as a function that looks at an incoming video frame and gives a prediction back on whether there is a raccoon in the image. And to create this model, we need three things. We need a lot of images with raccoons in them, we have to select a training algorithm, and we need an environment where we can run this training algorithm. These three things will deliver us a model that we can later use in our Trash Panda Defense System. But what if suddenly unicorns become a problem? If that were the case, she would need to retrain the model so it can also recognize unicorns. And this is done by adding images of unicorns to the dataset and training the model again.

Now that we understand what we have to create, let's have a look at which tools are available on Azure. First, there are the domain-specific pre-trained models. These models are created, run, and maintained by Microsoft and exposed to you through an API in Azure. The only thing you have to do to access these models is go into the Azure portal and create the one you like. There are around 40 different models available, divided into four main areas. There is vision to make your application see the world, there is speech to make your application talk and listen to you, language to understand what is being spoken, and knowledge to give your application a brain. But if you dive deeper and you want to create your own models, you can bring your own tools and frameworks and use services like Azure Machine Learning, or create a machine learning virtual machine to boost your productivity.

And last but not least, there is a lot of powerful compute available that makes the training of your model fast and reliable. To create our raccoon classification model, there are a few options available. We could use the Computer Vision API. This is an API that is pre-trained on a large dataset and can classify most common objects. A really nice bonus is that it can generate a description of what it sees in the image and tells you where the found objects are.
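
As a hedged illustration of that description feature, here is a small Python sketch using the azure-cognitiveservices-vision-computervision SDK. The endpoint, key, and image URL are placeholders, and the exact client calls may vary between SDK versions.

```python
# Hedged sketch: asking the Computer Vision API to describe an image. The endpoint,
# key, and image URL are placeholders you would replace with your own values.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-computer-vision-key>"                                  # placeholder

client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

# Request a natural-language description of what the service sees in the image.
result = client.describe_image("https://example.com/trash-can.jpg")
for caption in result.captions:
    print(f"{caption.text} (confidence {caption.confidence:.2f})")
```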

Besides detecting objects, it also has an OCR function. It can read handwriting and detect celebrities in the image. This is a very easy way to get started with vision, but not all objects are detected with the Computer Vision API. And sometimes you just want to create something that can detect specific objects. Like in our case, we want to be able to detect raccoons and unicorns. For this scenario, you can use the Custom Vision service. This is a service that helps you easily build models that can recognize particular objects. This service comes with a user-friendly interface that walks you through developing and deploying custom computer vision models. You can then either use the API to quickly predict images, or export the model to a device to run real-time image understanding.

If you want to control the complete lifecycle of your model, you can use Azure Machine Learning. This service helps you accelerate the end-to-end machine learning lifecycle, empowers developers and data scientists with a wide range of productive experiences for building, training, and deploying machine learning models faster, accelerates your time to market, and enables team collaboration with industry-leading MLOps (DevOps for machine learning). The platform is secure and designed for responsible ML.

To use the Custom Vision service, you will need to create Custom Vision training and prediction resources in Azure. Create a new resource, select AI and machine learning, and select Custom Vision. Click create. We want to create both the training resource, which is the resource that will train the model, and the prediction endpoint. We don't need the prediction endpoint for the Trash Panda Defense System, but in this demo, I want to show you how to use it.

Select a resource group, give your resource a name, and for both endpoints select the location closest to you. We are choosing the S0 tier here, but you can also try it out for free using the free tier. Click review and click create. You can also create these resources through the Azure CLI or by using ARM templates. Let's open the Custom Vision portal. Here we can create a model through a visual interface. Everything you see in this demo can also be done by using the API or with one of our SDKs.

To create your project, select New Project, enter a name and a description for the project, then select a resource group. If you're signed in with an account that's associated with an Azure subscription, the resource group dropdown will display all of your Azure resource groups that include a Custom Vision service resource. Select classification under project types. Then under classification types, use multiclass. Multilabel classification applies any number of your tags to an image, zero or more, while multiclass classification sorts images into single categories. Every image you submit will be sorted into the most likely tag. You'll be able to change the classification type later if you want to.

Next, select one of the available domains. Each domain optimizes the classifier for a specific type of image. You can change the domain later if you wish. General is optimized for a broad range of image classification tasks. If none of the other domains are appropriate, or if you're unsure of which domain to choose, select the General domain. Food is optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use this domain. Landmarks is optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photo, and it works even if the landmark is slightly obstructed by people in front of it. Retail is optimized for images that are found in a shopping catalog or on a shopping website. If you want high-precision classification between dresses, pants, and shirts, use this domain. And finally, there are the compact domains. These domains are optimized for real-time classification on mobile devices. The models generated by compact domains can be exported to run locally. Because we are going to run this model on a Raspberry Pi, we're going to choose General (compact). Click create project.
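
Since everything in the portal can also be done through the API or SDKs, here is a hedged Python sketch of creating the same kind of project with the Custom Vision training SDK. The endpoint and training key are placeholders, and the domain lookup assumes the SDK's domain names match what you see in the portal.

```python
# Hedged sketch of creating the project through the Custom Vision training SDK
# instead of the portal. Endpoint and training key are placeholders.
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

endpoint = "https://<your-region>.api.cognitive.microsoft.com/"        # placeholder
credentials = ApiKeyCredentials(in_headers={"Training-key": "<your-training-key>"})
trainer = CustomVisionTrainingClient(endpoint, credentials)

# Pick the "General (compact)" classification domain so the trained model can be
# exported and run locally on the Raspberry Pi.
compact_domain = next(
    d for d in trainer.get_domains()
    if d.type == "Classification" and d.name == "General (compact)"
)

project = trainer.create_project(
    "Trash Panda Defense System",
    domain_id=compact_domain.id,
    classification_type="Multiclass",  # every image gets exactly one tag
)
print(project.id)
```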

Now, we have to choose our training images. As a minimum, it is recommended that you use at least 30 images per tag in the initial training set. You also want to collect a few extra images to test your model once it is trained. In order to train your model effectively, use images with visual variety: select images that vary by camera angle, lighting, background, and visual style. I took Bit, our Trash Panda, out for a photo shoot and got photos from different angles with different backgrounds. Click the add images button, and then browse local files. Select open to move to tagging. Your tag selection will be applied to the entire group of images you have selected to upload, so it is easier to upload images in separate groups according to their desired tags. You can change the tags for individual images after they've been uploaded.
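
The same upload-and-tag flow can be scripted. Here is a hedged Python sketch of creating tags, uploading images in batches through the SDK, and kicking off training; the folder names are assumptions, and the project ID would come from the project created in the previous step.

```python
# Hedged sketch of tagging, uploading, and training through the SDK. Endpoint, key,
# project ID, and the image folder names are placeholders/assumptions.
import os
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry,
)
from msrest.authentication import ApiKeyCredentials

trainer = CustomVisionTrainingClient(
    "https://<your-region>.api.cognitive.microsoft.com/",                    # placeholder
    ApiKeyCredentials(in_headers={"Training-key": "<your-training-key>"}),   # placeholder
)
project_id = "<project-id-from-the-previous-step>"                           # placeholder

bit_tag = trainer.create_tag(project_id, "Bit")
negative_tag = trainer.create_tag(project_id, "Not Bit")

def upload_folder(folder: str, tag_id: str) -> None:
    """Upload every image in `folder` under a single tag, in batches of up to 64 images."""
    entries = []
    for name in os.listdir(folder):
        with open(os.path.join(folder, name), "rb") as f:
            entries.append(ImageFileCreateEntry(name=name, contents=f.read(), tag_ids=[tag_id]))
    for i in range(0, len(entries), 64):  # the service accepts at most 64 images per batch
        trainer.create_images_from_files(project_id, ImageFileCreateBatch(images=entries[i:i + 64]))

upload_folder("images/bit", bit_tag.id)
upload_folder("images/not_bit", negative_tag.id)

# Start a training iteration once the images are uploaded and tagged.
iteration = trainer.train_project(project_id)
print(iteration.status)
```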

Our classifier is going to have two tags: Bit, or not Bit. So I've created a group of images that don't contain Bit, but other objects like a can or a robot. I also add these images and tag them with the negative tag. To train the classifier, select the train button. The classifier uses all the current images to create a model that identifies the visual qualities of each tag. After the training has completed, the model's performance is estimated and displayed. The Custom Vision service uses the images that you submitted for training to calculate precision and recall, using a process called k-fold cross-validation. Precision and recall are two different measurements of the effectiveness of your classifier. Precision indicates the fraction of identified classifications that were correct. For example, if the model identified 100 images as Bit and 99 of them were actually Bit, then the precision would be 99%.

Recall indicates the fraction of actual classifications that were correctly identified. For example, if there were actually 100 images of Bit and the model identified 80 images as Bit, the recall would be 80%. The probability threshold slider is on the left pane of the performance tab. This is the level of confidence that a prediction needs to have in order to be considered correct for the purpose of calculating precision and recall.
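
Expressed as a couple of lines of Python, using the numbers from the two examples above:

```python
# Tiny illustration of the precision and recall definitions, using the numbers
# from the transcript's examples.
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of images the model labeled "Bit" that really were Bit."""
    return true_positives / (true_positives + false_positives)

def recall(true_positives: int, false_negatives: int) -> float:
    """Fraction of actual Bit images that the model found."""
    return true_positives / (true_positives + false_negatives)

# 100 images predicted as Bit, 99 of them correct -> precision 0.99
print(precision(true_positives=99, false_positives=1))   # 0.99
# 100 actual Bit images, 80 of them found -> recall 0.80
print(recall(true_positives=80, false_negatives=20))     # 0.8
```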

Now that we have finished training the model, we can test it to see if it can recognize Bit, our Trash Panda, with images that our model has never seen. Custom Vision offers an easy interface to do this. In the top right, you'll find a quick test button. When you click on this, you can test the model. You have two options here: provide an image URL in the URL field, or if you want to use a locally stored image instead, click the browse local files button and select an image file. The image you select appears in the middle of the page. Then the result appears below the image in the form of a table with two columns, labeled tags and confidence. The images you send to your model can be used to retrain your model. You can find those images under the predictions tab.

Here you see the same images as we used for the test. To add an image to your training data, select the image, select the tag, and then select save and close. The image is removed from predictions and added to the training images. You can view it by selecting the training images tab. You can click train again to retrain your model with the new images.

Before we continue with the demo, let's talk a little bit about how we can improve a classifier. The quality of our classifier depends on the amount, quality, and variety of the labeled data we provide, and how balanced the overall dataset is. A good classifier has a balanced training dataset that is representative of what will be submitted to the classifier. The process of building such a classifier is iterative. It is common to take a few training rounds to reach the expected results. A general pattern that will help you build a more accurate classifier is: first, a general training round; second, add more images and balance the data, then retrain; third, add some more images with varying backgrounds, lighting, object size, camera angle, and style, and retrain again; fourth, use new images to test predictions; and fifth, modify existing training data according to those prediction results.

One thing we want to do is prevent overfitting. Sometimes a classifier will learn to make predictions based on things the images have in common. For example, if you're creating a classifier for apples versus citrus, and you have used images of apples in hands and citrus on white plates, the classifier might give undue importance to hands versus plates rather than apples versus citrus. To prevent this from happening, use the following guidance.

Data quantity. The number of training images is the most important factor. We recommend using at least 30 images per label as a starting point. With fewer images, there is a higher risk of overfitting. And while your performance numbers may suggest good quality, your model may struggle with real-world data. Also important to consider is the data balance. For instance, using 500 images for one label and 50 for the other makes an imbalanced training dataset. This will cause the model to be more accurate in predicting one label than the other. You're likely to see better results if you maintain at least a one-to-two ratio between the label with the fewest images and the label with the most images. For example, if your label with the most images has 500 images, the label with the least images should have at least 250 images for training.

Let's talk about data variety. Be sure to use images that are representative of what will be submitted to the classifier during normal use. Include a variety of images to ensure that your classifier can generalize well. Let's look at a few things that can make your dataset more diverse. Backgrounds: provide images of your objects in front of different backgrounds. Photos in natural backgrounds are better than photos in front of neutral backgrounds, as they provide more information for the classifier. Lighting: provide images with a variety of lighting, especially if the images used for prediction have different lighting settings. It is also helpful to use images with a variety of saturation, hue, and brightness. Object size: provide images in which the objects vary in size and number. For example, a photo of a bunch of bananas and a closeup of a single banana. Different sizing helps the classifier generalize better. Camera angle: provide images with different camera angles. Style: provide images of different styles of the same class. For example, different varieties of the same fruit. However, if you have objects of drastically different styles, we recommend you label them as separate classes to better represent their distinct features.

Back to the demo. Let's add some more diverse images of our Trash Panda Bit to our training dataset. We go to training images and click add images, select 60 more images with Bit in them, and select 60 more without Bit in them. You might've noticed that I didn't tag my new images. So these images can now be found under untagged images. We're going to use the model we have just trained to tag these images for us. Click on suggested tags and click get started. It can take a few minutes before it has tagged all the images.

When the tagging is done, click on the tag, take a look to see if the tags are correct, and click confirm tags. Now these images are added to our training dataset and we can click on train to retrain our model. For our Trash Panda Defense System, we are going to export the model. You can export every iteration of your model. To export your model, go to the performance tab and click on the iteration you want to export. At the top, you can click export. Here you can choose a TensorFlow model which will run on Android, a Core ML model for iOS 11, or ONNX for Windows ML. You can also get a Docker container for Windows, Linux, or the ARM architecture. The container includes a TensorFlow model and service code to self-host the Custom Vision API.
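
To give a feel for what an exported model looks like in code, here is a hedged Python sketch of loading the exported TensorFlow graph and classifying one image locally, roughly what the image classifier module on the Pi will end up doing. The tensor names and the 224x224 input size follow Microsoft's export samples and may differ between export versions; the file names come from the downloaded zip.

```python
# Hedged sketch of running an exported Custom Vision TensorFlow model locally.
# Tensor names ("Placeholder:0", "loss:0") and the input size follow Microsoft's
# export samples and may vary by export version.
import numpy as np
import tensorflow as tf
from PIL import Image

graph_def = tf.compat.v1.GraphDef()
with open("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with open("labels.txt") as f:
    labels = [line.strip() for line in f]

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

# Custom Vision exports typically expect BGR input at the network's native size.
image = Image.open("test.jpg").resize((224, 224))
data = np.array(image)[:, :, (2, 1, 0)].astype(np.float32)[np.newaxis, ...]

with tf.compat.v1.Session(graph=graph) as sess:
    predictions = sess.run("loss:0", {"Placeholder:0": data})

for label, score in zip(labels, predictions[0]):
    print(f"{label}: {score:.2%}")
```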

Another way of using your model is to submit images to the prediction API. You will first need to publish your iteration for prediction, which can be done by selecting publish and specifying a name for the published iteration. This will make your model accessible to the prediction API of your Custom Vision Azure resource. When you have published your iteration, you can look up the details and start submitting images to the endpoint.
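
As a hedged sketch, submitting an image to the published endpoint can look like the following Python snippet. The URL shape and headers follow the Custom Vision prediction API (version 3.0); the project ID, published iteration name, key, and file name are placeholders.

```python
# Hedged sketch of calling the published prediction endpoint over HTTP.
# All identifiers below are placeholders for your own resource values.
import requests

endpoint = "https://<your-region>.api.cognitive.microsoft.com"
project_id = "<your-project-id>"
published_name = "Iteration1"
prediction_key = "<your-prediction-key>"

url = (f"{endpoint}/customvision/v3.0/Prediction/{project_id}"
       f"/classify/iterations/{published_name}/image")

with open("raccoon.jpg", "rb") as f:
    response = requests.post(
        url,
        headers={"Prediction-Key": prediction_key,
                 "Content-Type": "application/octet-stream"},
        data=f.read(),
    )

# The response JSON contains one entry per tag with its probability.
for prediction in response.json()["predictions"]:
    print(prediction["tagName"], prediction["probability"])
```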

Let's post an image to the endpoint and take a look at the response that the API sends back. Copy the URL and the prediction key, and select the image we want to classify. When we look at the JSON, we see that the model running on our prediction endpoint has classified the image as an image containing Bit. In this demo, we have seen how to create Custom Vision training and prediction endpoints in the Azure portal, seen how we can upload images, tag them, and train a model, learned what the different metrics like precision and recall mean, talked about how you can improve your model and prevent overfitting, retrained our model by adding more diverse images and used the first iteration of our model to auto-tag the images, and took a look at how we can export a model so we can use it in a mobile app and in our Trash Panda Defense System. We also learned how we can publish the model to our prediction endpoint and use it as an API. Now that we have created our raccoon classification model, it is time to find out how we can implement this model in our Trash Panda Defense System.

First let's zoom out and look at what we have to create. We would have this thing that generates data; that would be the camera that generates frames. Then we need to analyze the data; that would be running the frames through the model. And finally, we have to take an action. In our example, it would be turning the lamp on or off. And there are multiple ways of doing this. We're going to zoom into two of these scenarios. The first scenario would be generating the data on the IoT device, analyzing the data in the cloud, and taking the action on the device. In our case, the Trash Panda Defense System would send every frame to the cloud, run the frame through the model, and send a prediction back, and the device would turn the lamp on or off. This scenario enables us to quickly tune the model in the Cloud, but it takes a long time before the lamp is turned on or off, and sending all the frames to the Cloud requires a fast internet connection.

Another approach would be running the model on the device itself. A common misconception is that you would need a very powerful computer to run a model, but for simple models like this, you can run it on a Raspberry Pi. However, training the model is very compute heavy and cannot be done on the Raspberry Pi; it is highly recommended to do this in the Cloud. In this scenario, the video frames are not leaving the device and are processed locally, meaning the device could even function without a connection to the internet and act without any delay on the outcome of the model. We could even add something that would only send the output of the model to the cloud so we can create a Trash Panda Defense System control center. Both of these approaches have their own pros and cons. Let's have a look at a few of them.

If you look at IoT in the cloud, this is really good if you don't need a real-time action being performed on the device itself, like, for instance, remote monitoring and management. Also, in the Cloud, you have access to infinite compute and storage to train compute-intensive AI models. The biggest advantage of running IoT on the edge, next to the low latency for real-time responses, is that you can pre-process the data on the device itself, meaning that the video feed from your camera, or any other data generated by the device, never has to leave the device. And this is a good thing if you have heavy security and privacy requirements.

The best scenario for our Trash Panda Defense System is that we are going to build, test, and manage our solution in the Cloud and deploy the solution to our Raspberry Pi device to run locally. This will give us the flexibility and power of the cloud to train the model, and the low latency of running the model on the device to scare the raccoon away fast. Because everything is processed locally on the device, it is not taking up any bandwidth, and the device could even be placed in a location where there is no internet available.

Now that we have defined our application strategy, we can start building and deploying it. To build and deploy it, we're going to take a look at Azure IoT Edge. Azure IoT Edge is a fully managed service built on Azure IoT Hub. The service enables you to deploy your cloud workloads to run on an internet of things edge device via standard containers. It works with Linux and Windows devices that support container engines. The runtime is free and open source under the MIT license. IoT Edge runs Docker-compatible containers. In these containers, you can run your own business logic. And through the Cloud interface, you can manage and deploy the workloads to the device.

IoT Edge uses modules. For our Trash Panda Defense System, we would need three modules: a camera module that takes care of the connection with the camera and extracts the frames from the camera feed; an AI module, in which we would run our machine learning model that has been trained in the Cloud; and last, a module that will handle the alarm. Every module is a separate Docker container that's stored in an Azure container registry.

To deploy these modules to the edge device, we need to create a deployment manifest. In this file, we specify where the modules are located and how they should communicate with each other. Via IoT Hub, we can deploy this manifest to our connected devices. When the deployment is done, the modules run and communicate locally on the IoT Edge device. Let's zoom in a little bit deeper and take a look at what is running on the IoT Edge device.
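
To give a feel for what a deployment manifest contains, here is a heavily hedged sketch of its overall shape, written as a Python dict purely for illustration (the real file is JSON). The module names match this solution, but the image URIs are placeholders, many required settings are omitted, and the routes are an assumption based on the message flow described in the next paragraph.

```python
# Heavily hedged sketch of a deployment manifest's shape, expressed as a Python dict
# for illustration (the real deployment file is JSON). Image URIs are placeholders,
# settings such as the runtime, system modules, and registry credentials are omitted,
# and the routes are assumptions about the message flow.
import json

manifest = {
    "modulesContent": {
        "$edgeAgent": {
            "properties.desired": {
                "schemaVersion": "1.0",
                # Each custom module points at a container image in our registry.
                "modules": {
                    "CameraCapture": {"type": "docker", "settings": {"image": "<registry>/cameracapture:latest-arm32v7"}},
                    "ImageClassifierService": {"type": "docker", "settings": {"image": "<registry>/classifier:latest-arm32v7"}},
                    "SimpleLed": {"type": "docker", "settings": {"image": "<registry>/simpleled:latest-arm32v7"}},
                },
            }
        },
        "$edgeHub": {
            "properties.desired": {
                "schemaVersion": "1.0",
                # Routes describe how messages flow between modules on the local Edge Hub.
                "routes": {
                    "classifierToAlarm": 'FROM /messages/modules/ImageClassifierService/outputs/* '
                                         'INTO BrokeredEndpoint("/modules/SimpleLed/inputs/input1")',
                    "alarmToCloud": "FROM /messages/modules/SimpleLed/outputs/* INTO $upstream",
                },
            }
        },
    }
}

print(json.dumps(manifest, indent=2))
```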

On the device, the IoT Edge runtime is installed. By default, this comes with two modules: the Edge Agent, which takes care of the modules and deployment, and the Edge Hub, a local IoT hub that enables our modules to communicate with each other. We have our three modules that were installed through the IoT Edge deployment manifest. The camera module connects to the camera and sends the frames to the custom AI module, the custom AI module puts the model score on the local IoT Edge hub, and the alarm module receives the model score from the Edge Hub. In the alarm module, if the model score reaches a certain threshold, the lamp is turned on and a notification is put back on the Edge Hub, which is sent to the IoT Hub in Azure for monitoring purposes.
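
As a hedged illustration of that alarm logic, here is a sketch of what the alarm module might look like in Python using the azure-iot-device SDK. The input and output names, the message format, the threshold, and the GPIO pin are all assumptions; the actual module in the sample repository may be implemented differently.

```python
# Hedged sketch of the alarm module's logic. Input/output names, the message format,
# the threshold, and the GPIO pin are assumptions for illustration only.
import json
import RPi.GPIO as GPIO
from azure.iot.device import IoTHubModuleClient, Message

LED_PIN = 18        # assumed GPIO pin wired to the lamp/LED
THRESHOLD = 0.7     # assumed confidence threshold for "raccoon detected"

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

# Inside an IoT Edge module, the connection details come from the Edge environment.
client = IoTHubModuleClient.create_from_edge_environment()
client.connect()

while True:
    # Wait for a classification result from the AI module via the local Edge Hub.
    message = client.receive_message_on_input("input1")
    score = json.loads(message.data)["probability"]

    raccoon_detected = score >= THRESHOLD
    GPIO.output(LED_PIN, GPIO.HIGH if raccoon_detected else GPIO.LOW)

    if raccoon_detected:
        # Report the event back to IoT Hub in Azure for monitoring.
        client.send_message_to_output(
            Message(json.dumps({"alarm": True, "score": score})), "output1")
```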

Now that we know what to build, let's actually implement the model we have created using Custom Vision in an IoT Edge solution and deploy it to this Raspberry Pi device. Here you see a Raspberry Pi 4 with the latest OS installed. It is connected to power, and I've connected it to the wifi and enabled SSH so I can connect to it from my computer. I also connected a USB camera and connected some LEDs to the GPIO pins on the Raspberry Pi board.

To get started, we first have to set up two things in Azure. We need to create an IoT Hub and an Azure container registry. To create the IoT Hub, go to the Azure portal and create a new resource. Go to internet of things and select IoT Hub. Select your subscription ID, create a new resource group, select the region closest to you, and give your hub a name. Click review, and create.

Next, we need to create an Azure container registry. In this registry, we're going to store our IoT Edge modules. To create the Azure container registry, go to containers and click on container registry. You can deploy it in the same resource group as the IoT Hub. Give it a name, select the right region, and the standard SKU is enough for what we are going to do. Click review and create. When the registry is created, we have to enable login with a username and password. Open the resource and click access keys. Enable the Admin user here. The first step is now completed. We have created an IoT Hub and an Azure container registry. We've done this by using the Azure portal, but you can also create these resources using the Azure CLI or by creating an ARM template, which you can run from Azure DevOps.

Now that we have our resources set up in Azure, it is time to connect the Raspberry Pi to the IoT Hub. To do this, connect to your device over SSH. The Azure IoT Edge runtime is what turns the device into an IoT Edge device. First, you need to register the Microsoft key and software repository feed, copy the generated list, and then install the public key.

Next, we need to install a container runtime. Azure IoT Edge relies on an OCI-compatible container runtime. For production scenarios, we recommend that you use the Moby-based engine. The Moby engine is the only container engine officially supported with Azure IoT Edge. Docker container images are compatible with the Moby runtime.

First, we need to update our package list, install the Moby engine, and install the Moby command line interface. The CLI is useful for development but optional for production deployments. Now we can install the Azure IoT Edge security daemon. The IoT Edge security daemon provides and maintains security standards on the IoT Edge device. The daemon starts on every boot and bootstraps the device by starting the rest of the IoT Edge runtime. We first update the package list on our device, check to see which versions of IoT Edge are available, and install the most recent version of the security daemon.

The final thing we need to do on the device is to add the connection string to the configuration file. To get the connection string, we have to go back to the Azure portal and open our IoT Hub. We navigate in the left menu to the section automatic device management and open IoT Edge. Here we click on add an IoT Edge device. We give it a name and click save. Click on the device and copy the primary or secondary connection string.

Go back to the device to configure the security daemon. The daemon can be configured using the configuration file at /etc/iotedge/config.yaml. The file is write protected by default; you might need elevated permissions to edit it. Open the configuration file, find the provisioning section of the file, and uncomment the manual provisioning configuration section. Update the value of device connection string with the connection string from the IoT Edge device. Make sure any other provisioning sections are commented out. After entering the provisioning information in the configuration file, restart the daemon.

All the infrastructure is now ready to run our Trash Panda Defense System modules. I've already created all the modules, so let's clone the repository and take a look at what is inside. In this repository are a few important files.

First, there is the deployment.template.json. A deployment manifest is a JSON document that describes which modules to deploy and how the data flows between the modules. Then there is a folder called modules. In this folder, our three modules are located. The camera capture module takes care of the camera, the image classifier service runs the machine learning model, and the simple LED module turns the lights on and off.

Let's take a look at the SimpleLed module. Every module has a Dockerfile. This file contains the configuration of the container, like which base image to use and which packages to install. Then there is the module.json. This file holds the IoT Edge module configuration. In this file, we see variables like the container registry address. When you build the IoT Edge solution, this is the registry where your containers are stored. These variables we can set in the .env file, and we can retrieve these details from the container registry in the portal.

Let's open the Azure portal and navigate to the container registry, click on access keys, and copy the login server, the username, and the password. We are almost ready to build the solution, but first we need to get the model we have trained from the Custom Vision service. Let's open our project and go to the performance tab. There we click on export. We choose the Dockerfile option here and choose the ARM (Raspberry Pi) variant. Click download, open the zip file, and copy the labels and model files to the image classifier app directory. We also need to install the IoT Edge extension in Visual Studio Code. You can find this extension under the extensions menu item by searching for IoT Edge.

When you have installed this extension, you should see Azure IoT Hub at the bottom of the Explorer. Here, you can connect Visual Studio Code to your IoT Hub and view your devices. Click on the three dots and choose Select IoT Hub to connect. Because we have the IoT Edge extension installed, we can right click on the deployment template and click build and push IoT Edge solution. This command will build the Docker containers for the modules and push them to our repository. This can take a while. When all the containers are built and pushed, we can go to the ACR and view the repositories there. To deploy these modules to our IoT Edge device, we have to generate an IoT Edge deployment manifest. This can be done by right clicking on the deployment template and selecting generate deployment manifest. The deployment manifest will appear in the config folder. To deploy this manifest, you can right click on the manifest and select Create Deployment for Single Device.

At the top of Visual Studio Code, you can now select your IoT Edge device. Select the device and wait for the deployment to finish. There are a few ways to check whether the deployment was successful. In Visual Studio Code, we can use the extension to see which modules are running on the device. Or we can see it on the device itself by using the IoT Edge command line command, iotedge list. Here you can see all our modules are up and running.

Now it is time to see it in action. Let's release Trash Panda Bit onto the stage and see if we can scare him away with our LED lights. That worked. No more Trash Pandas messing up my trash. This is the end of the IoT Edge demo. In this demo, we have seen how to create an Azure IoT Hub and an Azure container registry in Azure, how to install IoT Edge on an edge device, learned about the project structure of an IoT Edge solution, built the IoT Edge solution and created a deployment manifest, deployed the solution to a single IoT Edge device, and saw that the lights scared away our Trash Panda Bit.

In this session, you got a peek into how you can use Azure IoT Edge and Azure AI to create a Trash Panda Defense System. We covered what machine learning is and which tools are available in Azure. We talked about the difference between IoT in the Cloud and IoT on the edge, and showed you how to use Azure IoT Edge to manage your solution. For links to the relevant documentation, resources, and demos used in this presentation, and the follow-up presentation, check out aka.ms/iot30/resources.

If you're interested in using this presentation and/or this video recording for an event of your own, the materials can be found on GitHub at aka.ms/iot30. If you enjoyed this presentation and are interested in other topics covered in the IoT learning path, you can find them all at aka.ms/iotlp. We have covered quite a few topics in this session, and we'd like to share that we have curated a collection of modules on the Microsoft Learn platform which pertain to the topics in this session. This will allow you to interactively learn how to build intelligent edge applications and how to manage them using Azure IoT Edge. There are also many modules on how to build AI models using tools like Azure Custom Vision and Azure Machine Learning. This presentation and the associated Learn modules can help guide you on a path to official certification. If you're interested in obtaining accreditation, this can help you stand out as a certified Microsoft Azure IoT Developer. We recommend checking out the AZ-220 certification. You can find details on the topics covered and schedule an exam today at aka.ms/iot30/certification. For more interactive learning content, check out Microsoft Learn at microsoft.com/learn to begin your own custom learning path with resources on the latest topics and trends in technology. Thank you again for attending this session.

About the Author

This open-source content has been provided by Microsoft Learn under the Creative Commons public license. This content is subject to copyright.