Consuming and Exporting Models
Difficulty: Intermediate
Duration: 49m
Description

This course explores the Azure Custom Vision service and how you can use it to create and customize vision recognition solutions. You'll get some background info on what the service is before looking at the various steps for creating image classification and object detection models, uploading and tagging images, and then training and deploying your models.

Learning Objectives

  • Use the Custom Vision portal to create new Vision models
  • Create both Classification and Object Detection models
  • Upload and tag images according to your requirements
  • Train and deploy the configured models

Intended Audience

  • Developers or architects who want to learn how to use Azure Custom Vision to tailor a vision recognition solution to their needs

Prerequisites

To get the most out of this course, you should have:

  • Basic Azure experience, at least with topics such as subscriptions and resource groups
  • Basic knowledge of Azure Cognitive Services, especially vision-related services
  • Some developer experience, including familiarity with terms such as REST API and SDK
Transcript

We are ready to use our models, so let's see how to consume them, either through the REST API or from exported Edge formats. Consuming your model is actually quite simple: once your model is published, a REST endpoint becomes available for prediction requests.

To access this API, you just need the following information, which you can quickly obtain from the Custom Vision Portal: the project ID for your Custom Vision project; the model name you set when publishing (Latest, in our case); the prediction endpoint, which is the HTTP address used to access the API; and the prediction key, which is the authentication secret, similar to a password, used to prevent unauthorized access.
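To make this concrete, here is a minimal sketch of a raw REST prediction request in Python, assuming a classification model. The endpoint, project ID, key, and image file name are placeholders you would replace with values copied from the Custom Vision Portal:

```python
import requests

# Placeholder values -- copy the real ones from the Custom Vision Portal.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
PROJECT_ID = "<your-project-id>"
PUBLISHED_NAME = "Latest"  # the model name set when publishing
PREDICTION_KEY = "<your-prediction-key>"

# Classification models use ".../classify/...";
# object detection models use ".../detect/..." instead.
url = (f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}"
       f"/classify/iterations/{PUBLISHED_NAME}/image")

with open("landmark.jpg", "rb") as image:  # hypothetical test image
    response = requests.post(
        url,
        headers={
            "Prediction-Key": PREDICTION_KEY,            # authentication secret
            "Content-Type": "application/octet-stream",  # raw image bytes
        },
        data=image.read(),
    )

response.raise_for_status()
print(response.json())
```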

Alternatively, you can use language-specific SDKs, which wrap the REST API calls into functions that you can call from your code. There are SDKs available for .NET, Python, Java, Node.js, and Go. The response is a JSON payload with the following information: a header with general information, such as the project and iteration IDs, and an array of predictions, each one containing the probability score (a number ranging from zero to one, as we have seen a few times), the name and ID of the predicted tag, a bounding box with the coordinates of the object found (in the case of an object detection model), and the tag type. The tag type can be negative or regular. Regular would be one of the tags that you have created, such as Empire State Building or Eiffel Tower.
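As an illustration, the sketch below makes the same classification call through the Python SDK (the azure-cognitiveservices-vision-customvision package) and walks the predictions array just described. The placeholder endpoint, key, project ID, and image name are assumptions, as before:

```python
from azure.cognitiveservices.vision.customvision.prediction import (
    CustomVisionPredictionClient,
)
from msrest.authentication import ApiKeyCredentials

# Same placeholder values as before, taken from the Custom Vision Portal.
credentials = ApiKeyCredentials(
    in_headers={"Prediction-key": "<your-prediction-key>"}
)
predictor = CustomVisionPredictionClient(
    "https://<your-resource>.cognitiveservices.azure.com", credentials
)

with open("landmark.jpg", "rb") as image:
    results = predictor.classify_image(
        "<your-project-id>", "Latest", image.read()
    )

# Each prediction carries the probability score and the tag name/ID;
# object detection models also return a bounding box per prediction.
for prediction in results.predictions:
    print(f"{prediction.tag_name}: {prediction.probability:.2%}")
```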

Another option is to export these models to the Edge, such as to an iOS, Android, or IoT device, or even to a container. This provides the following advantages. First, the model runs locally on the device, with no need to make a REST call; this is important if your device frequently operates offline, such as a drone flying over a farm with little network connectivity. Second, it will most likely respond faster, again because no REST API calls are needed, which should give a better user experience on an iPhone or Android phone, for example. Finally, it might be slightly cheaper, although Custom Vision is quite a cheap Azure service, so cost will probably not be a major consideration here.

But wait a second: if running on the Edge is potentially faster, cheaper, and more independent, why not always use an Edge option instead? Well, there are disadvantages as well. To run Edge models, you need to use one of the compact domains. These domains are optimized for faster responses on smaller devices, which might make your models less accurate. If accuracy is your main goal, then you should use the REST API instead. It might also be the case that your images are already in the Cloud.

For example, assume an architecture where images are staged in Blob Storage and processed in batches by an Azure Function or Azure Data Factory. Because the images are already in an Azure data center, Cloud-to-Cloud communication will probably be the best option anyway. There are six options available when exporting models. The first is TensorFlow, which is the format for Android devices. You can also export to TensorFlow.js, which allows you to call the model from JavaScript frameworks such as React, Angular, or Vue. This lets you use the model from a website, and it is compatible with both iOS and Android.

Then we have CoreML, which is the preferred format for iOS devices. Another option compatible with both Android and iOS, as well as Windows ML, is ONNX, the Open Neural Network Exchange. Also, Microsoft has released the Vision AI Developer Kit, which runs on a Qualcomm device and considerably simplifies the deployment of vision solutions. You can even export to OpenVINO, a deep learning toolkit developed by Intel that is optimized for their processors.

Finally, you can also export the models to a Docker container, for Windows, Linux, and ARM architectures. As you can see, there's no shortage of options if you want to deploy your vision solutions to the Edge.
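Exports can also be triggered programmatically rather than through the portal. The following sketch uses the Python training SDK's export_iteration and get_exports calls; the endpoint, key, IDs, and polling interval are illustrative assumptions, and the iteration must have been trained with a compact domain:

```python
import time

from azure.cognitiveservices.vision.customvision.training import (
    CustomVisionTrainingClient,
)
from msrest.authentication import ApiKeyCredentials

# Placeholders -- use your own training endpoint, key, and IDs.
credentials = ApiKeyCredentials(in_headers={"Training-key": "<your-training-key>"})
trainer = CustomVisionTrainingClient(
    "https://<your-resource>.cognitiveservices.azure.com", credentials
)

PROJECT_ID = "<your-project-id>"
ITERATION_ID = "<your-iteration-id>"  # must use a compact domain

# Request a TensorFlow export; other platform values include
# "CoreML", "ONNX", "DockerFile", "VAIDK", and "OpenVino".
trainer.export_iteration(PROJECT_ID, ITERATION_ID, platform="TensorFlow")

# Poll until the export finishes, then print the download URI.
export = trainer.get_exports(PROJECT_ID, ITERATION_ID)[0]
while export.status == "Exporting":
    time.sleep(10)
    export = trainer.get_exports(PROJECT_ID, ITERATION_ID)[0]

print(export.status, export.download_uri)
```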

Let's now see how to consume and export these models for production use.

About the Author

Emilio Melo has been involved in IT projects in over 15 countries, with roles ranging across support, consultancy, teaching, project and department management, and sales, mostly focused on Microsoft software. After 15 years of on-premises experience in infrastructure, data, and collaboration, he became fascinated by Cloud technologies and the incredible transformation potential they bring. His passion outside work is to travel and discover the wonderful things this world has to offer.