Implementing Image Classification by Using the Azure Custom Vision Service

DEMO: Evaluating a Custom Vision Model

Overview
Difficulty
Intermediate
Duration
49m
Students
24
Ratings
5/5
Description

This course explores the Azure Custom Vision service and how you can use it to create and customize vision recognition solutions. You'll get some background info on what the service is before looking at the various steps for creating image classification and object detection models, uploading and tagging images, and then training and deploying your models.

Learning Objectives

  • Use the Custom Vision portal to create new Vision models
  • Create both Classification and Object Detection models
  • Upload and tag images according to your requirements
  • Train and deploy the configured models

Intended Audience

  • Developers or architects who want to learn how to use Azure Custom Vision to tailor a vision recognition solution to their needs

Prerequisites

To get the most out of this course, you should have:

  • Basic Azure experience, at least with topics such as subscriptions and resource groups
  • Basic knowledge of Azure Cognitive Services, especially vision-related services
  • Some developer experience, including familiarity with terms such as REST API and SDK
Transcript

So we're back here in the Custom Vision portal, and our model was actually quite successful, detecting the difference between the Empire State Building and the Eiffel Tower with 100% certainty for Precision, Recall, and AP. That's pretty good. If I scroll down the page, I can also see the individual scores for each tag, as well as the negative tag. This helps me identify whether there are one or two classes that I might need to work on a bit more.

Note that these bars are in red. If I mouse over one of them, Custom Vision tells me that, ideally, I should have at least 50 images per tag for optimal performance. At the top left of the page, notice the probability threshold, which is currently at 50%. Let's bring this all the way up to 100%, and you can see that this brings down Recall, as the model will no longer detect a tag unless it's 100% confident of it. Theoretically, this should also increase my Precision metric, but as it was already at 100%, there's no further improvement in this case. Likewise, if I bring it all the way down to zero, Precision becomes very low. But 50% seems to be the sweet spot for this model, so let's put it back to that.
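The precision/recall trade-off described above can be sketched in a few lines of Python. This is an illustrative toy, not the Custom Vision implementation: the prediction list is made-up sample data, and a tag counts as "detected" only when its probability meets the threshold.

```python
# Toy data: (predicted probability, whether the tag is actually correct).
# These values are hypothetical, chosen only to show the threshold effect.
predictions = [
    (0.95, True), (0.90, True), (0.80, False),
    (0.60, True), (0.40, False), (0.30, True),
]

def precision_recall(preds, threshold):
    """Precision and recall when only predictions >= threshold count as positive."""
    tp = sum(1 for p, actual in preds if p >= threshold and actual)
    fp = sum(1 for p, actual in preds if p >= threshold and not actual)
    fn = sum(1 for p, actual in preds if p < threshold and actual)
    precision = tp / (tp + fp) if tp + fp else 1.0  # nothing predicted: vacuously precise
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

for t in (0.0, 0.5, 1.0):
    p, r = precision_recall(predictions, t)
    print(f"threshold={t:.0%}  precision={p:.2f}  recall={r:.2f}")
```

Raising the threshold discards low-confidence detections, so recall falls while precision tends to rise; lowering it to zero accepts everything, maximizing recall at the cost of precision, which matches what the slider shows in the portal.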

If I want to test this model, I can just click here on the "Quick Test" button. This opens a new dialog box, where I can either upload a picture or use a URL. As you've already seen the upload option many times, let's paste the URL of a picture of the Empire State Building from its Wikipedia article and click the arrow to test it. The model returned "Empire State Building" with 99.9% probability, which is pretty good. I can also alternate between iterations of the model, in case I have several, so that I can understand whether or not my latest changes are improving the results. Let's close this dialog box and switch to the Predictions tab.

Here we'll find all the predictions made by this model, whether through the portal or the API. And here's the Empire State Building picture that we just uploaded and tested. What's fantastic about Custom Vision is that it allows me to use these past predictions to improve performance.

Let's do this: I'll click on the image, add "Empire State" as the tag, and click on "Save and close." The next time this model is trained, this picture will be used to further improve the results. Pretty cool, huh? I'm now happy with the results, so it's time to publish my model. Let's switch back to the Performance tab and click the Publish button at the top. I'll name my model "Latest," make sure my prediction resource is selected, and then click on Publish. After a few seconds, the Publish button switches to "Unpublish," and a "Prediction URL" button is now enabled for me, which I can use to consume my model through an endpoint, see? Let's see how to consume and export these models next.
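To give a feel for what that "Prediction URL" button hands you, here's a hedged sketch of assembling the classify-by-URL request for a published model. It assumes the v3.0 prediction REST endpoint shape; the resource endpoint, project ID, key, and image URL below are all placeholders, not values from the demo.

```python
import json

def build_classify_request(endpoint, project_id, published_name,
                           prediction_key, image_url):
    """Build the URL, headers, and JSON body for a classify-by-URL call
    against a published Custom Vision iteration (v3.0 prediction API)."""
    url = (f"{endpoint}/customvision/v3.0/Prediction/{project_id}"
           f"/classify/iterations/{published_name}/url")
    headers = {
        "Prediction-Key": prediction_key,   # from the prediction resource
        "Content-Type": "application/json",
    }
    body = json.dumps({"Url": image_url})
    return url, headers, body

url, headers, body = build_classify_request(
    "https://westus2.api.cognitive.microsoft.com",   # placeholder resource endpoint
    "00000000-0000-0000-0000-000000000000",          # placeholder project ID
    "Latest",                                        # the name used when publishing
    "<prediction-key>",                              # placeholder key
    "https://example.com/empire-state.jpg",          # placeholder image URL
)
# POST this with any HTTP client, e.g.:
#   requests.post(url, headers=headers, data=body)
# The response JSON contains a "predictions" list of tag names and probabilities.
```

Microsoft also ships an official SDK (`azure-cognitiveservices-vision-customvision`) that wraps this same endpoint, but seeing the raw request makes it clear what the portal's "Prediction URL" dialog is showing you.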

About the Author
Emilio Melo
Instructor
Students
1045
Courses
3

Emilio Melo has been involved in IT projects in over 15 countries, with roles ranging across support, consultancy, teaching, project and department management, and sales, mostly focused on Microsoft software. After 15 years of on-premises experience in infrastructure, data, and collaboration, he became fascinated by cloud technologies and the incredible transformation potential they bring. His passion outside work is to travel and discover the wonderful things this world has to offer.