This course explores strategies for secure IoT device connectivity in real-world edge environments, specifically how the Azure IoT Edge gateway can accommodate offline, intermittent, and legacy environments by means of gateway configuration patterns. We then look at implementations of artificial intelligence at the edge in a variety of business verticals by adapting a common IoT reference architecture to accommodate specific business needs.
Finally, we conclude with techniques for implementing artificial intelligence at the edge to support an intelligent video analytics solution. We walk through a project that integrates Azure IoT Edge with an NVIDIA DeepStream SDK module and a custom object detection model built using CustomVision.AI, creating an end-to-end solution that allows object detection telemetry to be visualized in Azure services like Time Series Insights and Power BI.
Learning Objectives
- Understand the strategies for secure IoT device connectivity in real-world edge environments
- Learn how a common IoT solution architecture can be adapted to a variety of business verticals
- Learn techniques for implementing artificial intelligence at the edge to support an intelligent video analytics IoT solution
Intended Audience
This course is intended for anyone looking to improve their understanding of Azure IoT and its benefits for organizations.
Prerequisites
To get the most out of this course, you should already have a working knowledge of Microsoft Azure.
Hello and welcome to the main presentation for Get to Solutioning: Strategy & Best Practices when Mapping Designs from Edge to Cloud. My name is Paul DeCarlo, Principal Cloud Advocate at Microsoft and Lead for IoT Advocacy within our developer relations group. In this session, we will cover strategies for secure IoT device connectivity in real-world edge environments, how a common IoT solution architecture can be adapted to a variety of business verticals, and an introduction to techniques for implementing artificial intelligence at the edge to support an intelligent video analytics IoT solution. Let's go ahead and get started.
Secure transmission of data from devices in the field can be challenging depending on the business environment. What do you do if you need to create an IoT solution that operates on an offshore oil rig? How do you securely transmit data from devices that may have been installed over a decade ago? The good news is that service offerings from Microsoft can accommodate virtually any type of edge environment, regardless of the network connectivity constraints or the legacy sensors that may be employed. Let's take a closer look at how this is all possible.
Security in IoT solutions is paramount. The Internet of Things implies the presence of devices, often with access to mission-critical controls, sensor data, and potentially real-time video in computer-vision-based AI at the edge scenarios. Securing IoT solutions requires mitigation across all relevant data pathways, beginning with the device itself and its communication layer to both internal and external services, which typically involves connection to a cloud-hosted environment, and of course the cloud environment itself. Microsoft IoT services take security into account in all layers of the IoT solution, beginning with the device, its pathway to the cloud, and within the Azure cloud itself.
Microsoft's Azure IoT Edge is a product offering that can assist in securing at the device level and during transport of telemetry to the cloud. This is a runtime service built on top of container technology, where you define containerized workloads known as modules in the cloud that can then be deployed down to devices in the field. These modules support a variety of different languages, for example Python, Node.js, .NET Core, Java, and C as first-class targets. What's really nice about the service is that the runtime allows you to transmit your data to the cloud using the low-latency AMQP and MQTT transport protocols. It can also be configured to either run on a device or act as a gateway to support additional downstream devices that speak to the gateway, and it can allow you to operate in offline, air-gapped, or intermittent network environments. So think about a use case where you need to deploy an IoT device on a shipping container that goes out to sea and perhaps loses internet connectivity somewhere along the way.
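To make this concrete, here's a minimal sketch of what a Python IoT Edge module's telemetry loop might look like using the azure-iot-device SDK; the simulated reading, the "output1" route name, and the 10-second interval are illustrative placeholders, not part of the course materials.

```python
# Minimal sketch of a Python IoT Edge module telemetry loop (azure-iot-device SDK).
# The simulated reading, "output1" route name, and 10-second interval are placeholders.
import json
import random
import time

from azure.iot.device import IoTHubModuleClient, Message

client = IoTHubModuleClient.create_from_edge_environment()  # uses the Edge runtime's credentials
client.connect()

try:
    while True:
        body = json.dumps({"temperature": 20 + random.random() * 5})
        msg = Message(body)
        msg.content_type = "application/json"
        msg.content_encoding = "utf-8"
        client.send_message_to_output(msg, "output1")  # routed onward by the edgeHub
        time.sleep(10)
finally:
    client.shutdown()
```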
With Azure IoT Edge, you can still obtain telemetry and metrics, producing real-time insights. Then, when you eventually regain network connectivity, by, say, docking at a port, you can offload those cached results to the cloud for archival purposes, or perhaps reuse them in training a machine learning model in the cloud. The Azure IoT Edge runtime supports Linux on x64, ARM32, and ARM64 platforms in addition to Windows on x64. The entire project is open source and available on GitHub for your perusal.
IoT Edge devices are all backed in the cloud by a service known as the Azure IoT Hub. This is the high-throughput ingestion point that is capable of receiving and forwarding device telemetry messages to other services within Microsoft Azure. Devices can connect directly to the hub if they employ the MQTT, AMQP, or HTTP communication protocols; however, oftentimes devices connect to the IoT Hub by means of a field gateway employed at the site of the edge environment. This can be accomplished via configuration options to support secure proxying from the internal network to the cloud or to receive telemetry from devices that do not support direct connectivity to an Azure IoT Hub.
The IoT Hub also provides a number of additional functions in the form of secure registration for downstream devices and the ability to command and control those devices using cloud-to-device messaging, direct method invocation, or twins, which can enforce desired state based on properties present in an IoT Edge module's application code.
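As a hedged illustration of what command and control looks like from the service side, the azure-iot-hub Python SDK can invoke a direct method on a device; the device ID, method name, and payload below are hypothetical.

```python
# Sketch: invoking a direct method on a device from a back-end service
# (azure-iot-hub SDK). The device ID, method name, and payload are hypothetical.
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod

IOTHUB_CONNECTION_STRING = "HostName=...;SharedAccessKeyName=service;SharedAccessKey=..."

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)
method = CloudToDeviceMethod(
    method_name="restartSensor",
    payload={"delaySeconds": 5},
    response_timeout_in_seconds=30,
)
response = registry_manager.invoke_device_method("my-edge-device", method)
print(response.status, response.payload)  # both are supplied by the device's handler
```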
Let's take another look at the Azure IoT reference architecture that was introduced in previous sessions. This is a common architectural pattern that describes IoT solutions built with Azure IoT services. We've divided the diagram into sections to show where the individual components would exist within a business environment. We are going to focus on the data pathway from edge devices to the cloud, specifically on how we can employ a gateway to ensure secure transmission of telemetry from the internal environment to the cloud. This is an important consideration because oftentimes we must account for network constraints at the edge, in addition to legacy systems that may not natively support communicating with the cloud directly.
The first configuration we will look at is the Transparent Gateway Pattern. In this configuration, the Azure IoT Edge gateway acts as a pass-through for all downstream edge devices and is capable of forwarding received messages to an Azure IoT Hub hosted either on premises or in the cloud. This pattern is useful in environments where downstream devices may be required to proxy through a single secure endpoint before their data can be transmitted outside of the internal network.
The IoT Edge gateway brokers the actual connection to an Azure IoT Hub by forwarding device telemetry captured by the local gateway. This strategy is also beneficial in environments with limited bandwidth, for example, by allowing you to limit the telemetry forwarded to the cloud using aggregation or prioritization, as opposed to forwarding each and every message received by the gateway. The offline features of Azure IoT Edge can also be leveraged with this pattern. For example, you can continue to govern workloads running on IoT Edge devices in the absence of external network connectivity, and you can store and forward telemetry in situations where the network connection may be intermittent, as in the case of an offshore vessel.
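For a sense of what "transparent" means in practice, here's a sketch of a downstream leaf device connecting through the gateway: the only change from a direct connection is the GatewayHostName segment of the connection string and trusting the gateway's root CA. The hostnames, key, and certificate path are placeholders.

```python
# Sketch: a downstream device sending telemetry through a transparent IoT Edge
# gateway. The hostnames, key, and CA cert path below are placeholders.
from azure.iot.device import IoTHubDeviceClient, Message

CONN_STR = (
    "HostName=my-hub.azure-devices.net;DeviceId=downstream-sensor-01;"
    "SharedAccessKey=<key>;GatewayHostName=my-edge-gateway.local"
)

# The device must trust the root CA that signed the gateway's server certificate.
with open("iotedge-root-ca.cert.pem") as cert_file:
    root_ca = cert_file.read()

client = IoTHubDeviceClient.create_from_connection_string(
    CONN_STR, server_verification_cert=root_ca
)
client.connect()
client.send_message(Message('{"flowRate": 3.2}'))  # arrives in IoT Hub as this device's telemetry
client.shutdown()
```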
Next we are going to look at the Protocol Translation Pattern. This pattern allows for translation of legacy data protocols into a secure endpoint for transmission to the cloud. The configuration is often used in environments that employ pre-existing sensors that aren't natively capable of supporting the MQTT, AMQP, or HTTP transport protocols. For example, many manufacturing and industrial plants employ sensors which communicate via Object Linking and Embedding for Process Control Unified Architecture (OPC UA for short), while smart cities may employ the Building Automation and Control Network, or BACnet. Microsoft offers a variety of modules for enabling connection of these and other devices to an IoT Edge gateway by translating the original protocol into a format understood by the IoT Edge gateway. This allows you to retrofit your existing environments to support modern cloud offerings. Note that this pattern does not supply an identity to the legacy devices being translated; however, we will see how that can be done in the next slide.
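A protocol translation module boils down to: read from the legacy protocol, reshape the data, and forward it. Here's a sketch under stated assumptions: the legacy read is stubbed out (a real module would call into an OPC UA or BACnet client library), and the node ID and output route name are invented for illustration.

```python
# Sketch of a protocol translation module: poll a legacy sensor, reshape the
# reading as JSON, and forward it upstream. read_legacy_sensor() stands in for
# a real OPC UA/BACnet client call; the node ID and route name are invented.
import json
import time

from azure.iot.device import IoTHubModuleClient, Message

def read_legacy_sensor():
    # A real implementation would call an OPC UA or BACnet client here.
    return {"nodeId": "ns=2;s=Line1.Temperature", "value": 72.4}

client = IoTHubModuleClient.create_from_edge_environment()
client.connect()

while True:
    reading = read_legacy_sensor()
    msg = Message(json.dumps({"sensor": reading["nodeId"], "temperature": reading["value"]}))
    msg.content_type = "application/json"
    client.send_message_to_output(msg, "translated")  # the edgeHub routes this upstream
    time.sleep(10)
```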
The next pattern is known as the Identity Translation Pattern. At its core, this pattern is very similar to protocol translation. The configuration allows for translation of legacy data protocols like OPC UA and BACnet, with the key difference that downstream devices are treated as unique entities, which allows for secure command and control at the device level. With this configuration, you can continue to produce telemetry from legacy protocols and interact with those devices using features like cloud-to-device messaging, direct methods, and device or module twins to track state changes in your application against a value stored in the cloud.
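On the device side, twin interaction in the Python SDK looks roughly like the following sketch; the "telemetryIntervalSeconds" property name is hypothetical, not something defined by the course materials.

```python
# Sketch: reacting to a desired-property change and echoing it back as a
# reported property (azure-iot-device SDK). "telemetryIntervalSeconds" is a
# hypothetical property name.
from azure.iot.device import IoTHubModuleClient

client = IoTHubModuleClient.create_from_edge_environment()
client.connect()

def on_desired_patch(patch):
    interval = patch.get("telemetryIntervalSeconds")
    if interval is not None:
        # Apply the new state, then report it so the cloud-side twin converges.
        client.patch_twin_reported_properties({"telemetryIntervalSeconds": interval})

client.on_twin_desired_properties_patch_received = on_desired_patch
```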
To recap, the transparent gateway is useful for environments where downstream devices proxy through a single secure endpoint before they are transmitted to the cloud. This strategy can be beneficial in environments with limited bandwidth by governing messaging in an aggregated format. The pattern is also useful for ensuring the security of data leaving an internal network. The protocol translation pattern can allow for translation of legacy data protocols into a secure endpoint for transmission to the cloud. The configuration is often used in environments that employ pre-existing sensors that aren't capable of supporting MQTT, AMQP or HTTP.
Finally, the identity translation pattern is very similar to the protocol translation pattern in that it allows for translation of legacy data protocols as well; however, it treats those downstream devices as unique entities, which allows for secure command and control at the device level. The Azure IoT Edge gateway patterns have shown that IoT device connectivity is possible within even the most challenging environments.
We're now going to look at some industry-specific IoT solutions from an architectural perspective and demonstrate what the solutions might look like as an end-to-end IoT experience for end users. IoT solutions are relevant to every industry that can benefit from increased value, reduced waste, or enhanced procedures via the introduction of real-time insights and automated systems that react to those insights.
We would now like to show you what some of those solutions might look like by taking a closer look at retail and workplace safety scenarios. Both of the demos that we will showcase in this section are available online and can serve as an excellent learning resource or even provide a baseline to re-implement and create solutions of your own. Let's begin by reviewing the Azure IoT reference architecture and see exactly what it looks like when applied to real world solutions.
As you've seen, IoT solutions can address a variety of different use cases. But it is interesting to note that in all of the examples, the pathway from device to the cloud to line-of-business applications is very similar. There are, of course, variations in the specific technologies involved when encountering these solutions in the real world, but the workflow of using things, or devices, to capture and produce insights which lead to action is common to all IoT solutions. Once you understand this fundamental concept, you can apply it to a host of business scenarios to create relevant IoT solutions that make use of real-time insights.
Extending this concept further, we can produce a common architecture for IoT solutions built on Microsoft Azure by employing service offerings designed specifically to address this common workflow. Our things are really IoT devices which communicate using the secure Azure IoT device SDKs, either directly to an IoT Hub or by means of an IoT Edge gateway. Once data arrives in the Azure cloud, we can begin to process and operate on the insights contained within, using services like Stream Analytics to filter relevant information while the data is in flight. This allows us to extract time-critical insights into warm path storage systems for immediate use, or offload data into cold storage systems for archival purposes, or perhaps to facilitate a machine learning function in the cloud. Once our data is in the cloud, scalable integration of that data into line-of-business applications enables the ability to take action on the data produced by our devices.
Now, in the real world, solutions can involve many moving parts, but the core of these solutions is always built around the Azure IoT Hub whenever Microsoft Azure is involved. The IoT Hub is the secure central point of ingestion for devices and provides a direct line of output into additional cloud services: for example, serverless offerings like Azure Functions, which are capable of post-processing IoT data; web apps that can consume telemetry for visualization or reporting; Azure Maps, which can allow for custom interactions that map to real-world environments; Time Series Insights, a service that can allow for viewing data over time; and Azure Stream Analytics, which can operate on our data while it is in flight to aggregate and produce additional insights as the data arrives.
The key concept that we want to leave you with is that once data arrives in the Azure cloud by means of an Azure IoT Hub, it opens the door for integration into a wide variety of service offerings that are designed to integrate directly with the output from an Azure IoT Hub. Solution builders can also benefit from the IoT Central software-as-a-service offering, which works out of the box with all of the aforementioned services. IoT Central is essentially a turnkey, production-ready IoT solution that provides a high-throughput backend based on an IoT Hub, as well as a front-end service that allows you to visualize data, customize dashboards, provide multi-tenancy features, create rules with custom triggers, and export your data to additional services like Azure Storage and Event Hubs. We will briefly cover what this looks like in the upcoming demo.
Before we get started, let's go ahead and take a quick look at the architecture for our retail on the edge solution and note how it applies to the Azure IoT reference architecture. This is a multi-layered application that begins with a mobile app; the cellular phone that it runs on is treated as an IoT device. The app provides customers the ability to place orders from their device, but also allows us to know the customer's location, which can be very useful for knowing when a customer may be in proximity to the store, for example, to inform frontline workers that an order should be prepared. It can also allow the retailer to provide customers with directions to the store from their current location. This feature may be particularly useful in malls, which may have multiple levels that can be difficult to navigate.
We also employ artificial intelligence at the edge with the use of a camera that is pointed at a stocking shelf, which is capable of counting the number of items currently in stock. This information can then be used to inform frontline workers that an item may be in need of immediate replenishment at the store level, and it can offer data to the store manager to assist them in making a purchase order to obtain items that are currently in high demand. This data can also be stored to be used later on for forecasting demand and perhaps optimizing future reordering schedules.
Let's walk through this solution interactively and demonstrate what this architecture might look like in action. The following demo, Intelligent Retail on the Edge, is available on GitHub at aka.ms/iot50/retailontheedge and may be reused for learning or as a baseline for your own custom solutions. Let's go ahead and take a look at it in action.
The first thing we're going to do is navigate to our Azure subscription and head over to the resource group that contains all of the services that comprise our retail demo IoT application. You'll notice here that our solution is comprised of various different Azure services. These include a storage account, which is used for cold storage of our telemetry from our IoT devices, which we'll later use to facilitate a machine learning studio project, and also a number of different serverless logic apps which will be used to transform our insights into events that we can share with frontline workers so that they can respond to activity happening within the retail environment. And of course, you'll also notice that we're utilizing some Azure Map services here as well, and those will be seen later on in our mobile application. Let's go ahead and jump into the IoT central application that's hooked up and taking in all of the telemetry from the IoT devices that are placed throughout the Contoso Market.
You'll notice that we're able to get temperature and humidity readings from devices in various different zones, and we've also got a section over here that indicates a count. Now, what this is doing is actually looking at a live video stream of inventory that's currently on our shelves. And you can see this is a live IoT Edge device that's running a module that's performing that AI inference at the edge and producing the telemetry that we're then visualizing within IoT Central. Let's go ahead and simulate that we're a user who's interacting with the Contoso Market mobile application.
I'm gonna go ahead and select an item here, the canned beans. We'll go ahead and add those to our cart and proceed to checkout. Now, as you might expect with any sort of mobile application, once we've placed an order there's probably a database decrementing inventory and that sort of thing, but remember, we've also got that AI camera that's pointed at our stocking shelf. Since we placed that order, our frontline workers know that they need to go pick those items. And once they do that, that's gonna change what's currently in the video stream. So you'll notice that the items we ordered have now been removed from the shelf.
Now, additionally what's happening here is that our AI inference is still running. And once it produces that notification that the quantity of items has changed, it will go ahead and fire off a trigger to call our logic app or our serverless application. And what this will do, it will actually notify our frontline workers through a Teams channel if it is noted that a particular item is experiencing low inventory.
So you'll see here's our flow. And down here, you can see the flashing notification that I'm getting in Teams. Let's check that out. As I mentioned, our devices have notified us that canned beans are currently low in stock on aisle 3. Well, this is excellent. Let's go ahead and pretend that we're the user who placed this order, and now we wanna go to the Contoso Market to pick it up.
So within our application, it's gonna notify us that it sees that we're nearby. It says, "It looks like you're close to the mall. Would you like directions to the store?" Go ahead and click the notification, which would bring us into our application. And you'll see it sees us currently within the mall on floor one. And what's really nice about Azure Maps is that they support multiple levels. So you can see here I can go to floor number two, and we know that the Contoso Market is located up here in stall number 252.
Now, another interesting thing that we can do here is that once we've detected that our customer is within the GPS area that surrounds our marketplace, we can go ahead and fire off another alert. This time I'm gonna use a logic app to inform our frontline workers that we've got ourselves a customer who has an open order and we should go ahead and prepare it for them. You'll see down here in my Teams instance, again, we've signaled our frontline workers that the customer has arrived and that they're ready to pick up their order. This is excellent. It provides a great customer experience and allows us to streamline getting our goods to our customers.
Now, it's not just the frontline workers who benefit from this. The retail manager has set up an inventory application within Microsoft Power Apps, and this allows them to very easily come in here and see that a particular item might be nearing out of stock. We can go ahead and jump right in, make a new order, and replenish that inventory.
Now, as I mentioned, all of this data is being stored within an Azure storage account, and we're using that to feed an Azure Machine Learning studio project where we've set up an interactive Jupyter notebook. With this, we can go ahead and create a live workbook that will allow us to view all of the sales of items, for example those canned beans, for the year 2019. We can then go ahead and forecast our sales for the following year. So we can look at all of that data, and over time it again becomes valuable, allowing us to forecast future purchases for items that may be experiencing demand.
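The notebook itself isn't reproduced in this course, but a minimal version of the idea, aggregating historical sales and projecting a trend, might look like the sketch below; the CSV file name, its columns, and the simple linear model are all assumptions.

```python
# Sketch: aggregate 2019 sales events by month and project a simple linear
# trend 12 months ahead. The file name, columns, and model choice are
# assumptions; the actual notebook may use a richer forecasting method.
import numpy as np
import pandas as pd

df = pd.read_csv("canned_beans_sales_2019.csv", parse_dates=["timestamp"])
monthly = df.set_index("timestamp").resample("M").size()  # sales count per month

x = np.arange(len(monthly))
slope, intercept = np.polyfit(x, monthly.values, 1)  # fit a straight-line trend

future_x = np.arange(len(monthly), len(monthly) + 12)
forecast = slope * future_x + intercept
print(np.round(forecast))  # projected monthly sales for the following year
```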
Now, again, I wanna remind everyone that this demo is available online on GitHub, and you can pull this down and either use it for your own learning to understand all of the moving pieces here, or perhaps use it as a baseline for an actual implementation in real projects. Now let's go ahead and take a look at the architecture for a workplace safety solution and note how it applies to the Azure IoT reference architecture.
In this scenario, workers who enter the work site are scanned by a camera, which can be used to ensure that proper safety gear is being worn. If it's not, we can produce a notification to alert the current shift supervisor of the potential safety violation. In addition, we track workers' locations by use of their cell phones, allowing us to determine where they are in relation to defined geo-fences. This gives us the ability to alert the supervisor when a worker is found in a location that they should not be in. We can then collect all of these results to produce a Power BI report which summarizes all safety violations that have occurred at the work site over time.
Let's walk through this solution interactively and demonstrate what this architecture might look like in action. The following demo, Workplace Safety, is available on GitHub at aka.ms/iot50/workplacesafety and may be reused for learning or as a baseline for your own custom solutions. Let's go ahead and take a look at it in action.
To begin, we'll head to our cognitive services Custom Vision AI project that allows us to create the object detection model that is in use at the workplace safety environment to ensure that employees are wearing the appropriate safety gear upon entering the work site. You'll notice that in our sample project, we've gone ahead and uploaded a number of images containing individuals wearing masks. For each of these sample images, we've gone ahead and tagged the area that surrounds the object that we wish to identify in our object detection model.
Once we've done this for enough samples, we're then able to go ahead and begin training our base object detection model. You'll notice there are a couple of options for doing this: you can do quick training, which will allow you to very quickly develop a model, or employ compute resources in the cloud provided through the advanced training service to train your model. Once the model is created, it will report a few metrics: its precision, its recall, and its mean average precision.
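The portal clicks shown here also have SDK equivalents; the following is a sketch of kicking off a training run and reading back those metrics with the Custom Vision Python SDK, with the endpoint, key, and project ID left as placeholders.

```python
# Sketch: trigger a Custom Vision training run and read back its metrics
# (azure-cognitiveservices-vision-customvision SDK). Endpoint, key, and
# project ID are placeholders.
import time

from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://<region>.api.cognitive.microsoft.com"
credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

iteration = trainer.train_project("<project-id>")
while iteration.status != "Completed":
    time.sleep(5)  # poll until training finishes
    iteration = trainer.get_iteration("<project-id>", iteration.id)

perf = trainer.get_iteration_performance("<project-id>", iteration.id, threshold=0.5)
print(perf.precision, perf.recall, perf.average_precision)
```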
Once we know that our model is accurate enough for what we need to employ it for, we can then export it to a variety of different targets. For example, your model can run on iOS 11 or Android; you can export to the popular ONNX format, which is cross-platform and compatible with Windows Machine Learning; a Dockerfile, which contains an Azure IoT Edge module that's ready to rock and roll and begin doing object detection; or a model that can run on our Vision AI DevKit hardware.
Let's go ahead and test out the model we just created. We'll head over to Quick Test, and from here it'll allow us to enter an image URL. I've got a quick example of an individual wearing a face mask that was pulled up through a Bing image search. We'll go ahead and copy the image link and paste that in. And there you can see our model has been employed against our sample image, and it's appropriately detecting the region that surrounds the mask. Excellent.
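The same quick test can be run programmatically; here's a sketch using the Custom Vision prediction client, with the endpoint, key, project ID, published iteration name, and image URL all as placeholders.

```python
# Sketch: the programmatic equivalent of the Quick Test above, using the
# Custom Vision prediction client. Endpoint, key, project ID, published
# iteration name, and image URL are placeholders.
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

credentials = ApiKeyCredentials(in_headers={"Prediction-key": "<prediction-key>"})
predictor = CustomVisionPredictionClient("https://<region>.api.cognitive.microsoft.com", credentials)

results = predictor.detect_image_url("<project-id>", "<published-iteration>", url="<image-url>")
for p in results.predictions:
    if p.probability > 0.5:  # only show confident detections
        box = p.bounding_box
        print(f"{p.tag_name}: {p.probability:.0%} at ({box.left:.2f}, {box.top:.2f})")
```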
Now that we've got that portion figured out, let's go ahead and look at what we might need to do in order to set intelligent geo-fences at the work site to ensure that our workers are in the appropriate areas that they need to be in. I'll start this out by going ahead and creating myself a nice avatar that I can bring into this environment. Now that I've added myself to the map, I can go ahead and move myself around.
Let's go ahead and set the geo-fence. Perhaps we don't want any of our workers heading down here to Skyknoll Lane. You'll notice that I can create a geo-fence of really any geometry that I wish to choose, so this can be any sort of custom outline to suit our map; it's very easy to create whatever shape you need. And now I'll go ahead and simulate this worker entering that newly created geo-fence zone. You'll notice that upon entering, we receive a notification that mentions that Paul has entered the construction zone without authorization.
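The demo itself uses the Azure Maps geofencing service, but the underlying check is a point-in-polygon test; here's a self-contained sketch of that test, where the zone coordinates and the worker's position are invented for illustration.

```python
# Sketch: the point-in-polygon test that underlies a geo-fence check (the demo
# uses the Azure Maps geofencing service). Zone coordinates and the worker's
# position are invented for illustration.
def point_in_geofence(lon, lat, polygon):
    """Ray-casting test: toggle on each polygon edge that a ray heading east crosses."""
    inside = False
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        if (y1 > lat) != (y2 > lat):  # edge straddles the point's latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical restricted zone as (longitude, latitude) vertices.
zone = [(-95.370, 29.760), (-95.365, 29.760), (-95.365, 29.764), (-95.370, 29.764)]
if point_in_geofence(-95.367, 29.762, zone):
    print("ALERT: worker entered a restricted zone without authorization")
```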
So not only have we created an immediate alert that could perhaps trigger, say, a light or a siren at the work site, we've also taken this data and shipped it up into a report that contains all safety violations that have occurred throughout all of our locations across North America. And you'll see here we can see which devices these occurred at, which days they occurred on, and which device was the one that actually captured each safety violation.
Let's go ahead and click into this circle here surrounding Houston and see if we can find the incident that was just created. And you'll see there we have it: Paul created a geo-fence safety violation at the following latitude and longitude points. Now, we'll go into a bit more detail on how to use Power BI in conjunction with IoT data in our next demo. But for now, just understand that if you liked this presentation and you'd like to leverage this demo for your own usage, perhaps to learn more or as a baseline for a real-world implementation, the solution is available on GitHub and will be linked in the following slide.
We've covered a high-level overview of Azure IoT solution concepts. Now we are going to deep dive into the area of AI at the edge solutions using the latest concepts in computer vision. To demonstrate this, we will explore a real-world project developed using NVIDIA embedded hardware and Azure IoT services to produce a generic solution that consumes multiple video sources to produce insights at the edge, and then in the cloud, by means of a custom object detection model. Let's take a quick look at the technical components that make up an intelligent video application at the device level.
Imagine that we are interested in developing an intelligent security system, one that can track vehicles, people, and pets around the home. To accomplish this, we can employ video feeds as sensor input into an edge device that is capable of decoding frames as input to an object detection service. As you might expect, telemetry from this process can oftentimes be continuous, so there's definitely a need to filter results here.
Going off of the services that we've identified in this presentation, how might we string together service offerings to accomplish this task? To obtain frames, we can leverage an IoT Edge module and pair it with an object detection model from Custom Vision AI. From there, we can filter detection results using Azure Stream Analytics before we publish those results to the cloud. More specifically, what we're going to do is employ the NVIDIA DeepStream SDK offering from the Azure Marketplace, which will act as our main ingestion point for incoming feeds and also perform inference processing on them as they are received. This module seamlessly integrates with Azure IoT Edge and will allow us to easily operate on telemetry results produced by the object detection model. And as we've discussed, the telemetry produced by the system, since it will be in real time, could be immense.
So the employment of a filtering mechanism is a necessary requirement. Using Stream Analytics at the edge, we can parse the native output from the DeepStream SDK module and perform operations to remove duplicates, producing a highly accurate summarized result based on input from that event stream. Now, because our solution is built on Microsoft Azure, it will naturally align to the Azure IoT reference architecture. Our IoT device will be instrumented with the Azure IoT Edge runtime, which will allow direct communication from the device to an Azure IoT Hub.
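The actual filtering runs as an Azure Stream Analytics job deployed to the edge, but the windowed summarization it performs can be illustrated with a short Python sketch; the event shape and the five-second window below are assumptions, not the project's real query.

```python
# Illustration only: the real filtering is an Azure Stream Analytics edge job,
# but the idea (collapse per-frame detections into one summary per object class
# per time window) looks like this. Event shape and window size are assumptions.
import json
from collections import defaultdict

WINDOW_SECONDS = 5

def summarize(events):
    windows = defaultdict(lambda: defaultdict(int))
    for e in events:
        window_start = int(e["timestamp"] // WINDOW_SECONDS) * WINDOW_SECONDS
        windows[window_start][e["label"]] += 1
    for start, counts in sorted(windows.items()):
        yield {"windowStart": start, "counts": dict(counts)}

raw = [
    {"timestamp": 0.1, "label": "car"},
    {"timestamp": 0.2, "label": "car"},
    {"timestamp": 1.4, "label": "person"},
    {"timestamp": 6.0, "label": "car"},
]
for summary in summarize(raw):
    print(json.dumps(summary))  # one compact message per window instead of per frame
```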
On the device itself, we employ a custom object detection model from cognitive services which supplies object detection results to a streaming analytics service that also runs on the device as an IoT Edge module. Filtered results from the streaming analytics module will then flow from the device to the IoT Hub where we can then process that telemetry in the cloud using visualization services. For example, things like Time Series Insights and Power BI. Both of these services will take advantage of the warm path to allow for near real-time display of telemetry as it arrives in the cloud.
In addition, we will support a machine learning feedback loop using a camera tagging module on the device, which can capture frames from our video sources and mirror them into a cold storage service. These can later be used to enhance our object detection model as additional training samples. We can then export our updated model from the cognitive services Custom Vision service and update the running object detection model that is currently deployed on the device.
Let's go ahead and take a deep dive into what this all looks like in action. The following demo, Intelligent Video Analytics with NVIDIA Jetson and Microsoft Azure, is available on GitHub at aka.ms/iot50/intelligentvideo. This project can be reused for learning or may be re-implemented as a baseline for your own custom solutions. Let's get started.
So right now you're looking at a live feed of four simultaneous camera feeds that's also performing object detection inference in real time, utilizing a model that was created in Custom Vision AI. Now, this model is capable of detecting vehicles, people, as well as some of the pets that can be found around the home. But we're gonna show you how you can take this solution and connect it with services in Microsoft Azure, for example Time Series Insights or Microsoft Power BI, to be able to view telemetry from your devices in real time.
We'll start by checking out the services that make this up by looking at the resource group for our intelligent video analytics solution. And you might be surprised to notice that there are really only six services employed here, and technically one of them is really just the connector for one of the other services. Let's go ahead and revisit the architecture diagram that we were looking at previously in the presentation. The way the solution begins is that we have a device, specifically an NVIDIA Jetson device, that's running the NVIDIA DeepStream SDK as an IoT Edge module, and this is because NVIDIA publishes the DeepStream SDK as a module in our Azure Marketplace.
You can go ahead and pull this down, configure it to point at camera feeds, and then supply it with, in our case, a Custom Vision AI model to be able to perform inferencing in real time on your device and push those results into Azure IoT services. Once we get telemetry from that module, we then forward it to a streaming analytics job that actually runs on the device. And this will essentially act as a super filter, because the telemetry is gonna be coming at us very, very fast. In fact, if you look at our live feed, the reason why you're not seeing that box go away is because it's producing results that fast.
Once we get our summarized results from that stream analytics job, those will then flow into our IoT Hub. And this means we're not gonna inundate it with every single detection, only the ones that matter the most to us. And we can tune that by modifying the streaming analytics job down at the edge. Once the data appears in IoT Hub, we can then take all telemetry flowing through it and forward it, via the connector, into Time Series Insights, where we can then visualize it.
Similarly, we can also take that same telemetry and forward it to a stream analytics job that's running in the cloud and then forward that to a Power BI report. And of course, all of this is surrounded by our custom object detection model that's deployed down to our device and leveraged by that DeepStream SDK module. Now, the part that we're going to start off with is how we're actually going to train that model utilizing a service that runs on the device that will allow us to capture samples and then forward those either to Azure storage or directly into Custom Vision AI where they can be used for training. Let's take a quick look at the IoT Hub before we jump into that.
Here you'll see the IoT Hub that backs our solution, and inside we can see that registered IoT Edge device. If we click on the device, you can see the deployment that has been applied, and you'll notice that there are six modules currently deployed. There are the Edge Agent and Edge Hub, which are system modules that are part of the IoT Edge runtime, in addition to four custom modules. The first one that we're going to look at here is the camera tagging module, and this is what's going to allow us to obtain samples to train our object detection algorithm utilizing samples captured directly from the environment.
So if we visit port 3000 on our IoT Edge device, you'll notice that we're presented with that camera capture module. There's an interface here that allows you to specify a camera feed that you'd like to connect to, and from there you can go ahead and capture samples. So in this case, I'm gonna go ahead and tag this one 'car', give it the name Test 002, and go ahead and save it.
Now, once we've saved that image, you'll see it alongside a number of other images that we gathered earlier. From here we can choose to upload those images either directly into Blob storage or into Custom Vision AI, and this is rather straightforward. For example, if we wanna push these into Blob storage, we can either push to local, which is useful for environments that don't have outbound internet access and will store those images locally, or we can push directly to a blob in Microsoft Azure.
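For reference, pushing a captured frame into Blob storage is only a few lines with the azure-storage-blob SDK; the connection string, container name, and blob path below are placeholders.

```python
# Sketch: uploading a captured frame to Azure Blob storage
# (azure-storage-blob SDK). The connection string, container name, and blob
# path are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("training-samples")

with open("capture-test-002.jpg", "rb") as frame:
    container.upload_blob(name="cameras/driveway/capture-test-002.jpg", data=frame)
```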
I'm gonna go ahead and demonstrate pushing this directly to Custom Vision. I simply provide my endpoint and my training key, then choose the project that I wanna apply it to, select push, and it's that easy: my new training samples are now part of my Custom Vision AI project. Now, when I gathered all the samples that you're seeing here, I actually gathered quite a few, I think it was 4,000 or so over a weekend, and I stored all of those in Microsoft Azure storage. That's an option that you can enable in this demo, and we won't cover it here, but it's interesting to point out that your data can arrive either in this cold storage area or directly in Custom Vision AI, where it can be utilized.
From here, I gathered a number of different samples taken at different times of day and in different environmental conditions, and this allows me to do things like detect objects at night as well as in the day. Once we've got enough samples, we can go ahead and train our model and, again, export that for use on our IoT device. Now, specifically what we're employing here is the ONNX model format. So once I've exported that, I can then configure the DeepStream module to leverage it, pick it up, and use it, and it'll begin detecting the objects that our model has been trained to detect. For more details on this, I'm going to include a resource at the end of our presentation that goes into a full deep dive on the Azure IoT Edge camera tagging module.
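Before wiring an exported ONNX model into DeepStream, it can be sanity-checked on the device with onnxruntime; in the sketch below, the 416x416 dummy input shape is an assumption, and the real shape would be read from the model's own input metadata.

```python
# Sketch: sanity-checking an exported ONNX model with onnxruntime before
# configuring DeepStream to use it. The 416x416 dummy shape is an assumption;
# the actual shape comes from the model's input metadata.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape)  # inspect the expected input tensor

dummy = np.random.rand(1, 3, 416, 416).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print([o.shape for o in outputs])  # confirms the model loads and runs end to end
```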
Now, once we get that model up and running with the DeepStream SDK, it becomes important to filter those results because again, they're flowing at us extremely fast. And the way we're gonna do that is by using a streaming analytics job that we deploy down to the edge. And it's interesting to know how this works. Here I've created a stream analytics query, and I can go ahead and set it up to work with a storage account just like the one we saw a moment ago. I'm gonna go ahead and create a new storage container called DeepStream analytics. And here I can publish my actual job. And what this means is that our IoT Edge module will pull the job down from Azure and then run it locally on the device.
Now, similarly, there's also another stream analytics job that we alluded to earlier, and this one's gonna allow us to forward our data into Power BI. There's a slight difference here: this particular stream analytics job lives in the cloud, so it's actually taking that telemetry from the IoT Hub and then forwarding it into Power BI. And the way it accomplishes that, if we head to the outputs, is that there's a specific type of output that you can choose.
In fact, Power BI is just one of many. You can output to an Event Hub, SQL Database, Data Lake Storage, Table storage, a Service Bus topic or queue, Cosmos DB, an Azure Function, or Azure Synapse. And once we've got our data forwarded over there, we can begin to make use of it with Power BI reports, which we'll look at in just a moment. But first, we're gonna show a quick software-as-a-service offering known as Time Series Insights, which is just one of the many visualization tools that you can use in Microsoft Azure.
Once we forward that data over, the only thing that we have to do is model it by specifying a custom type. Once we do that, we can start to graph all of that telemetry through custom hierarchies that we've defined and know when we detect various different objects throughout the home, or when people specifically, or perhaps vehicles, were seen in the yard. So let's go ahead and head back to that Power BI demonstration.
So imagine that we have a custom Power BI report that we created in Power BI Desktop. We can publish that report into the cloud; it's very straightforward to do that, and once you've done it, you can go ahead and begin to see your results within your Office 365 subscription. So I've already gone ahead and done that here. By choosing my workspace, we'll select that and go ahead and push it up. Once that goes into Power BI in the Microsoft Office 365 offering, I can then view that report or perhaps share it out with other individuals on the team. And this is great because I can easily see things at a glance, for example, what kind of detections have happened at the home, or perhaps when was the last time that we saw people.
From here, we can go ahead and take this report and pin it to a live page. And once you've done that, you can start to view the telemetry coming off of your devices in real time. In fact, a vehicle just drove by, and we can see here that our number of cars in the street has incremented to one. Similarly, if it were to see some people, we'd see this graph start to light up with results. We've also got some telemetry here that shows us some interesting things, like the last time and place that the dog and cat were seen.
Now, if you liked this demonstration, we actually show you how to build this from the ground up using live-streamed videos that show us building the entire solution from scratch. So if you wanna follow along with us, you can head to the GitHub repo for the Intelligent Video Analytics with NVIDIA Jetson and Microsoft Azure project, and you can watch those videos to discover what it's like to build this project for yourself and hopefully learn all the skills needed to create your own custom intelligent video application.
Our demonstration has shown you the components that make up a generic artificial intelligence at the edge solution, and in doing so, we hope that we have empowered you to understand how you can begin to develop solutions of your own. By modifying the object being detected and consulting the Azure IoT reference architecture to accommodate integration into your line of business, you can begin applying AI at the edge concepts to a host of new scenarios.
Microsoft has taken this approach to produce a service for enhancing the conservation and sustainability of elephants in the recently launched Project 15 Solution Accelerator. To learn more about this project and how it makes use of the Azure IoT reference architecture, check out aka.ms/project15. In closing, whether you are addressing conservation and sustainability efforts for wildlife, modernizing a retail store experience, providing safety at your workplace, or adopting intelligent video analytics at the edge, if you are building IoT solutions on Microsoft Azure, the IoT reference architecture can help guide you to create whatever solution your business needs.
More details on the IoT reference architecture can be found on the following slides, where we'll provide links to learn more about all of the resources mentioned in this presentation. For links to the relevant documentation, resources, and demos used in this presentation, check out aka.ms/iot50/resources. If you're interested in using this presentation and/or the video recordings for an event of your own, the materials can be found on GitHub at aka.ms/iot50. If you enjoyed the session and are interested in other topics covered in the IoT learning path, you can find them all at aka.ms/iotlp.
We covered quite a few topics in this session and would like to remind you that we've curated a collection of modules on the Microsoft Learn platform which pertain to the topics in this session. These will allow you to interactively learn how to securely connect IoT devices to the cloud via an IoT Edge gateway, build intelligent applications on the edge using Azure IoT Edge, create solutions with Azure IoT Central, and implement streaming analytics both in the cloud and on the edge. This presentation and the associated Learn modules can help guide you on a path to official certification.
If you're interested in obtaining accreditation that can help you stand out as a certified Microsoft Azure IoT developer, we recommend checking out the AZ-220 certification. You can find details on the topics covered and schedule an exam today at aka.ms/iot50/certification. For more free interactive learning content, check out Microsoft Learn at microsoft.com/learn to begin your own custom learning path with resources on the latest topics and trends in technology. Thank you again for attending the session. Cheers.