How to deploy an ML model as a local IoT Edge module

I have a machine learning model registered in the model registry of my Azure Machine Learning workspace.

Now I want to containerize this model inside a Linux Docker image that exposes a REST API, and then deploy it as an IoT Edge module to an edge PC, where other modules will invoke it locally and receive predictions.

I have searched the Microsoft documentation but haven't found a solution to my problem: the suggested examples and tutorials cover deploying entire IoT Edge solutions or using the Azure ML CLI to deploy models, whereas I have to add this module to an existing deployment manifest where other modules are already present.



Solution 1:

IoT Edge uses routes to manage communication between modules, IoT Hub, and any leaf devices. The $edgeHub module twin contains a desired property called routes that declares how messages are passed within a deployment.
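For illustration, a minimal routes section might look like the sketch below. The module names (camera, mlmodule), route names, and input/output names are all hypothetical; substitute the ones from your own deployment.

```json
{
  "$edgeHub": {
    "properties.desired": {
      "schemaVersion": "1.1",
      "routes": {
        "cameraToMl": "FROM /messages/modules/camera/outputs/detections INTO BrokeredEndpoint(\"/modules/mlmodule/inputs/input1\")",
        "mlToCloud": "FROM /messages/modules/mlmodule/outputs/* INTO $upstream"
      }
    }
  }
}
```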

> I have to add this module to an existing deployment manifest where other modules are already present.

What you need to do is edit the deployment manifest (see an example in the Azure IoT Edge documentation) and add your module under $edgeAgent.properties.desired.modules, then define your routes under $edgeHub.properties.desired.routes. This can also be done in the Azure portal UI.
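As a sketch, the entry you add alongside the existing modules could look like the following. The module name (mlmodule), container registry, image tag, and port binding are assumptions; replace them with the details of the image you built for your registered model.

```json
{
  "$edgeAgent": {
    "properties.desired": {
      "modules": {
        "mlmodule": {
          "version": "1.0",
          "type": "docker",
          "status": "running",
          "restartPolicy": "always",
          "settings": {
            "image": "myacr.azurecr.io/mlmodule:1.0",
            "createOptions": "{\"ExposedPorts\":{\"5001/tcp\":{}},\"HostConfig\":{\"PortBindings\":{\"5001/tcp\":[{\"HostPort\":\"5001\"}]}}}"
          }
        }
      }
    }
  }
}
```

The existing modules in the manifest stay as they are; you are only adding one more entry to the modules object and, if needed, one more route.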

> other modules will invoke it locally and receive predictions.

Since your implementation uses a REST API, your route would just need to guarantee that, before you send the data INTO $upstream, it is sent to your new ML module, for example: INTO BrokeredEndpoint("/modules/YOURMLMODEL/inputs/input1")
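Because the model is exposed over REST, other modules can also call it directly: on the default IoT Edge bridge network, containers can reach each other using the module name as the hostname. A minimal sketch of such a call, assuming the ML module is named mlmodule and serves a scoring endpoint on port 5001 at /score (both hypothetical, depending on how you built the image):

```python
import json

import requests

# Hypothetical local endpoint: modules on the same IoT Edge
# bridge network can resolve each other by module name.
SCORING_URL = "http://mlmodule:5001/score"


def get_prediction(features):
    """Send a feature payload to the ML module and return its prediction."""
    payload = json.dumps({"data": [features]})
    response = requests.post(
        SCORING_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(get_prediction([0.1, 0.2, 0.3, 0.4]))
```

The exact request shape depends on the scoring script baked into your image; the {"data": [...]} body above is just a common Azure ML convention, not a guarantee.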

You can learn more about how to deploy modules and establish routes in IoT Edge in the Azure IoT Edge documentation on Microsoft Learn.

Sources

Solution 1: asergaz, Stack Overflow. Licensed under CC BY-SA 3.0, following Stack Overflow's attribution requirements.