How to use multiple ML models trained on different input data to produce one model and give a prediction in SageMaker?
I am working on a saree tag extraction problem. Tags include saree color, saree type, border design type, etc. There are 176 different tags in total.
Initially I treated it as a multi-label problem, using 176 sigmoid units in the output layer, but it did not work as expected and the accuracy was very poor.
The labels in my problem are not independent: if a saree is green, it won't be red or black; if it is of the Banarasi type, it won't be any other type in my tags list. So now I plan to use multiple ML models, each a multi-class classifier: one model will predict color, another will predict type, another will predict weight, and so on.
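For illustration, the grouping idea can be sketched without any ML framework: mutually exclusive tags form groups, and each group is decided independently by taking the highest-scoring tag within it. The group names and scores below are invented for the example, not taken from the actual tag list.

```python
# Hypothetical tag groups; the labels inside each group are mutually exclusive.
TAG_GROUPS = {
    "color": ["red", "green", "black"],
    "type": ["Banarasi", "Kanjivaram", "Chiffon"],
}

def pick_tags(scores):
    """scores: {tag: raw score}. Pick the highest-scoring tag per group,
    mimicking an independent multi-class head (softmax + argmax) per group."""
    picked = {}
    for group, tags in TAG_GROUPS.items():
        picked[group] = max(tags, key=lambda t: scores[t])
    return picked

# Invented scores for one saree image:
scores = {"red": 0.1, "green": 0.8, "black": 0.2,
          "Banarasi": 0.9, "Kanjivaram": 0.3, "Chiffon": 0.1}
print(pick_tags(scores))  # → {'color': 'green', 'type': 'Banarasi'}
```

Whether the groups live in one network with multiple softmax heads or in separate models, the per-group argmax logic is the same.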
I am using AWS SageMaker to build and deploy models, but my problem is how to deploy all these models via SageMaker so that every model is called and, at the end, the combined output of all of them is returned.
I explored SageMaker multi-model endpoint deployment, but there only one model can be used per prediction request, so it didn't fulfill my purpose.
Any suggestion or help would be highly appreciated.
Solution 1:[1]
Hey Chetan, you should be able to use SageMaker Multi-Model Endpoints for this use case. When invoking the endpoint, you simply specify the target model in the API call using the Python SDK, as seen below.
import boto3
import json
from sagemaker.serializers import JSONSerializer

# SageMaker runtime client for invoking endpoints
runtime_sm_client = boto3.client("sagemaker-runtime")

endpoint_name = predictor.endpoint_name  # endpoint created earlier

##########
# Specify the model artifact in the TargetModel parameter
##########
target_model = "petrol.tar.gz"

jsons = JSONSerializer()
payload = jsons.serialize(sampInput)  # sampInput: your sample input record

response = runtime_sm_client.invoke_endpoint(
    EndpointName=endpoint_name,
    TargetModel=target_model,
    Body=payload,
)

result = json.loads(response["Body"].read().decode())["outputs"]
result
I would train the models you need and then put their artifacts in a Multi-Model Endpoint. Then, based on the prediction you want, invoke the necessary model. I've attached an end-to-end TensorFlow Multi-Model Endpoint example where I train, deploy, and invoke two separate models.
TF MME Example: https://github.com/RamVegiraju/SageMaker-Deployment/blob/master/RealTime/Multi-Model-Endpoint/TensorFlow/tf2-MME-regression.ipynb
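For the asker's "combined output" requirement, the loop below is a minimal sketch: invoke the same Multi-Model Endpoint once per tag-group model and merge the responses into one dictionary. The artifact names and the `invoke` wrapper are assumptions for illustration; in production the wrapper would call `runtime_sm_client.invoke_endpoint` with `EndpointName`, `TargetModel`, and `Body`.

```python
import json

# Hypothetical mapping from tag group to its model artifact under the
# endpoint's S3 model prefix; names are illustrative, not from the post.
MODEL_ARTIFACTS = {
    "color": "color-model.tar.gz",
    "type": "type-model.tar.gz",
    "border": "border-model.tar.gz",
}

def predict_all_tags(invoke, payload):
    """Invoke one multi-class model per tag group and merge the outputs.

    `invoke(target_model, body)` abstracts the SageMaker runtime call so
    the merging logic can be tested without AWS; it must return the raw
    JSON response body as a string.
    """
    combined = {}
    for group, artifact in MODEL_ARTIFACTS.items():
        response_body = invoke(artifact, payload)
        combined[group] = json.loads(response_body)["outputs"]
    return combined

# Stub standing in for the SageMaker runtime call, with invented answers:
def fake_invoke(target_model, body):
    answers = {
        "color-model.tar.gz": "green",
        "type-model.tar.gz": "Banarasi",
        "border-model.tar.gz": "zari",
    }
    return json.dumps({"outputs": answers[target_model]})

print(predict_all_tags(fake_invoke, b"{}"))
# → {'color': 'green', 'type': 'Banarasi', 'border': 'zari'}
```

Each `invoke_endpoint` call is a separate HTTPS round trip, so for many tag groups it can be worth issuing them concurrently (e.g. with a thread pool) before merging.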
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Ram Vegiraju |
