How do I generate inferences locally from a model fit in SageMaker?

I built a custom container on SageMaker to allow me to tune a CatBoost model.

I then fit the model with the best hyperparameters (as if I were going to deploy it on SageMaker).

I downloaded the tar.gz file onto my machine.

I can read the file:

import tarfile

file = tarfile.open('model.tar.gz')

I can extract from the file:

file.extractall('./output')

I now have a file named catboost_model.dump, but I am unsure where to go from here.

Is it possible to load this .dump file and generate inferences from it?



Solution 1:[1]

You'd want to load your model artifact into CatBoost, probably using load_model(). Alternatively, you can use the SageMaker Python SDK to run an inference endpoint locally; you'll need Docker installed for that.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Neil McGuigan