How do I take an already existing MLflow model on my local filesystem and log it to a remote tracking server?

Let's say I already have an existing MLflow model on my local system of the mlflow.pyfunc flavor.

The directory looks like this:

model/
  data/
  code/
  conda.yml
  MLmodel

Where MLmodel is something like:

flavors:
  python_function:
    code: code
    data: data
    env: conda.yml
    loader_module: loader # model/code/loader.py has the entrypoint

I now try to log this model to a remote tracking server with the following (I'm running from the directory above model/, so ./model/data resolves, etc.):

import mlflow
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.pyfunc.log_model(
  "my-model-artifact",
  registered_model_name="my-model",  # same for all model versions
  data_path="model/data",
  code_path="model/code",
  loader_module="model/code/loader"
)

The tracking server ends up logging a nested MLflow model. This is what ends up inside the ./artifacts/my-model-artifact directory on the tracking server:

./artifacts/my-model-artifact
  conda.yaml
  MLmodel # *not* my MLmodel, but one newly generated by MLflow
  data/
  code/

Where data now contains a nested copy of my entire model/data directory, and code a nested copy of my model/code directory.

It's as if MLflow doesn't understand that I already have a complete model artifact.
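For context, one workaround I've seen suggested (a sketch only, not verified against the server above; the artifact path and model name mirror the ones in my snippet) is to bypass log_model entirely: upload the existing model/ directory verbatim with mlflow.log_artifacts, which preserves my hand-written MLmodel file instead of generating a new one, and then register that artifact path as a model version:

```python
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")

with mlflow.start_run() as run:
    # Upload the existing model/ directory as-is (MLmodel, conda.yml,
    # data/, code/) under the artifact path "my-model-artifact".
    # No new MLmodel is generated, so there is no nesting.
    mlflow.log_artifacts("model", artifact_path="my-model-artifact")

# Register the uploaded directory as a new version of "my-model".
mlflow.register_model(
    f"runs:/{run.info.run_id}/my-model-artifact",
    "my-model",
)
```

I'd still prefer an answer that explains the intended log_model usage, since this sidesteps the pyfunc logging path rather than fixing my call.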



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source