MLFlow Model on MinIO Not Loading #2213
Strange. I see issues like this for empty directories: mlflow/mlflow#1881
Hey @srajabi, could you share the content of your MLmodel file?
I ran into the same issue as @srajabi. My deployment:
Contents of the /mnt/models directory:
Contents of the MLmodel file:
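For anyone hitting the same thing, one quick way to check what the storage initializer actually downloaded is to list /mnt/models and print the MLmodel flavor config from inside the model container. A minimal sketch, assuming PyYAML is available in the image (the /mnt/models path comes from this thread):

```python
# Sketch: show what was downloaded and which flavors the MLmodel declares.
import os
import yaml  # PyYAML, assumed to be available in the server image

model_dir = "/mnt/models"  # path used by Seldon's storage initializer, as seen in this thread
print(sorted(os.listdir(model_dir)))  # typically MLmodel, conda.yaml and the pickled model

with open(os.path.join(model_dir, "MLmodel")) as f:
    mlmodel = yaml.safe_load(f)
print(list(mlmodel.get("flavors", {})))  # expect "python_function" and "sklearn"
```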
Hey @srajabi @mafs12, after looking a bit deeper on the MLflow side, it seems that their scikit-learn loader behaves differently depending on which MLflow version the model was saved with:

```python
if os.path.isfile(path):
    # Scikit-learn models saved in older versions of MLflow (<= 1.9.1) specify the ``data``
    # field within the pyfunc flavor configuration. For these older models, the ``path``
    # parameter of ``_load_pyfunc()`` refers directly to a serialized scikit-learn model
    # object. In this case, we assume that the serialization format is ``pickle``, since
    # the model loading procedure in older versions of MLflow used ``pickle.load()``.
    serialization_format = SERIALIZATION_FORMAT_PICKLE
else:
    # In contrast, scikit-learn models saved in versions of MLflow > 1.9.1 do not
    # specify the ``data`` field within the pyfunc flavor configuration. For these newer
    # models, the ``path`` parameter of ``load_pyfunc()`` refers to the top-level MLflow
    # Model directory. In this case, we parse the model path from the MLmodel's pyfunc
    # flavor configuration and attempt to fetch the serialization format from the
    # scikit-learn flavor configuration.
```

Based on that, this should be fixed by updating MLflow to the latest version in the MLFlow server image. In the meantime, there are a couple of possible workarounds.
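As a sketch of one such workaround (an assumption based on the loader behaviour above, not a quote from this thread): re-log the model with an MLflow version that matches the one baked into the Seldon MLflow server image, so the on-disk layout and the server-side loader agree. The pinned version below is illustrative:

```python
# Sketch: re-log the model after pinning MLflow to match the serving image, e.g.
#   pip install "mlflow==1.9.1"   # illustrative pin, match your server image
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    # "nb" mirrors the artifact path used in the original report
    mlflow.sklearn.log_model(clf, artifact_path="nb")
```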
/priority p1
I see this trying to run with an MLflow model from:
and conda.yaml:
The above error seems to happen with the MLflow 1.8.0 version we have on our MLflow server, but not with 1.11.0.
Have you tried ensuring you have the correct rclone settings? If there is still an issue, can you open a new one?
Setup:
A Jupyter notebook generates a simple sklearn model, which is sent to MLflow and stored in MinIO. I'm now trying to get Seldon to create a deployment from this:
The container loads up, all the way to:
Looking at what /mnt/models contains:
I can load this successfully via:

```python
sk_model = mlflow.sklearn.load_model("s3://mlflow/artifacts/2/adaaee4b5c694f02b5ff9745c53ae75e/artifacts/nb")
```
Just not from Seldon. Any ideas? Am I missing something in setting this up?
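For reference, reproducing that load outside the cluster needs MLflow's S3 client pointed at MinIO first. A minimal sketch, with placeholder endpoint and credentials (only the bucket path is taken from the report above):

```python
# Sketch: load the artifact straight from MinIO through its S3-compatible API.
import os
import mlflow.sklearn

# MLflow's artifact store reads these from the environment (placeholder values).
os.environ["MLFLOW_S3_ENDPOINT_URL"] = "http://minio.example.com:9000"
os.environ["AWS_ACCESS_KEY_ID"] = "minio-access-key"
os.environ["AWS_SECRET_ACCESS_KEY"] = "minio-secret-key"

sk_model = mlflow.sklearn.load_model(
    "s3://mlflow/artifacts/2/adaaee4b5c694f02b5ff9745c53ae75e/artifacts/nb"
)
print(type(sk_model))
```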