templates updated #34

Merged
merged 1 commit on Sep 6, 2021
Binary file not shown.
21 changes: 17 additions & 4 deletions model_catalog_examples/artifact_boilerplate/runtime.yaml
@@ -1,7 +1,20 @@
# This is a template runtime.yaml. It has default conda environment values. You can replace these values.
MODEL_ARTIFACT_VERSION: '3.0'
MODEL_DEPLOYMENT:
INFERENCE_CONDA_ENV:
INFERENCE_ENV_PATH: <INSERT_INFERENCE_ENV_PATH>
INFERENCE_ENV_SLUG: <INSERT_INFERENCE_ENV_SLUG>
INFERENCE_ENV_TYPE: <INSERT_INFERENCE_ENV_TYPE>
INFERENCE_PYTHON_VERSION: <INSERT_INFERENCE_PYTHON_VERSION>
# Replace with the object storage path of the conda environment you want to use.
# Go to https://docs.oracle.com/en-us/iaas/data-science/using/conda_environ_list.htm for a list of data_science conda environments you can use out-of-the-box
INFERENCE_ENV_PATH: oci://service-conda-packs@id19sfcrra6z/service_pack/cpu/Data Exploration and Manipulation for CPU Python 3.7/2.0/dataexpl_p37_cpu_v2

# Replace with the slug of the environment you want to use.
# Slugs for data_science environments can be found either in the notebook session Environment Explorer or here:
# https://docs.oracle.com/en-us/iaas/data-science/using/conda_environ_list.htm
INFERENCE_ENV_SLUG: dataexpl_p37_cpu_v2

# Replace with the type of environment. Either published or data_science.
# Published environments are environments that you create and store in your own object storage bucket.
# For more information: https://docs.oracle.com/en-us/iaas/data-science/using/conda_publishs_object.htm
INFERENCE_ENV_TYPE: data_science

# Provide the Python version of the environment.
INFERENCE_PYTHON_VERSION: 3.7
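
The comments above describe what each INFERENCE_* key should hold once the placeholders are replaced. As a quick sanity check, here is a minimal sketch (not part of this diff) that reads the file back and prints the resolved conda environment settings; it assumes PyYAML is installed and the edited runtime.yaml sits in the working directory:

```python
# Illustrative sketch only: load runtime.yaml and print the inference conda
# environment settings. Assumes PyYAML is installed and runtime.yaml is in
# the current directory.
import yaml

with open("runtime.yaml") as f:
    runtime = yaml.safe_load(f)

conda_env = runtime["MODEL_DEPLOYMENT"]["INFERENCE_CONDA_ENV"]
for key in ("INFERENCE_ENV_PATH", "INFERENCE_ENV_SLUG",
            "INFERENCE_ENV_TYPE", "INFERENCE_PYTHON_VERSION"):
    print(f"{key}: {conda_env[key]}")
```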
13 changes: 10 additions & 3 deletions model_catalog_examples/artifact_boilerplate/score.py
@@ -6,17 +6,20 @@
from cloudpickle import cloudpickle

"""
Replace with your own model object and your own serialization library (e.g. pickle, onnx, etc).
You can provide your own model object and your own serialization library (e.g. pickle, onnx, etc).
If no model is specified then predict() by default will return 'Hello World!'
"""
model_name = 'model.pkl'

"""
model_name = 'model.pkl'
"""

"""
Inference script. This script is used for prediction by the scoring server when the schema is known.
"""


def load_model(model_file_name=model_name):
def load_model(model_file_name=None):
"""
Loads model from the serialized format
WARNING: Please use the same library to load the model which was used to serialise it.
@@ -25,6 +28,8 @@ def load_model(model_file_name=model_name):
-------
model: a model instance on which predict API can be invoked
"""
if not model_file_name:
return None
model_dir = os.path.dirname(os.path.realpath(__file__))
contents = os.listdir(model_dir)
# --------------------------WARNING-------------------------
@@ -50,6 +55,8 @@ def predict(data, model=load_model()):
predictions: Output from scoring server
Format: {'prediction':output from model.predict method}
"""
if model is None or len(data) == 0:
return {'prediction':'Hello world!'}
from pandas import read_json, DataFrame
from io import StringIO
data = read_json(StringIO(data)) if isinstance(data, str) else DataFrame.from_dict(data)
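
With these changes the boilerplate still responds before a real model is wired in: load_model() without a file name returns None, and predict() then falls back to the hard-coded greeting. A minimal usage sketch of that behaviour, assuming score.py is importable from the working directory and its dependencies (cloudpickle, pandas) are installed:

```python
# Illustrative sketch only: exercise the updated boilerplate defaults.
# Assumes score.py is on the import path and cloudpickle/pandas are installed.
import score

model = score.load_model()                      # no model_file_name -> None
print(score.predict('{"feature": [1, 2, 3]}', model=model))
# -> {'prediction': 'Hello world!'}

# Once a serialized model (e.g. model.pkl) is added to the artifact,
# pass its file name so load_model() deserializes it instead:
# model = score.load_model(model_file_name="model.pkl")
```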