Commit: Updated documentation

axsaucedo committed May 19, 2020
1 parent 19b1c06 commit 4abba43
Showing 22 changed files with 1,756 additions and 310 deletions.
9 changes: 5 additions & 4 deletions doc/source/servers/overview.md
@@ -6,6 +6,7 @@ Seldon provides several prepacked servers you can use to deploy trained models:
- [XGBoost Server](xgboost.html)
- [Tensorflow Serving](tensorflow.html)
- [MLflow Server](mlflow.html)
- [Custom Servers](custom.md)

For these servers you only need the location of the saved model in a local filestore, Google Cloud Storage bucket, S3 bucket, Azure storage or MinIO. An example manifest with an sklearn server is shown below:
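
A minimal sketch of such a manifest could look roughly like the following; the deployment name and `modelUri` are placeholders rather than the documented example:

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: sklearn-example                     # placeholder name
spec:
  predictors:
  - graph:
      children: []
      implementation: SKLEARN_SERVER        # prepacked SKLearn server
      modelUri: gs://my-bucket/sklearn/iris # placeholder location of the saved model
      name: classifier
    name: default
    replicas: 1
```

As with any Kubernetes resource, it can then be applied with `kubectl apply -f`.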

@@ -158,18 +159,18 @@ and you can [read more](https://kubernetes.io/docs/concepts/configuration/secret

For your SeldonDeployment to know which secret to use, we have to specify the name of the secret we created - in the example above we named it `seldon-init-container-secret`.
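
For illustration, such a secret might be defined roughly as in the sketch below; the credential keys are placeholders and will differ depending on the object store you use:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: seldon-init-container-secret
type: Opaque
stringData:
  # Placeholder credential keys -- substitute whatever your storage provider requires
  AWS_ACCESS_KEY_ID: "<your-access-key>"
  AWS_SECRET_ACCESS_KEY: "<your-secret-key>"
  AWS_ENDPOINT_URL: "https://s3.amazonaws.com"
  USE_SSL: "true"
```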

#### 3.1 Default Seldon Core Manager Controller value
#### Option 1: Default Seldon Core Manager Controller value

You can set a global default when you install Seldon Core with the Helm chart via the `values.yaml` variable `executor.defaultEnvSecretRefName`. You can see all the variables available in the [Advanced Helm Installation Page](../reference/helm.rst).

```yaml
# ... other variables
executor:
  defaultEnvSecretRefName: seldon-core-init-container-secret
predictiveUnit:
  defaultEnvSecretRefName: seldon-init-container-secret
# ... other variables
```
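
These values can be supplied when installing or upgrading Seldon Core with Helm, for example through a custom `values.yaml` passed with `-f`, or with the equivalent `--set` flags.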

#### 3.2 Override through SeldonDeployment config
#### Option 2: Override through SeldonDeployment config

It is also possible to provide an override value when you deploy your model using the SeldonDeployment YAML. You can do this through the `envSecretRefName` value:
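
A rough sketch of such an override might look like the following; the deployment name and `modelUri` are placeholders:

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: example-model                              # placeholder name
spec:
  predictors:
  - graph:
      children: []
      implementation: SKLEARN_SERVER
      modelUri: gs://my-bucket/my-model              # placeholder location
      envSecretRefName: seldon-init-container-secret # overrides the default secret name
      name: classifier
    name: default
    replicas: 1
```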

4 changes: 2 additions & 2 deletions doc/source/servers/tensorflow.md
@@ -1,6 +1,6 @@
# Tensorflow Serving

If you have a trained Tensorflow model you can deploy this directly via REST or gRPC servers.
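
For illustration, a prepacked Tensorflow Serving deployment might look roughly like the sketch below; the name and `modelUri` are placeholders:

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: tfserving-example                 # placeholder name
spec:
  predictors:
  - graph:
      children: []
      implementation: TENSORFLOW_SERVER    # prepacked Tensorflow Serving server
      modelUri: gs://my-bucket/my-tf-model # placeholder location of the SavedModel
      name: model
    name: default
    replicas: 1
```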

## MNIST Example

@@ -79,4 +79,4 @@ spec:
```


Try out a [worked notebook](../examples/server_examples.html)
4 changes: 3 additions & 1 deletion doc/source/servers/xgboost.md
@@ -5,7 +5,9 @@ If you have a trained XGBoost model saved you can deploy it simply using Seldon'
Prerequisites:

* Use xgboost v0.82
* The model pickle must be named `model.bst`
* The model must be named `model.bst`
* You must save your model using `bst.save_model(file_path)`
* The model is loaded with `xgb.Booster(model_file=model_file)`
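
For illustration, the save and load steps listed above might look like the following sketch; the training data and paths are placeholders:

```python
import xgboost as xgb
from sklearn.datasets import load_iris

# Placeholder training data -- any trained Booster is saved the same way
X, y = load_iris(return_X_y=True)
dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train({"objective": "multi:softprob", "num_class": 3}, dtrain, num_boost_round=10)

# Save with the file name the prepacked server expects
bst.save_model("model.bst")

# This mirrors how the server loads the file back
loaded = xgb.Booster(model_file="model.bst")
```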

An example for a saved Iris prediction model:

80 changes: 79 additions & 1 deletion examples/models/sklearn_iris/sklearn_iris.ipynb
@@ -56,6 +56,39 @@
" main()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Wrap model with Python Wrapper Class"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Overwriting IrisClassifier.py\n"
]
}
],
"source": [
"%%writefile IrisClassifier.py\n",
"from sklearn.externals import joblib\n",
"\n",
"class IrisClassifier(object):\n",
"\n",
" def __init__(self):\n",
" self.model = joblib.load('IrisClassifier.sav')\n",
"\n",
" def predict(self,X,features_names):\n",
" return self.model.predict_proba(X)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -145,6 +178,51 @@
"!kubectl config set-context $(kubectl config current-context) --namespace=seldon"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create Seldon Core config file"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Overwriting sklearn_iris_deployment.yaml\n"
]
}
],
"source": [
"%%writefile sklearn_iris_deployment.yaml\n",
"apiVersion: machinelearning.seldon.io/v1alpha2\n",
"kind: SeldonDeployment\n",
"metadata:\n",
" name: seldon-deployment-example\n",
"spec:\n",
" name: sklearn-iris-deployment\n",
" predictors:\n",
" - componentSpecs:\n",
" - spec:\n",
" containers:\n",
" - image: seldonio/sklearn-iris:0.1\n",
" imagePullPolicy: IfNotPresent\n",
" name: sklearn-iris-classifier\n",
" graph:\n",
" children: []\n",
" endpoint:\n",
" type: REST\n",
" name: sklearn-iris-classifier\n",
" type: MODEL\n",
" name: sklearn-iris-predictor\n",
" replicas: 1"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -243,7 +321,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
"version": "3.7.4"
},
"varInspector": {
"cols": {
9 changes: 4 additions & 5 deletions examples/models/sklearn_iris_jsondata/IrisClassifier.py
@@ -1,19 +1,18 @@
from sklearn.externals import joblib
import sys


def eprint(*args, **kwargs):
print(*args, file=sys.stderr, **kwargs)

class IrisClassifier(object):

class IrisClassifier(object):
def __init__(self):
self.model = joblib.load('IrisClassifier.sav')
self.model = joblib.load("IrisClassifier.sav")

def predict(self,X,features_names):
def predict(self, X, features_names):
eprint("--------------------")
eprint("Input dict")
eprint(X)
eprint("--------------------")
ndarray = X["some_data"]["some_ndarray"]
return self.model.predict_proba(ndarray)

bydeath commented on Jul 3, 2020 (#2063):

Hey @adriangonz, the return statement is deleted here.


91 changes: 89 additions & 2 deletions examples/models/sklearn_iris_jsondata/sklearn_iris_jsondata.ipynb
@@ -56,6 +56,48 @@
" main()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Wrap your model with a Python wrapper"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Overwriting IrisClassifier.py\n"
]
}
],
"source": [
"%%writefile IrisClassifier.py\n",
"from sklearn.externals import joblib\n",
"import sys\n",
"\n",
"\n",
"def eprint(*args, **kwargs):\n",
" print(*args, file=sys.stderr, **kwargs)\n",
"\n",
"\n",
"class IrisClassifier(object):\n",
" def __init__(self):\n",
" self.model = joblib.load(\"IrisClassifier.sav\")\n",
"\n",
" def predict(self, X, features_names):\n",
" eprint(\"--------------------\")\n",
" eprint(\"Input dict\")\n",
" eprint(X)\n",
" eprint(\"--------------------\")\n",
" ndarray = X[\"some_data\"][\"some_ndarray\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -67,7 +109,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Wrap model using s2i"
"Wrap your Python model using s2i"
]
},
{
@@ -152,6 +194,51 @@
"!kubectl config set-context $(kubectl config current-context) --namespace=seldon"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a seldon config file to deploy the containerized image you just created"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Overwriting sklearn_iris_jsondata_deployment.yaml\n"
]
}
],
"source": [
"%%writefile sklearn_iris_jsondata_deployment.yaml\n",
"apiVersion: machinelearning.seldon.io/v1alpha2\n",
"kind: SeldonDeployment\n",
"metadata:\n",
" name: seldon-deployment-example\n",
"spec:\n",
" name: sklearn-iris-deployment\n",
" predictors:\n",
" - componentSpecs:\n",
" - spec:\n",
" containers:\n",
" - image: seldonio/sklearn-iris-jsondata:0.1\n",
" imagePullPolicy: IfNotPresent\n",
" name: sklearn-iris-classifier\n",
" graph:\n",
" children: []\n",
" endpoint:\n",
" type: REST\n",
" name: sklearn-iris-classifier\n",
" type: MODEL\n",
" name: sklearn-iris-predictor\n",
" replicas: 1"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -253,7 +340,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
"version": "3.7.4"
},
"varInspector": {
"cols": {
12 changes: 6 additions & 6 deletions examples/models/sklearn_spacy_text/RedditClassifier.py
@@ -2,16 +2,17 @@

from ml_utils import CleanTextTransformer, SpacyTokenTransformer


class RedditClassifier(object):
def __init__(self):

self._clean_text_transformer = CleanTextTransformer()
self._spacy_tokenizer = SpacyTokenTransformer()
with open('tfidf_vectorizer.model', 'rb') as model_file:

with open("tfidf_vectorizer.model", "rb") as model_file:
self._tfidf_vectorizer = dill.load(model_file)
with open('lr.model', 'rb') as model_file:

with open("lr.model", "rb") as model_file:
self._lr_model = dill.load(model_file)

def predict(self, X, feature_names):
@@ -20,4 +21,3 @@ def predict(self, X, feature_names):
tfidf_features = self._tfidf_vectorizer.transform(spacy_tokens)
predictions = self._lr_model.predict_proba(tfidf_features)
return predictions
