Commit
* High level documentation skeleton
* Share landing page between docs and GitHub readme
* Decouple docs landing from GitHub readme
* Add bare-bones landing page
* Add initial getting started page
* Add roadmap page
* Add landing page to sidebar TOC
* Add algorithm overview page
* Hide homepage from TOC
* Fix typo, add link to algorithm overviews
* Use newly exposed sub-packages as modules
* Add a few more sentences about features and goals
* Stylize link to source on methods docs
Showing 10 changed files with 160 additions and 9 deletions.
@@ -1,2 +1,12 @@
-# alibi
-Algorithms for monitoring and explaining machine learning models
# Alibi

[Alibi](https://github.com/SeldonIO/alibi) is a Python library aimed at machine learning model
inspection and interpretation.

* [Documentation](https://docs.seldon.io/projects/alibi/en/latest/)

## Installation
Alibi can be installed from [PyPI](https://pypi.org/project/alibi):
```bash
pip install alibi
```
@@ -0,0 +1,13 @@
# Alibi

[Alibi](https://github.com/SeldonIO/alibi) is an open source Python library aimed at machine learning
model inspection and interpretation. The initial focus of the library is on black-box, instance-based
model explanations.

## Goals
* Provide high-quality reference implementations of black-box ML model explanation algorithms
* Define a consistent API for interpretable ML methods
* Support multiple use cases (e.g. tabular, text and image data classification, regression)
* Implement the latest model explanation, concept drift, algorithmic bias detection and other ML
model monitoring and interpretation methods
@@ -0,0 +1,29 @@
# Algorithm overview

This page provides a high-level overview of the algorithms and their features currently implemented
in Alibi.

## Model Explanations
These algorithms provide instance-specific (sometimes also called "local") explanations of ML model
predictions. Given a single instance and a model prediction, they aim to answer the question "Why did
my model make this prediction?" The following table summarizes the capabilities of the current
algorithms:

|Explainer|Classification|Regression|Categorical features|Tabular|Text|Images|Needs training set|
|---|---|---|---|---|---|---|---|
|[Anchors](../methods/Anchors.ipynb)|✔|✘|✔|✔|✔|✔|For Tabular|
|[CEM](../methods/CEM.ipynb)|✔|✘|✘|✔|✘|✔|Optional|

**Anchor explanations**: produce an "anchor" - a small subset of features and their ranges that will
almost always result in the same model prediction. [Documentation](../methods/Anchors.ipynb),
[tabular example](../examples/anchor_tabular_adult.nblink),
[text classification](../examples/anchor_text_movie.nblink),
[image classification](../examples/anchor_image_imagenet.nblink).
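
To make this concrete, here is a minimal sketch (not part of this commit) that requests an anchor
for a scikit-learn classifier. Only the `AnchorTabular` constructor, `.fit` and `.explain` calls
mirror the documented API; the dataset and model are illustrative assumptions.
```python
# Minimal sketch, assuming a scikit-learn model: only the AnchorTabular
# calls below follow the documented API; everything else is illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

from alibi.explainers import AnchorTabular

data = load_iris()
clf = RandomForestClassifier(n_estimators=50).fit(data.data, data.target)

predict_fn = lambda x: clf.predict_proba(x)  # black-box access to the model
explainer = AnchorTabular(predict_fn, data.feature_names)
explainer.fit(data.data)  # tabular anchors need the training set

explanation = explainer.explain(data.data[0])
print(explanation)  # dictionary with the anchor and metadata
```
Because the explainer only sees `predict_fn`, the same pattern applies to any model that exposes a
prediction function.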

**Contrastive explanation method (CEM)**: produce a pertinent positive (PP) and a pertinent negative
(PN) instance. The PP instance finds the features that should be minimally and sufficiently present
to predict the same class as the original prediction (a PP acts as the "most compact" representation
of the instance to keep the same prediction). The PN instance identifies the features that should be
minimally and necessarily absent to maintain the original prediction (a PN acts as the closest
instance that would result in a different prediction). [Documentation](../methods/CEM.ipynb),
[tabular example](../examples/cem_iris.ipynb), [image classification](../examples/cem_mnist.ipynb).
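
Alibi's CEM implementation finds PPs and PNs by solving a regularized optimization problem (see the
method documentation). Purely to make the pertinent-negative idea concrete, the toy sketch below
brute-forces the smallest single-feature perturbation that flips a model's prediction; it is an
illustrative assumption, not the library's algorithm.
```python
# Toy illustration of the pertinent-negative idea only; this grid search
# is NOT Alibi's CEM, which uses regularized optimization instead.
import numpy as np

def toy_pertinent_negative(predict_fn, x, step=0.1, max_delta=2.0):
    """Return a nearby instance with a different prediction, found by
    perturbing one feature at a time with growing magnitude."""
    original = np.argmax(predict_fn(x[None, :]))
    for delta in np.arange(step, max_delta + step, step):  # smallest change first
        for i in range(x.shape[0]):
            for sign in (-1.0, 1.0):
                x_pn = x.copy()
                x_pn[i] += sign * delta
                if np.argmax(predict_fn(x_pn[None, :])) != original:
                    return x_pn  # closest prediction flip found
    return None  # no flip within the search budget
```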
@@ -0,0 +1,52 @@
# Getting Started

## Installation
Alibi works with Python 3.5+ and can be installed from [PyPI](https://pypi.org/project/alibi):
```bash
pip install alibi
```

## Features
Alibi is a Python package designed to help explain the predictions of machine learning models, gauge
the confidence of predictions and eventually support wider capabilities of inspecting the
performance of models with respect to concept drift and algorithmic bias. The focus of the library
is to support the widest range of models using black-box methods where possible.

To get a list of the latest available model explanation algorithms, you can type:
```python
import alibi
alibi.explainers.__all__
```
<div class="highlight"><pre>
['AnchorTabular', 'AnchorText', 'AnchorImage', 'CEM']
</pre></div>

For detailed information on the methods:
* [Overview of available methods](../overview/algorithms.md)
* [Anchor explanations](../methods/Anchors.ipynb)
* [Contrastive Explanation Method (CEM)](../methods/CEM.ipynb)

## Basic Usage
We will use the [Anchor method on tabular data](../methods/Anchors.ipynb#Tabular-Data) to illustrate
the usage of explainers in Alibi.

First, we import the explainer:
```python
from alibi.explainers import AnchorTabular
```
Next, we initialize it by passing it a prediction function and any other necessary arguments:
```python
explainer = AnchorTabular(predict_fn, feature_names)
```
Some methods require an additional `.fit` step, which needs access to the training set the model
was trained on:
```python
explainer.fit(X_train)
```
Finally, we can call the explainer on a test instance, which will return a dictionary containing the
explanation and any additional metadata returned by the computation:
```python
explainer.explain(x)
```
The exact details will vary slightly from method to method, so we encourage the reader to become
familiar with the [types of algorithms supported](../overview/algorithms.md) in Alibi.
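
Since this import, initialize, (optionally) fit, explain pattern is shared across methods, it can
be captured in a small helper. The sketch below is an illustration only: the `hasattr` check for an
optional fit step is an assumption, not an Alibi API guarantee.
```python
# Sketch of the generic workflow above; treating .fit as optional via
# hasattr is an illustrative assumption, not part of the Alibi API.
def explain_instance(explainer_cls, predict_fn, x, X_train=None, **init_kwargs):
    explainer = explainer_cls(predict_fn, **init_kwargs)
    if X_train is not None and hasattr(explainer, "fit"):
        explainer.fit(X_train)   # some methods need the training set
    return explainer.explain(x)  # dictionary with explanation and metadata
```
For the tabular anchor walkthrough above this would read
`explain_instance(AnchorTabular, predict_fn, x, X_train, feature_names=feature_names)`, assuming
`feature_names` can be passed by keyword.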
@@ -0,0 +1,33 @@
# Roadmap
Alibi aims to be the go-to library for ML model interpretability and monitoring. There are multiple
challenges for developing a high-quality, production-ready library that achieves this. In addition
to having high-quality reference implementations of the most promising algorithms, we need extensive
documentation and case studies comparing the different interpretability methods and their respective
pros and cons. A clean and usable API is also a priority. Additionally, we want to move beyond
model explanation and provide tools to gauge ML model confidence, measure concept drift, and detect
outliers and algorithmic bias, among other things.

## Additional explanation methods
* [Counterfactual examples](https://christophm.github.io/interpretable-ml-book/counterfactual.html)
[[WIP](https://github.com/SeldonIO/alibi/pull/35)]
* [Influence functions](https://arxiv.org/abs/1703.04730)
* Feature attribution methods (e.g. [SHAP](https://github.com/slundberg/shap))
* Global methods (e.g. [ALE](https://christophm.github.io/interpretable-ml-book/ale.html#fn31))

## Important enhancements to explanation methods
* Robust handling of categorical variables
([GitHub issue](https://github.com/SeldonIO/alibi/issues/33))
* Document pitfalls of popular methods like LIME and PDP
([GitHub issue](https://github.com/SeldonIO/alibi/issues/42))
* Unified API ([GitHub issue](https://github.com/SeldonIO/alibi/issues/23))
* Standardized return types for explanations
* Explanations for regression models ([GitHub issue](https://github.com/SeldonIO/alibi/issues/19))
* Explanations for sequential data
* Develop methods for highly correlated features

## Beyond explanations
* Investigate alternatives to Trust Scores for gauging the confidence of black-box models
* Concept drift - provide methods for monitoring and alerting to changes in the incoming data
distribution and the conditional distribution of the predictions
* Bias detection methods
* Outlier detection methods ([GitHub issue](https://github.com/SeldonIO/alibi/issues/13))