WIP: High level docs (#43)
* High level documentation skeleton

* Share landing page between docs and Github readme

* Decouple docs landing from Github readme

* Add bare-bones landing page

* Add initial getting started page

* Add roadmap page

* Add landing page to sidebar TOC

* Add algorithm overview page

* Hide homepage from TOC

* Fix typo, add link to algorithm overviews

* Use newly exposed sub-packages as modules

* Add a few more sentences about features and goals

* Stylize link to source on methods docs
jklaise authored Apr 30, 2019
1 parent d78f059 commit 62f57bc
Showing 10 changed files with 160 additions and 9 deletions.
14 changes: 12 additions & 2 deletions README.md
@@ -1,2 +1,12 @@
# alibi
Algorithms for monitoring and explaining machine learning models
# Alibi

[Alibi](https://github.com/SeldonIO/alibi) is a Python library aimed at machine learning model
inspection and interpretation.

* [Documentation](https://docs.seldon.io/projects/alibi/en/latest/)

## Installation
Alibi can be installed from [PyPI](https://pypi.org/project/alibi):
```bash
pip install alibi
```
5 changes: 3 additions & 2 deletions doc/source/conf.py
@@ -19,7 +19,7 @@

# -- Project information -----------------------------------------------------

project = 'alibi'
project = 'Alibi'
copyright = '2019, Seldon Technologies Ltd'
author = 'Seldon Technologies Ltd'

@@ -48,12 +48,13 @@
'sphinx.ext.mathjax',
'sphinx.ext.ifconfig',
'sphinx.ext.viewcode',
'recommonmark',
#'recommonmark',
'sphinx.ext.napoleon',
'sphinx_autodoc_typehints',
'sphinxcontrib.apidoc', # automatically generate API docs, see https://github.com/rtfd/readthedocs.org/issues/1139
'nbsphinx',
'nbsphinx_link', # for linking notebooks from outside sphinx source root
'm2r'
]

# nbsphinx settings
17 changes: 15 additions & 2 deletions doc/source/index.rst
@@ -3,8 +3,21 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to alibi's documentation!
=================================
.. mdinclude:: landing.md

.. toctree::
:maxdepth: 1
:hidden:

Home <self>

.. toctree::
:maxdepth: 1
:caption: Overview

overview/getting_started
overview/algorithms
overview/roadmap

.. toctree::
:maxdepth: 1
13 changes: 13 additions & 0 deletions doc/source/landing.md
@@ -0,0 +1,13 @@
# Alibi

[Alibi](https://github.com/SeldonIO/alibi) is an open-source Python library aimed at machine learning
model inspection and interpretation. The initial focus of the library is on black-box, instance-based
model explanations.

## Goals
* Provide high-quality reference implementations of black-box ML model explanation algorithms
* Define a consistent API for interpretable ML methods
* Support multiple use cases (e.g. tabular, text and image data classification, regression)
* Implement the latest model explanation, concept drift, algorithmic bias detection and other ML
model monitoring and interpretation methods

2 changes: 1 addition & 1 deletion doc/source/methods/Anchors.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"[source](../api/alibi.explainers.anchor.rst)"
"[[source]](../api/alibi.explainers.anchor.rst)"
]
},
{
2 changes: 1 addition & 1 deletion doc/source/methods/CEM.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"[source](../api/alibi.explainers.cem.rst)"
"[[source]](../api/alibi.explainers.cem.rst)"
]
},
{
29 changes: 29 additions & 0 deletions doc/source/overview/algorithms.md
@@ -0,0 +1,29 @@
# Algorithm overview

This page provides a high-level overview of the algorithms currently implemented in Alibi and the
features they support.

## Model Explanations
These algorithms provide instance-specific (sometimes also called "local") explanations of ML model
predictions. Given a single instance and a model prediction, they aim to answer the question "Why did
my model make this prediction?" The following table summarizes the capabilities of the current
algorithms:

|Explainer|Classification|Regression|Categorical features|Tabular|Text|Images|Needs training set|
|---|---|---|---|---|---|---|---|
|[Anchors](../methods/Anchors.ipynb)|✔|✘|✔|✔|✔|✔|For Tabular|
|[CEM](../methods/CEM.ipynb)|✔|✘|✘|✔|✘|✔|Optional|

**Anchor explanations**: produce an "anchor" - a small subset of features and their ranges that will
almost always result in the same model prediction. [Documentation](../methods/Anchors.ipynb),
[tabular example](../examples/anchor_tabular_adult.nblink),
[text classification](../examples/anchor_text_movie.nblink),
[image classification](../examples/anchor_image_imagenet.nblink).
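To make this concrete, here is a minimal sketch of producing an anchor for a tabular model (the
iris dataset, the random forest model and the dictionary keys are illustrative assumptions, not
taken from this commit):
```python
# Sketch: fit a simple classifier, then explain one of its predictions with an anchor.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

data = load_iris()
clf = RandomForestClassifier(n_estimators=50).fit(data.data, data.target)

# Only a prediction function is passed in, so the model is treated as a black box.
explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(data.data)  # tabular anchors need the training set (see the table above)

explanation = explainer.explain(data.data[0])
print(explanation['names'])      # the anchor as feature conditions, e.g. ['petal width (cm) <= 0.80']
print(explanation['precision'])  # fraction of perturbed samples that keep the same prediction
```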

**Contrastive explanation method (CEM)**: produces a pertinent positive (PP) and a pertinent negative
(PN) instance. The PP instance finds the features that should be minimally and sufficiently present
to predict the same class as the original prediction (a PP acts as the "most compact" representation
of the instance that keeps the same prediction). The PN instance identifies the features that should be
minimally and necessarily absent to maintain the original prediction (a PN acts as the closest
instance that would result in a different prediction). [Documentation](../methods/CEM.ipynb),
[tabular example](../examples/cem_iris.ipynb), [image classification](../examples/cem_mnist.ipynb).
52 changes: 52 additions & 0 deletions doc/source/overview/getting_started.md
@@ -0,0 +1,52 @@
# Getting Started

## Installation
Alibi works with Python 3.5+ and can be installed from [PyPI](https://pypi.org/project/alibi):
```bash
pip install alibi
```
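A quick way to verify the installation is to import the package and print its version (a minimal
smoke test, assuming Alibi exposes a `__version__` attribute):
```python
# If this import succeeds, the installation worked.
import alibi

print(alibi.__version__)
```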

## Features
Alibi is a Python package designed to help explain the predictions of machine learning models, gauge
the confidence of predictions, and eventually support wider capabilities for inspecting model
performance with respect to concept drift and algorithmic bias. The focus of the library is to
support the widest range of models using black-box methods where possible.

To get a list of the latest available model explanation algorithms, you can type:
```python
import alibi
alibi.explainers.__all__
```
<div class="highlight"><pre>
['AnchorTabular', 'AnchorText', 'AnchorImage', 'CEM']
</pre></div>

For detailed information on the methods:
* [Overview of available methods](../overview/algorithms.md)
* [Anchor explanations](../methods/Anchors.ipynb)
* [Contrastive Explanation Method (CEM)](../methods/CEM.ipynb)

## Basic Usage
We will use the [Anchor method on tabular data](../methods/Anchors.ipynb#Tabular-Data) to illustrate
the usage of explainers in Alibi.

First, we import the explainer:
```python
from alibi.explainers import AnchorTabular
```
Next, we initialize it with a prediction function and any other necessary arguments:
```python
explainer = AnchorTabular(predict_fn, feature_names)
```
Some methods require an additional `.fit` step, which needs access to the training set the model
was trained on:
```python
explainer.fit(X_train)
```
Finally, we call the explainer on a test instance, which returns a dictionary containing the
explanation and any additional metadata produced by the computation:
```python
explainer.explain(x)
```
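For the anchor explainer above, for example, the result can be inspected as follows (the key names
are assumptions based on the anchor implementation; other methods may use different keys):
```python
explanation = explainer.explain(x)

print(explanation['names'])      # the anchor: a list of feature conditions
print(explanation['precision'])  # how often the anchor preserves the prediction
print(explanation['coverage'])   # how much of the input space the anchor applies to
```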
The exact details will vary slightly from method to method, so we encourage the reader to become
familiar with the [types of algorithms supported](../overview/algorithms.md) in Alibi.
33 changes: 33 additions & 0 deletions doc/source/overview/roadmap.md
@@ -0,0 +1,33 @@
# Roadmap
Alibi aims to be the go-to library for ML model interpretability and monitoring. There are multiple
challenges in developing a high-quality, production-ready library that achieves this. In addition
to high-quality reference implementations of the most promising algorithms, we need extensive
documentation and case studies comparing the different interpretability methods and their respective
pros and cons. A clean and usable API is also a priority. Additionally, we want to move beyond
model explanation and provide tools to gauge ML model confidence, measure concept drift, and detect
outliers and algorithmic bias, among other things.

## Additional explanation methods
* [Counterfactual examples](https://christophm.github.io/interpretable-ml-book/counterfactual.html)
[[WIP](https://github.com/SeldonIO/alibi/pull/35)]
* [Influence functions](https://arxiv.org/abs/1703.04730)
* Feature attribution methods (e.g. [SHAP](https://github.com/slundberg/shap))
* Global methods (e.g. [ALE](https://christophm.github.io/interpretable-ml-book/ale.html#fn31))

## Important enhancements to explanation methods
* Robust handling of categorical variables
([Github issue](https://github.com/SeldonIO/alibi/issues/33))
* Document pitfalls of popular methods like LIME and PDP
([Github issue](https://github.com/SeldonIO/alibi/issues/42))
* Unified API ([Github issue](https://github.com/SeldonIO/alibi/issues/23))
* Standardized return types for explanations
* Explanations for regression models ([Github issue](https://github.com/SeldonIO/alibi/issues/19))
* Explanations for sequential data
* Develop methods for highly correlated features

## Beyond explanations
* Investigate alternatives to Trust Scores for gauging the confidence of black-box models
* Concept drift - provide methods for monitoring and alerting to changes in the incoming data
distribution and the conditional distribution of the predictions
* Bias detection methods
* Outlier detection methods ([Github issue](https://github.com/SeldonIO/alibi/issues/13))
2 changes: 1 addition & 1 deletion requirements/requirements_ci.txt
@@ -5,7 +5,7 @@ flake8>=3.7.7
mypy>=0.670
sphinx-autodoc-typehints>=1.6.0
sphinx-rtd-theme>=0.4.3
recommonmark>=0.5.0
m2r>=0.2.1
sphinxcontrib-apidoc>=0.3.0
nbsphinx>=0.4.2
nbsphinx-link>=1.2.0
