Staging #8

Merged · 10 commits · Nov 11, 2020
4 changes: 2 additions & 2 deletions docs/tutorials/installation_guide.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial Overview\n",
"# Installation Tutorial\n",
"This tutorial includes instruction on installation and package setup for the progressive learning repository. After following the steps below, you should have the progressive learning and necessary packages installed on your own machine.\n",
"\n",
"## 1. Installation\n",
@@ -115,7 +115,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
"version": "3.9.0"
}
},
"nbformat": 4,
16 changes: 7 additions & 9 deletions docs/tutorials/random_class_exp.ipynb
@@ -27,7 +27,7 @@
"source": [
"# Random Classification Experiment\n",
"\n",
"This experiment will use images from the **CIFAR 100** database (https://www.cs.toronto.edu/~kriz/cifar.html) and showcase the classification efficiency of algorithms in the **Progressive Learning** project (https://github.com/neurodata/progressive-learning)."
"This experiment will use images from the **CIFAR 100** database (https://www.cs.toronto.edu/~kriz/cifar.html) and showcase the classification efficiency of algorithms in the **ProgLearn** project (https://github.com/neurodata/ProgLearn)."
]
},
{
@@ -36,7 +36,7 @@
"source": [
"## Progressive Learning\n",
"\n",
"The Progressive Learning project aims to improve program performance on sequentially learned tasks, proposing a lifelong learning approach.\n",
"The **ProgLearn** project aims to improve program performance on sequentially learned tasks, proposing a lifelong learning approach.\n",
"\n",
"It contains two different algorithms: **Lifelong Learning Forests** (**L2F**) and **Lifelong Learning Network** (**L2N**). **L2F** uses Uncertainy Forest as transformers, while **L2N** uses deep networks. These two algorithms achieve both forward knowledge transfer and backward knowledge transfer, and this experiment is designed to cover the **L2F** model."
]
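As a rough sketch of how the L2F model described above might be driven, assuming the package exposes a `LifelongClassificationForest` with `add_task` and `predict` methods (the import path and method names are assumptions; check `forest.py` in the repo for the actual API):

```python
# Hypothetical sketch of the L2F workflow; class and method names are
# assumptions, not the repo's confirmed API.
import numpy as np
from proglearn.forest import LifelongClassificationForest  # assumed import path

X1, y1 = np.random.rand(200, 4), np.random.randint(0, 2, 200)  # toy task 1
X2, y2 = np.random.rand(200, 4), np.random.randint(0, 2, 200)  # toy task 2

learner = LifelongClassificationForest()
learner.add_task(X1, y1, task_id=0)     # learn the first task
learner.add_task(X2, y2, task_id=1)     # learn a second task
preds = learner.predict(X1, task_id=0)  # re-test task 1 after task 2 (backward transfer)
```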
@@ -45,7 +45,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Choosing hyperparameters\n",
"## Choosing hyperparameters\n",
"\n",
"The hyperparameters here are used for determining how the experiment will run."
]
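For concreteness, a minimal sketch of the kind of hyperparameter block such an experiment typically starts with; the names and values here are illustrative assumptions, not the notebook's actual settings:

```python
# Illustrative hyperparameters (names and values assumed for this sketch)
num_points_per_task = 500  # training samples drawn for each task
task_num = 10              # number of sequential tasks
tree_num = 10              # trees per lifelong forest
reps = 4                   # independent repetitions to average over
```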
@@ -68,7 +68,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loading datasets\n",
"## Loading datasets\n",
"\n",
"The CIFAR 100 database contains 100 classes of 600 images, each separating into 500 training images and 100 testing images."
]
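One common way to pull CIFAR 100 into a notebook is through Keras; a minimal sketch, assuming TensorFlow is installed:

```python
# Load CIFAR-100: 100 classes, 500 training and 100 test images per class
from tensorflow.keras.datasets import cifar100

(X_train, y_train), (X_test, y_test) = cifar100.load_data()
print(X_train.shape)  # (50000, 32, 32, 3) -> 500 training images x 100 classes
print(X_test.shape)   # (10000, 32, 32, 3) -> 100 test images x 100 classes
```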
@@ -95,7 +95,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Running experiment\n",
"## Running experiment\n",
"\n",
"The following codes will run multiple experiments in parallel. For each experiment, we have task_num number of tasks. For each task, we randomly select 10 classes of the classes to train on. As we will observe below, each task increases Backwards Transfer Efficiency (BTE) with respect to Task 1 (Task 1 being the first task corresponding to 10 randomly selected classes)."
]
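A hedged sketch of the parallel structure described above; `run_experiment` and its arguments are hypothetical stand-ins for the notebook's actual experiment function:

```python
# Sketch of the parallel driver: each repetition trains on task_num tasks,
# each task built from 10 randomly chosen CIFAR-100 classes.
import numpy as np
from multiprocessing import Pool

def run_experiment(seed, task_num=10):
    rng = np.random.default_rng(seed)
    # 10 distinct classes per task, drawn from the 100 available
    tasks = [rng.choice(100, size=10, replace=False) for _ in range(task_num)]
    # ... train an L2F learner task by task and record per-task accuracies ...
    return tasks

with Pool(4) as p:                             # at most one worker per core
    results = p.map(run_experiment, range(8))  # 8 independent repetitions
```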
@@ -137,13 +137,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Plotting backward transfer efficiency\n",
"## Plotting backward transfer efficiency\n",
"\n",
"Backward transfer efficiency (BTE) measures the relative effect of future task data on the performance on a certain task.\n",
"\n",
"\\begin{align}\n",
"BTE^t(f_n):= E[R^t(f_n^{<t}) / R^t(f_n)] \n",
"\\end{align}\n",
"$$BTE^t (f_n) := \\mathbb{E} [R^t (f_n^{<t} )/R^t (f_n)]$$\n",
"\n",
"It is the expected ratio of two risk functions of the learned hypothesis, one with access to the data up to and including the last observation from task t, and the other with access to the entire data sequence. The codes below uses the experiment results to calculate the average BTE numbers and display their changes over tasks learned."
]
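Since BTE is a ratio of risks, the plotted curve can be computed directly from recorded errors; a minimal sketch, assuming `err_single[t]` is the error on task t just after learning it and `err_final[t]` is the error on task t after the whole sequence (array names and values invented for illustration):

```python
# BTE^t = R^t(f_n^{<t}) / R^t(f_n): error with data up to task t divided by
# error with the entire data sequence. Array contents here are made up.
import numpy as np

err_single = np.array([0.30, 0.28, 0.25])  # error on task t with data up to t
err_final  = np.array([0.27, 0.27, 0.25])  # error on task t after all tasks
bte = err_single / err_final               # values > 1: positive backward transfer
print(bte)
```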
14 changes: 7 additions & 7 deletions docs/tutorials/rotation_cifar.ipynb
@@ -6,7 +6,7 @@
"source": [
"# Rotation CIFAR Experiment\n",
"\n",
"This experiment will use images from the **CIFAR-100** database (https://www.cs.toronto.edu/~kriz/cifar.html) and showcase the backward transfer efficiency of algorithms in the **Progressive Learning** project (https://github.com/neurodata/progressive-learning) as the images are rotated."
"This experiment will use images from the **CIFAR-100** database (https://www.cs.toronto.edu/~kriz/cifar.html) and showcase the backward transfer efficiency of algorithms in the **ProgLearn** project (https://github.com/neurodata/ProgLearn) as the images are rotated."
]
},
{
@@ -51,7 +51,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Hyperparameters\n",
"## Hyperparameters\n",
"\n",
"Hyperparameters determine how the model will run. \n",
"\n",
@@ -79,7 +79,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Algorithms\n",
"## Algorithms\n",
"\n",
"The progressive-learning repo contains two main algorithms, **Lifelong Learning Forests** (L2F) and **Lifelong Learning Network** (L2N), within `forest.py` and `network.py`, respectively. The main difference is that L2F uses random forests while L2N uses deep neural networks. Both algorithms, unlike LwF, EWC, Online_EWC, and SI, have been shown to achieve both forward and backward knowledge transfer. \n",
"\n",
@@ -90,7 +90,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Experiment\n",
"## Experiment\n",
"\n",
"If the chosen algorithm is trained on both straight up-and-down CIFAR images and rotated CIFAR images, rather than just straight up-and-down CIFAR images, will it perform better (achieve a higher backward transfer efficiency) when tested on straight up-and-down CIFAR images? How does the angle at which training images are rotated affect these results?\n",
"\n",
@@ -122,7 +122,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Rotation CIFAR Plot\n",
"## Rotation CIFAR Plot\n",
"\n",
"This section takes the results of the experiment and plots the backward transfer efficiency against the angle of rotation for the images in **CIFAR-100**.\n",
"\n",
@@ -172,7 +172,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# FAQs\n",
"## FAQs\n",
"\n",
"### Why am I getting an \"out of memory\" error?\n",
"`Pool(8)` in the previous cell allows for parallel processing, so the number within the parenthesis should be, at max, the number of cores in the device on which this notebook is being run. Even if a warning is produced, the results of the experimented should not be affected.\n",
@@ -198,7 +198,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
"version": "3.8.5"
}
},
"nbformat": 4,
6 changes: 3 additions & 3 deletions docs/tutorials/uncertaintyforest_fig1.ipynb
@@ -4,13 +4,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial Overview\n",
"# Uncertainty Forest Figure 1 Tutorial\n",
"This set of two tutorials (`uncertaintyforest_running_example.ipynb` and `uncertaintyforest_fig1.ipynb`) will explain the UncertaintyForest class. After following both tutorials, you should have the ability to run UncertaintyForest code on your own machine and generate Figure 1 from [this paper](https://arxiv.org/pdf/1907.00325.pdf). \n",
"\n",
"If you haven't seen it already, take a look at other tutorials to setup and install the progressive learning package `Installation-and-Package-Setup-Tutorial.ipynb`\n",
"\n",
"# Analyzing the UncertaintyForest Class by Reproducing Figure 1\n",
"## *Goal: Run the UncertaintyForest class to produce the results from Figure 1*\n",
"## Analyzing the UncertaintyForest Class by Reproducing Figure 1\n",
"### *Goal: Run the UncertaintyForest class to produce the results from Figure 1*\n",
"*Note: Figure 1 refers to Figure 1 from [this paper](https://arxiv.org/pdf/1907.00325.pdf)*"
]
},
15 changes: 11 additions & 4 deletions docs/tutorials/uncertaintyforest_running_example.ipynb
@@ -4,13 +4,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial Overview\n",
"# Uncertainty Forest: How to Run Tutorial\n",
"This set of two tutorials (`uncertaintyforest_running_example.ipynb` and `uncertaintyforest_fig1.ipynb`) will explain the UncertaintyForest class. After following both tutorials, you should have the ability to run UncertaintyForest code on your own machine and generate Figure 1 from [this paper](https://arxiv.org/pdf/1907.00325.pdf). \n",
"\n",
"If you haven't seen it already, take a look at other tutorials to setup and install the progressive learning package `Installation-and-Package-Setup-Tutorial.ipynb`\n",
"\n",
"# Simply Running the Uncertainty Forest class\n",
"## *Goal: Train the UncertaintyForest classifier on some training data and produce a metric of accuracy on some test data*"
"## Simply Running the Uncertainty Forest class\n",
"### *Goal: Train the UncertaintyForest classifier on some training data and produce a metric of accuracy on some test data*"
]
},
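A minimal sketch of that train-and-score loop, assuming the package exposes an `UncertaintyForest` estimator with scikit-learn-style `fit`/`predict` (the import path and API are assumptions; the tutorial's own cells show the confirmed usage):

```python
# Hypothetical sketch: train UncertaintyForest and report test accuracy.
import numpy as np
from sklearn.metrics import accuracy_score
from proglearn.forest import UncertaintyForest  # assumed import path

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] > 0).astype(int)  # toy binary labels
X_train, X_test = X[:800], X[800:]
y_train, y_test = y[:800], y[800:]

uf = UncertaintyForest()
uf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, uf.predict(X_test)))
```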
{
@@ -216,6 +216,13 @@
"## What's next? --> See a metric on the power of uncertainty forest by generating Figure 1 from [this paper](https://arxiv.org/pdf/1907.00325.pdf)\n",
"### To do this, check out `uncertaintyforest_fig1`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -234,7 +241,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
"version": "3.9.0"
}
},
"nbformat": 4,
19 changes: 11 additions & 8 deletions docs/tutorials/xor_nxor_exp.ipynb
@@ -4,8 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Progressive Learning in a Simple Environment\n",
"## Gaussian XOR and Gaussian N-XOR Experiment\n",
"# Gaussian XOR and Gaussian N-XOR Experiment\n",
"\n",
"One key goal of progressive learning is to be able to continually improve upon past performance with the introduction of new data, without forgetting too much of the past tasks. This transfer of information can be evaluated using a variety of metrics; however, here, we use a generalization of Pearl's transfer-benefit ratio (TBR) in both the forward and backward directions."
]
@@ -15,11 +14,15 @@
"metadata": {},
"source": [
"As described in [Vogelstein, et al. (2020)](https://arxiv.org/pdf/2004.12908.pdf), the forward transfer efficiency of task $f_n$ for task $t$ given $n$ samples is:\n",
"$$FTE^t(f_n) := \\mathbb{E}[R^t(f^{t}_n)/R^t(f^{<t}_n)].$$\n",
"\n",
"$$FTE^t (f_n) := \\mathbb{E} [R^t (f^{t}_n )/R^t (f^{<t}_n )]$$\n",
"\n",
"If $FTE^t(f_n)>1$, the algorithm demonstrates positive forward transfer, i.e. past task data has been used to improve performance on the current task.\n",
"\n",
"Similarly, the backward transfer efficiency of task $f_n$ for task $t$ given $n$ samples is:\n",
"$$BTE^t(f_n) := \\mathbb{E}[R^{<t}(f^t_n)/R^t(f^{t}_n)].$$\n",
"\n",
"$$BTE^t (f_n) := \\mathbb{E} [R^t (f_n^{<t} )/R^t (f_n)]$$\n",
"\n",
"If $BTE^t(f_n)>1$, the algorithm demonstrates positive backward transfer, i.e. data from the current task has been used to improve performance on past tasks."
]
},
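Both efficiencies are ratios of estimated risks, so each can be approximated from empirical errors; a short worked example for FTE (the numbers are invented for illustration):

```python
# FTE^t = R^t(f_n^t) / R^t(f_n^{<t}): error of the single-task learner divided
# by the error of the learner that also saw earlier tasks. Values are made up.
err_single_task = 0.32  # error on task t, trained on task t alone
err_with_past   = 0.26  # error on task t, trained on tasks 1..t
fte = err_single_task / err_with_past
print(f"FTE = {fte:.2f}  (> 1 means past tasks helped)")
```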
@@ -55,7 +58,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Classification Problem\n",
"## Classification Problem\n",
"\n",
"First, let's visualize Gaussian XOR and N-XOR.\n",
"\n",
@@ -118,7 +121,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### The Experiment\n",
"## The Experiment\n",
"\n",
"Now that we have generated the data, we can prepare to run the experiment. The function for running the experiment, `experiment`, can be found within `functions/xor_nxor_functions.py`. "
]
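The exact signature of `experiment` lives in `functions/xor_nxor_functions.py`; as a hedged illustration of how such a driver is typically invoked (the argument names below are assumptions, not the function's real parameters):

```python
# Hypothetical call pattern for the experiment driver -- argument names
# are assumptions; see functions/xor_nxor_functions.py for the real signature.
from functions.xor_nxor_functions import experiment

errors = experiment(
    n_xor=750,    # Gaussian XOR training samples
    n_nxor=750,   # Gaussian N-XOR training samples
    n_test=1000,  # test samples per task
    n_trees=10,   # trees in the lifelong forest
)
```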
@@ -185,7 +188,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Visualizing the Results\n",
"## Visualizing the Results\n",
"\n",
"Now that the experiment is complete, the results can be visualized by extracting the data from these arrays and plotting it. \n",
"\n",
@@ -267,7 +270,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
"version": "3.8.5"
}
},
"nbformat": 4,