Commit

Merge remote-tracking branch 'upstream/master'
david-z-shi committed Sep 22, 2020
2 parents aa3ecf1 + 28973be commit f48f6c2
Showing 35 changed files with 1,662 additions and 1,553 deletions.
6 changes: 5 additions & 1 deletion .coveragerc
@@ -3,4 +3,8 @@ exclude_lines =
# Have to re-enable the standard pragma
pragma: no cover
@abstract
-NotImplementedError
+NotImplementedError
+
+[run]
+omit =
+    *__init__.py
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -11,7 +11,7 @@ being contribution of code or documentation to the project. Improving the
documentation is no less important than improving the library itself. If you
find a typo in the documentation, or have made improvements, do not hesitate to
submit a GitHub pull request. Documentation can be found under the
-[doc/](https://github.com/neurodata/progressive-learning/tree/master/docs) directory.
+[docs/](https://github.com/neurodata/progressive-learning/tree/master/docs) directory.

But there are many other ways to help. In particular answering queries on the
[issue tracker](https://github.com/neurodata/progressive-learning/issues), and
@@ -27,4 +27,4 @@ Code of Conduct
---------------

We abide by the principles of openness, respect, and consideration of others
-of the Python Software Foundation: https://www.python.org/psf/codeofconduct/.
+of the Python Software Foundation: https://www.python.org/psf/codeofconduct/.
2 changes: 1 addition & 1 deletion README.md
@@ -60,7 +60,7 @@ python3 setup.py install
```

# Contributing
-We welcome contributions from anyone. Please see our [contribution guidelines](http://docs.neurodata.io/progressive-learning/) before making a pull request. Our
+We welcome contributions from anyone. Please see our [contribution guidelines](https://github.com/neurodata/progressive-learning/blob/master/CONTRIBUTING.md) before making a pull request. Our
[issues](https://github.com/neurodata/progressive-learning/issues) page is full of places we could use help!
If you have an idea for an improvement not listed there, please
[make an issue](https://github.com/neurodata/progressive-learning/issues/new) first so you can discuss with the
12 changes: 6 additions & 6 deletions docs/contributing.rst
@@ -46,7 +46,7 @@ good feedback:
- If an exception is raised, please **provide the full traceback**.

- Please include your **operating system type and version number**, as well as
-your **Python and hyppo versions**. This information
+your **Python and ProgL versions**. This information
can be found by running the following code snippet::

    import platform; print(platform.platform())
@@ -61,7 +61,7 @@ good feedback:
Contributing Code
-----------------

-The preferred workflow for contributing to `hyppo` is to fork the main
+The preferred workflow for contributing to `ProgL` is to fork the main
repository on GitHub, clone, and develop on a branch. Steps:

1. Fork the `project repository <https://github.com/neurodata/progressive-learning>`__ by clicking
@@ -70,12 +70,12 @@ repository on GitHub, clone, and develop on a branch. Steps:
fork a repository see `this
guide <https://help.github.com/articles/fork-a-repo/>`__.

-2. Clone your fork of the ``hyppo`` repo from your GitHub account to your
+2. Clone your fork of the ``ProgL`` repo from your GitHub account to your
local disk:

.. code:: bash
-$ git clone git@github.com:YourLogin/hyppo.git
+$ git clone git@github.com:YourLogin/progressive-learning.git
$ cd progressive-learning
3. Create a ``feature`` branch to hold your development changes:
@@ -150,7 +150,7 @@ before you submit a pull request:
Coding Guidelines
-----------------

-Uniformly formatted code makes it easier to share code ownership. ``hyppo``
+Uniformly formatted code makes it easier to share code ownership. ``ProgL``
package closely follows the official Python guidelines detailed in
`PEP8 <https://www.python.org/dev/peps/pep-0008/>`__ that detail how
code should be formatted and indented. Please read it and follow it.
Expand All @@ -164,4 +164,4 @@ guidelines. Please read and follow the
`numpydoc <https://numpydoc.readthedocs.io/en/latest/format.html#overview>`__
guidelines. Refer to the
`example.py <https://numpydoc.readthedocs.io/en/latest/example.html#example>`__
-provided by numpydoc.
+provided by numpydoc.
958 changes: 958 additions & 0 deletions experiments/parity_experiment/experiment.ipynb

Large diffs are not rendered by default.

270 changes: 270 additions & 0 deletions experiments/parity_experiment/generate_paper_plot.py
@@ -0,0 +1,270 @@
#%%
import random
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow.keras as keras
import seaborn as sns
import matplotlib
import numpy as np
import pickle
from proglearn.sims import generate_gaussian_parity

from sklearn.model_selection import StratifiedKFold
from math import log2, ceil


#%%
def unpickle(file):
    # Load a pickled results file; 'data' avoids shadowing the dict builtin.
    with open(file, 'rb') as fo:
        data = pickle.load(fo, encoding='bytes')
    return data

def get_colors(colors, inds):
    c = [colors[i] for i in inds]
    return c

#%% Plotting the result
#mc_rep = 50
fontsize=30
labelsize=28


fig = plt.figure(constrained_layout=True,figsize=(21,30))
gs = fig.add_gridspec(30, 21)

colors = sns.color_palette('Dark2', n_colors=2)

X, Y = generate_gaussian_parity(750)
Z, W = generate_gaussian_parity(750, angle_params=np.pi/2)
P, Q = generate_gaussian_parity(750, angle_params=np.pi/4)

ax = fig.add_subplot(gs[:6,:6])
ax.scatter(X[:, 0], X[:, 1], c=get_colors(colors, Y), s=50)

ax.set_xticks([])
ax.set_yticks([])
ax.set_title('Gaussian XOR', fontsize=30)

plt.tight_layout()
ax.axis('off')
#plt.savefig('./result/figs/gaussian-xor.pdf')

#####################
ax = fig.add_subplot(gs[:6,7:13])
ax.scatter(Z[:, 0], Z[:, 1], c=get_colors(colors, W), s=50)

ax.set_xticks([])
ax.set_yticks([])
ax.set_title('Gaussian N-XOR', fontsize=30)
ax.axis('off')

#####################
ax = fig.add_subplot(gs[:6,14:20])
ax.scatter(P[:, 0], P[:, 1], c=get_colors(colors, Q), s=50)

ax.set_xticks([])
ax.set_yticks([])
ax.set_title('Gaussian R-XOR', fontsize=30)
ax.axis('off')

######################
mean_error = unpickle('plots/mean_xor_nxor.pickle')

n_xor = (100*np.arange(0.5, 7.25, step=0.25)).astype(int)
n_nxor = (100*np.arange(0.5, 7.50, step=0.25)).astype(int)

n1s = n_xor
n2s = n_nxor

ns = np.concatenate((n1s, n2s + n1s[-1]))
ls=['-', '--']
algorithms = ['XOR Forest', 'N-XOR Forest', 'Lifelong Forest', 'Naive Forest']


TASK1='XOR'
TASK2='N-XOR'

fontsize=30
labelsize=28

colors = sns.color_palette("Set1", n_colors = 2)

ax1 = fig.add_subplot(gs[7:13,2:9])
ax1.plot(n1s, mean_error[0, :len(n1s)], label=algorithms[0], c=colors[1], ls=ls[0], lw=3)
ax1.plot(ns[len(n1s):], mean_error[2, len(n1s):], label=algorithms[1], c=colors[1], ls=ls[1], lw=3)

ax1.plot(ns, mean_error[1], label=algorithms[2], c=colors[0], ls=ls[0], lw=3)

ax1.plot(ns, mean_error[4], label=algorithms[3], c='g', ls=ls[0], lw=3)

ax1.set_ylabel('Generalization Error (%s)'%(TASK1), fontsize=fontsize)
ax1.legend(loc='upper left', fontsize=20, frameon=False)
#ax1.set_ylim(0.09, 0.21)
ax1.set_xlabel('Total Sample Size', fontsize=fontsize)
ax1.tick_params(labelsize=labelsize)
#ax1.set_yticks([0.1,0.15, 0.2])
ax1.set_xticks([250,750,1500])
#ax1.axvline(x=750, c='gray', linewidth=1.5, linestyle="dashed")
ax1.set_title('XOR', fontsize=30)

right_side = ax1.spines["right"]
right_side.set_visible(False)
top_side = ax1.spines["top"]
top_side.set_visible(False)

ax1.text(400, np.mean(ax1.get_ylim()), "%s"%(TASK1), fontsize=26)
ax1.text(900, np.mean(ax1.get_ylim()), "%s"%(TASK2), fontsize=26)

#######################
mean_error = unpickle('plots/mean_xor_nxor.pickle')

algorithms = ['XOR Forest', 'N-XOR Forest', 'Lifelong Forest', 'Naive Forest']

TASK1='XOR'
TASK2='N-XOR'

ax1 = fig.add_subplot(gs[7:13,12:19])
ax1.plot(n1s, mean_error[0, :len(n1s)], label=algorithms[0], c=colors[1], ls=ls[0], lw=3)
ax1.plot(ns[len(n1s):], mean_error[2, len(n1s):], label=algorithms[1], c=colors[1], ls=ls[1], lw=3)

ax1.plot(ns[len(n1s):], mean_error[3, len(n1s):], label=algorithms[2], c=colors[0], ls=ls[1], lw=3)
ax1.plot(ns[len(n1s):], mean_error[5, len(n1s):], label=algorithms[3], c='g', ls=ls[1], lw=3)

ax1.set_ylabel('Generalization Error (%s)'%(TASK2), fontsize=fontsize)
#ax1.legend(loc='upper left', fontsize=18, frameon=False)
# ax1.set_ylim(-0.01, 0.22)
ax1.set_xlabel('Total Sample Size', fontsize=fontsize)
ax1.tick_params(labelsize=labelsize)
# ax1.set_yticks([0.15, 0.25, 0.35])
#ax1.set_yticks([0.15, 0.2])
ax1.set_xticks([250,750,1500])
#ax1.axvline(x=750, c='gray', linewidth=1.5, linestyle="dashed")

#ax1.set_ylim(0.11, 0.21)

right_side = ax1.spines["right"]
right_side.set_visible(False)
top_side = ax1.spines["top"]
top_side.set_visible(False)

# ax1.set_ylim(0.14, 0.36)
ax1.text(400, np.mean(ax1.get_ylim()), "%s"%(TASK1), fontsize=26)
ax1.text(900, np.mean(ax1.get_ylim()), "%s"%(TASK2), fontsize=26)

ax1.set_title('N-XOR', fontsize=30)

##################
mean_te = unpickle('plots/mean_te_xor_nxor.pickle')
algorithms = ['Lifelong BTE', 'Lifelong FTE', 'Naive BTE', 'Naive FTE']

TASK1='XOR'
TASK2='N-XOR'

ax1 = fig.add_subplot(gs[15:21,2:9])

ax1.plot(ns, mean_te[0], label=algorithms[0], c=colors[0], ls=ls[0], lw=3)

ax1.plot(ns[len(n1s):], mean_te[1, len(n1s):], label=algorithms[1], c=colors[0], ls=ls[1], lw=3)

ax1.plot(ns, mean_te[2], label=algorithms[2], c='g', ls=ls[0], lw=3)
ax1.plot(ns[len(n1s):], mean_te[3, len(n1s):], label=algorithms[3], c='g', ls=ls[1], lw=3)

ax1.set_ylabel('Forward/Backward \n Transfer Efficiency (FTE/BTE)', fontsize=fontsize)
ax1.legend(loc='upper left', fontsize=20, frameon=False)
#ax1.set_ylim(.99, 1.4)
ax1.set_xlabel('Total Sample Size', fontsize=fontsize)
ax1.tick_params(labelsize=labelsize)
ax1.set_yticks([0,.5,1,1.5])
ax1.set_xticks([250,750,1500])
#ax1.axvline(x=750, c='gray', linewidth=1.5, linestyle="dashed")
right_side = ax1.spines["right"]
right_side.set_visible(False)
top_side = ax1.spines["top"]
top_side.set_visible(False)
ax1.hlines(1, 50,1500, colors='gray', linestyles='dashed',linewidth=1.5)

ax1.text(400, np.mean(ax1.get_ylim()), "%s"%(TASK1), fontsize=26)
ax1.text(900, np.mean(ax1.get_ylim()), "%s"%(TASK2), fontsize=26)

######################
mean_te = unpickle('plots/mean_te_xor_rxor.pickle')
algorithms = ['Lifelong BTE', 'Lifelong FTE', 'Naive BTE', 'Naive FTE']

TASK1='XOR'
TASK2='R-XOR'

ax1 = fig.add_subplot(gs[15:21,12:19])

ax1.plot(ns, mean_te[0], label=algorithms[0], c=colors[0], ls=ls[0], lw=3)

ax1.plot(ns[len(n1s):], mean_te[1, len(n1s):], label=algorithms[1], c=colors[0], ls=ls[1], lw=3)

ax1.plot(ns, mean_te[2], label=algorithms[2], c='g', ls=ls[0], lw=3)
ax1.plot(ns[len(n1s):], mean_te[3, len(n1s):], label=algorithms[3], c='g', ls=ls[1], lw=3)

ax1.set_ylabel('Forward/Backward \n Transfer Efficiency (FTE/BTE)', fontsize=fontsize)
#ax1.legend(loc='upper left', fontsize=20, frameon=False)
#ax1.set_ylim(.99, 1.4)
ax1.set_xlabel('Total Sample Size', fontsize=fontsize)
ax1.tick_params(labelsize=labelsize)
ax1.set_yticks([0,.5,1])
ax1.set_xticks([250,750,1500])
#ax1.axvline(x=750, c='gray', linewidth=1.5, linestyle="dashed")
right_side = ax1.spines["right"]
right_side.set_visible(False)
top_side = ax1.spines["top"]
top_side.set_visible(False)
ax1.hlines(1, 50,1500, colors='gray', linestyles='dashed',linewidth=1.5)

ax1.text(400, np.mean(ax1.get_ylim()), "%s"%(TASK1), fontsize=26)
ax1.text(900, np.mean(ax1.get_ylim()), "%s"%(TASK2), fontsize=26)

########################################################
ax = fig.add_subplot(gs[23:29,2:9])
with open('plots/mean_angle_te.pickle', 'rb') as f:
    te = pickle.load(f)
angle_sweep = range(0,90,1)

ax.plot(angle_sweep,te,c='r',linewidth = 3)
ax.set_xticks(range(0,91,15))
ax.tick_params(labelsize=labelsize)
ax.set_xlabel('Angle of Rotation (Degrees)', fontsize=fontsize)
ax.set_ylabel('Backward Transfer Efficiency (XOR)', fontsize=fontsize)
#ax.set_title("XOR vs. Rotated-XOR", fontsize = fontsize)
ax.hlines(1,0,90, colors='grey', linestyles='dashed',linewidth=1.5)

right_side = ax.spines["right"]
right_side.set_visible(False)
top_side = ax.spines["top"]
top_side.set_visible(False)

#####################################
ax = fig.add_subplot(gs[23:29,12:19])

with open('plots/mean_sample_te.pickle', 'rb') as f:
    te = pickle.load(f)
task2_sample_sweep = (2**np.arange(np.log2(60), np.log2(5010)+1, .25)).astype('int')

ax.plot(task2_sample_sweep,te,c='r',linewidth = 3)
ax.hlines(1, 60,5200, colors='gray', linestyles='dashed',linewidth=1.5)
ax.set_xscale('log')
ax.set_xticks([])
ax.set_yticks([0.98,1,1.02,1.04])
ax.tick_params(labelsize=26)
ax.get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.text(50, np.mean(ax.get_ylim())-.042, "50", fontsize=labelsize)
ax.text(500, np.mean(ax.get_ylim())-.042, "500", fontsize=labelsize)
ax.text(5000, np.mean(ax.get_ylim())-.042, "5000", fontsize=labelsize)

ax.text(50, np.mean(ax.get_ylim())-.047, r"Number of $25^\circ$-RXOR Training Samples", fontsize=fontsize-4)
ax.set_ylabel('Backward Transfer Efficiency (XOR)',fontsize=24)

right_side = ax.spines["right"]
right_side.set_visible(False)
top_side = ax.spines["top"]
top_side.set_visible(False)


plt.savefig('./plots/parity_exp.pdf')
# %%
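The figure code above loads each precomputed result through the small `unpickle` helper. As a self-contained sketch of that load pattern (the payload and temporary file here are illustrative, not files from the repository's `plots/` directory):

```python
import os
import pickle
import tempfile

def unpickle(file):
    # Same pattern as the script's helper: binary mode, bytes-friendly encoding.
    with open(file, 'rb') as fo:
        return pickle.load(fo, encoding='bytes')

# Round-trip an illustrative payload through a temporary file.
payload = {'mean_te': [1.0, 1.1, 0.9]}
fd, path = tempfile.mkstemp(suffix='.pickle')
os.close(fd)
try:
    with open(path, 'wb') as f:
        pickle.dump(payload, f)
    restored = unpickle(path)
finally:
    os.remove(path)

print(restored['mean_te'])  # prints [1.0, 1.1, 0.9]
```

The `encoding='bytes'` argument only matters when reading pickles written by Python 2; for pickles written by Python 3, as here, the object comes back unchanged.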
Binary file not shown.