Python code and download links to the data of Bill et al., "Hierarchical structure is employed by humans during visual motion perception" (preprint).
This repository allows you to:
- Generate figures 2, 3, 4 and 5 from the main paper,
- Collect your own data,
- Run the full analysis pipeline (if you are willing to dig into the code a bit).
In case of questions, please contact Johannes Bill ([email protected]).
We assume an Ubuntu-based Linux installation. On a Mac, you should be able to install sip and pyqt via Homebrew. In the cloned repository, we suggest using a virtual environment with Python 3.6+:
$ python3 -m pip install --user --upgrade pip # Install pip (if not yet installed)
$ sudo apt-get install python3-venv # May be needed for environment creation
$ python3.6 -m venv env # Create environment with the right python interpreter (must be installed)
$ source env/bin/activate # Activate env
$ python3 -m pip install --upgrade pip # Make sure the local pip is up to date
$ pip3 install wheel # Install wheel first
$ pip3 install -r requirements.txt # Install other required packages
$ deactivate # Deactivate env
Always start your session by running source run_at_start.sh and end it with source run_at_end.sh. These will set up the virtual environment and the Python path. Here are some cookbooks.
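If you want to check that the scripts did their job, here is a quick sanity check from the Python prompt (a minimal sketch; it assumes run_at_start.sh activates ./env and adds the repository root to PYTHONPATH, so that the shared pckg package becomes importable):

```python
# Minimal sanity check after `source run_at_start.sh` (run from the repository root).
import os
import sys

print(sys.prefix)                                  # should point into ./env while the venv is active
print(os.environ.get("PYTHONPATH", "(not set)"))   # should include the repository root

import pckg                                        # shared classes/functions (see repository structure)
print("Environment looks good:", pckg.__file__)
```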
Re-plotting the figures from the main paper is quick and easy:
$ source run_at_start.sh
$ cd plot
$ python3 plot_fig_2.py # Plot Figure 2
$ python3 plot_fig_3.py # Plot Figure 3
$ python3 plot_fig_4.py # Plot Figure 4
$ python3 plot_fig_5.py # Plot Figure 5
$ cd ..
$ source run_at_end.sh
All figures will be saved in ./plot/fig/ as PNG and PDF.
This experiment requires Python as well as MATLAB with Psychtoolbox. Please make sure to have at least 2GB of disk space available per participant. Questions on the data collection for the MOT experiment can also be directed to Hrag Pailian ([email protected]).
- Generate trials:
  $ source run_at_start.sh
  $ cd rmot/generate_stim
  - Adjust nSubjects=... in file generate_trials_via_script.sh to your needs.
  - Generate trials via
    $ ./generate_trials_via_script.sh
    (This may take a while depending on processor power.)
  - Resulting trials are written to:
    - data/rmot/myexp/trials for the Python data (will be needed for simulations and analyses)
    - data/rmot/myexp/matlab_trials for the data collection with MATLAB
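If you want to peek at the generated Python trials, something along these lines should work (a sketch only; the exact file format and naming inside data/rmot/myexp/trials are assumptions, not documented here):

```python
# Hypothetical: list the generated trial files and load one of them.
from pathlib import Path
import numpy as np

trial_dir = Path("data/rmot/myexp/trials")
files = sorted(trial_dir.rglob("*.np[yz]"))        # assumes trials are stored as .npy/.npz files
print(f"{len(files)} trial files found")
if files:
    trial = np.load(files[0], allow_pickle=True)   # allow_pickle in case Python objects are stored
    print(files[0].name, type(trial))
```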
- Run the experiment: For each participant n=1,..
  - Copy the content of data/rmot/myexp/matlab_trials/participant_n/ into rmot/matlab_gui/Trials/.
    $ cd ../matlab_gui
  - Determine the participant's speed via repeated execution of Part_1_Thresholding.m (will prompt for speed on start).
  - Conduct the main experiment via Part_2_Test.m (will prompt for speed and n).
  - Copy the saved responses to data/rmot/myexp/responses/ and rename the file to Response_File_Test_Pn.mat.
- Convert the data back to Python format:
  $ cd ../ana
  - For each participant n=1,.., run
    $ python3 convert_mat_to_npy.py data/myexp/responses/Response_File_Test_Pn.mat
  $ cd ../..
  $ source run_at_end.sh
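For reference, this conversion essentially turns the MATLAB response file into NumPy arrays. A minimal sketch of that kind of conversion (not the repository's actual script; the variables stored in the .mat file are not specified here):

```python
# Minimal sketch: load a MATLAB response file and store its variables as .npy files.
import sys
import numpy as np
from scipy.io import loadmat

fname = sys.argv[1]                    # path to a Response_File_Test_Pn.mat file
mat = loadmat(fname, squeeze_me=True)  # dict mapping variable names to arrays
for name, value in mat.items():
    if name.startswith("__"):          # skip MATLAB header entries (__header__, __version__, ...)
        continue
    np.save(fname.replace(".mat", f"_{name}.npy"), np.asarray(value))
    print(f"saved {name}: shape {np.asarray(value).shape}")
```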
Continue with the data analysis (see below).
This experiment is fully Python-based.
$ source run_at_start.sh
$ cd pred/gui
$ python3 play.py presets/example_trials/GLO.py -f -T 10 # EITHER: try out 10 trials (ca. 2 min)
$ ./run_full_experiment.sh -u 12345 # OR: run the full experiment (ca. 75 min)
$ cd ../..
$ source run_at_end.sh
Continue with the data analysis (below).
If you run the full experiment, your data will be stored in /data/pred/myexp/. Please refer to /pred/gui/README.md for further information -- especially to ensure a stable frame rate before running a full experiment.
The data from the publication can be downloaded here:
- MOT experiment (~445kB): https://ndownloader.figshare.com/files/17670059
- Prediction experiment (~282MB): https://ndownloader.figshare.com/files/17670065
For the analyses below, unzip the content of these archives into the directories data/rmot/paper and data/pred/paper, respectively. Then execute steps 1. and 3. (replacing myexp with paper) in the description of Collect your own data >> MOT experiment.
Remark: The following description of the data analysis still refers to the 1st version of the manuscript. The data and analyses are largely identical to the 2nd version, but do not yet include the Bayesian model comparison across motion structures and the alternative observer models in the MOT task, presented in Figure 3. An updated description will be provided soon.
Use the following analysis chain to recreate the aggregate data files provided in /data from the raw data in /data/rmot/paper and /data/pred/paper -- or to analyze your own data (see above). The analysis may require some understanding of the Python code, so please do not expect a direct copy-and-paste workflow.
$ source run_at_start.sh
$ cd rmot/ana
- Set up a data set labels (DSL) file to link human data to simulation data:
  - You can use DSLs_rmot_template.py as a template.
  - Adjust exppath and subjects. Make sure simpath exists.
  - For each participant, create an entry block and enter the participant's ["speed"] (from the 'thresholding' above).
  - The ["sim"] entries will be filled later.
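For orientation only, a DSL file might look roughly like this (purely illustrative; the actual structure is defined by DSLs_rmot_template.py, and all values below are made up):

```python
# Illustrative only: rough shape of a DSL (data set labels) file.
exppath = "./data/rmot/myexp/"        # location of the human data
simpath = "./data/rmot/myexp/sim/"    # where simulation results go (must exist)
subjects = (1, 2)                     # participants to include

DSL = {
    1: {                              # entry block for participant 1
        "speed": 1.25,                # from the thresholding session (made-up value)
        "sim": {},                    # simulation DSLs, to be filled in later (one per condition)
    },
    2: {
        "speed": 1.10,
        "sim": {},
    },
}
```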
- Set up the config_datarun.py file for the simulations:
  - You can use config_datarun_template.py as a template.
  - Adjust the import to import from your DSL file, and ensure that cfg["global"]["outdir"] exists.
  - Adjust cfg["observe"]["datadir"] to point to the (Python) trials.
  - You may want to reduce reps_per_trial from 25 to 1 to speed up the simulation (optional).
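Again for orientation only, the config entries mentioned above could look like this (illustrative; defer to config_datarun_template.py for the real structure, and the imported module name is a placeholder for your own DSL file):

```python
# Illustrative only: the config entries mentioned above.
from DSLs_rmot_myexp import exppath, subjects    # placeholder module name: your own DSL file

cfg = {
    "global": {
        "outdir": "./data/rmot/myexp/sim/",      # must exist before the simulations are started
    },
    "observe": {
        "datadir": "./data/rmot/myexp/trials/",  # the (Python) trials generated above
    },
}
reps_per_trial = 25                              # reduce to 1 for a quicker (but noisier) run
```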
- Prepare the simulations in create_config_for_participant.py:
  - Adjust lines 8-11 to match your DSLs, config, and trial directory.
- Run observer models with different motion structure priors on the experiment trials:
  - For each participant and stimulus condition:
    - Adjust lines 6 and 7 in create_config_for_participant.py.
    - Run
      $ ./start_datarun_script.sh
    - Enter the DSL of the simulation in your DSL file's ["sim"] entry of the respective participant and condition.
    - Warning: The simulations may take a while (we used the HMS cluster).
  - Collect all results via
    $ python3 load_human_and_sim_to_pandas.py
    (adjust line 7).
  - Copy the created pkl.zip file to the repository's /data/ directory (a quick way to inspect this file is shown below).
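The collected results can be inspected directly with pandas; the file name below is only a placeholder for whatever load_human_and_sim_to_pandas.py produced:

```python
# Inspect the aggregated results; pandas infers the zip compression from the file extension.
import pandas as pd

df = pd.read_pickle("data/my_results.pkl.zip")   # placeholder name: use the file created above
print(df.head())
print(df.columns.tolist())
```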
- Plot the figure:
  $ cd ../../plot
  - Adjust fname_data= in plot_fig_2.py to point to your data.
  $ python3 plot_fig_2.py # Plot Figure 2
  $ cd ..
  $ source run_at_end.sh
$ source run_at_start.sh
$ cd pred/ana
- Run Kalman filters with different motion priors on the experiment trials (the generic filter equations are sketched after this list for reference):
  - In file config_datarun_MarApr2019.py, direct cfg["observe"]["datadir"] to the experiment data.
  - For each participant and stimulus condition:
    - In config_datarun_MarApr2019.py, enter GROUNDTRUTH= and datadsl=.
    - Run
      $ python3 run.py config_datarun_MarApr2019
    - Keep track of the data set labels (DSLs) linking experiment and simulation data, in a file similar to DSLs_predict_MarApr2019.py.
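For readers unfamiliar with the method: the observer models here are Kalman filters whose prior reflects an assumed motion structure. The generic predict/update equations, in a minimal NumPy sketch (textbook form, not the repository's implementation):

```python
# Generic Kalman filter time step (predict + update); not the repository's implementation.
import numpy as np

def kalman_step(mu, Sigma, y, F, Q, H, R):
    """Propagate the belief (mu, Sigma) through dynamics (F, Q), then update with observation y via (H, R)."""
    # Predict: push mean and covariance through the (structure-dependent) dynamics.
    mu_pred = F @ mu
    Sigma_pred = F @ Sigma @ F.T + Q
    # Update: weigh the prediction against the new observation with the Kalman gain.
    S = H @ Sigma_pred @ H.T + R              # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    mu_new = mu_pred + K @ (y - H @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new
```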
- Fit all observer models (for Fig. 3; a generic sketch of such a fit is shown after this list):
  - Update the parameters section in fit_noise_models_with_lapse_from_DSLfile.py, especially: exppath, outFilename, and the import from your DSL file.
    $ python3 fit_noise_models_with_lapse_from_DSLfile.py
  - Copy the outFilename file to the repository's /data/ directory.
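As a rough illustration of what fitting a response-noise model with a lapse rate involves (generic maximum-likelihood sketch; the repository's actual observer models, error definition, and response range may differ):

```python
# Generic sketch: ML fit of Gaussian response noise plus a lapse (random-guess) rate.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, errors, response_range):
    sigma, lapse = params
    if sigma <= 0 or not 0 <= lapse <= 1:
        return np.inf
    # Mixture: with prob. (1 - lapse) the error is Gaussian, with prob. lapse it is a uniform random guess.
    p = (1 - lapse) * norm.pdf(errors, scale=sigma) + lapse / response_range
    return -np.sum(np.log(p))

errors = np.random.default_rng(0).normal(0.0, 0.3, size=200)     # placeholder data
fit = minimize(neg_log_likelihood, x0=[0.5, 0.05],
               args=(errors, 2 * np.pi), method="Nelder-Mead")   # 2*pi: assumed circular response range
sigma_hat, lapse_hat = fit.x
print(f"sigma = {sigma_hat:.3f}, lapse = {lapse_hat:.3f}")
```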
- Bias-variance analysis (for Fig. 4; the decomposition is sketched after this list):
  - Update the parameters section in estimate_bias_variance.py, especially: path_exp, outfname_data, and the import from your DSL file.
    $ python3 estimate_bias_variance.py
  - Copy the outfname_data file to the repository's /data/ directory.
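The core idea of the bias-variance analysis can be illustrated in a few lines (generic sketch; estimate_bias_variance.py defines the actual analysis, and responses/targets are hypothetical inputs):

```python
# Generic sketch: decompose the mean squared prediction error into squared bias and variance.
import numpy as np

def bias_variance(responses, targets):
    errors = np.asarray(responses) - np.asarray(targets)
    bias = errors.mean()          # systematic offset of the responses
    variance = errors.var()       # trial-to-trial scatter around that offset (ddof=0)
    mse = bias**2 + variance      # the mean squared error splits exactly into these two terms
    return bias, variance, mse

# Example with made-up numbers:
bias, var, mse = bias_variance([1.1, 0.9, 1.3, 1.0], [1.0, 1.0, 1.0, 1.0])
print(f"bias={bias:.3f}, variance={var:.3f}, mse={mse:.3f}")
```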
- Plot the figures:
  $ cd ../../plot
  - Adjust fname_data= in plot_fig_3.py and plot_fig_4.py to point to your data.
  $ python3 plot_fig_3.py # Plot Figure 3
  $ python3 plot_fig_4.py # Plot Figure 4
  $ cd ..
  $ source run_at_end.sh
- data: Experiment data and simulation/analysis results
- pckg: Python imports of shared classes and functions
- plot: Plotting scripts for Figures 2, 3 and 4
- pred: Simulation and analysis scripts for the prediction task
- rmot: Simulation and analysis scripts for the rotational MOT task
If the 'Arial' font is not installed already:
$ sudo apt-get install ttf-mscorefonts-installer
$ sudo fc-cache
$ python3 -c "import matplotlib.font_manager; matplotlib.font_manager._rebuild()"
...and if you really want it all: the stars indicating significance in Figure 3 use the font "FreeSans".