This is the repository for my project at MONTREAL BRAINHACK SCHOOL, held from 5 to 30 August 2019.
- GOAL: Classify EEG task-related single trials and functional connectivity using machine-learning tools such as the MNE library.
- THEORY: see my OHBM poster on this project
- RAW DATA: Scalp EEG - BioSemi - 512 Hz - 64 electrodes - 50 healthy human subjects (.bdf)
- TASK: Visuo-spatial attention task (about 250 trials per main condition per subject)
- For ERP: on the continuous signal (raw .bdf), blinks and artefacts are filtered out, then the data are segmented in ERPLAB/EEGLAB (.set + .fdt)
- For wPLI: on the continuous signal (raw .bdf), SCD is applied, blinks and artefacts are filtered out, 14 electrodes are selected, the Beta and Gamma bands are filtered, the Hilbert transform is applied and wPLI is computed (.erp), then 10 ICA components (connectomes) are extracted (.mat)
- Data are structured as epochs, to be compatible with the Python pipeline (originally EEGLAB/MATLAB)
- THE PROBLEM IN WORDS: Each epoch, as a voltage signal (ERP) or a feature weight (ICA), is the input to a two-class classifier (attended vs ignored). The classifier's performance provides a multivariate analysis showing in which time periods the features support classification, and which features contribute most.
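The wPLI step in the pipeline above can be sketched as follows. This is a minimal illustration with random placeholder signals (shapes and data are assumptions, not the real dataset): the analytic signal of each band-filtered trial is taken with the Hilbert transform, and wPLI is computed across trials from the imaginary part of the cross-spectrum.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
# Placeholder band-filtered signals for one electrode pair:
# 250 trials x 512 samples (1 s at 512 Hz)
x1 = rng.standard_normal((250, 512))
x2 = rng.standard_normal((250, 512))

# Analytic signals via the Hilbert transform (along the time axis)
z1, z2 = hilbert(x1, axis=1), hilbert(x2, axis=1)

# Imaginary part of the trial-wise cross-spectrum; wPLI weights each
# phase difference by the magnitude of its imaginary component
im = np.imag(z1 * np.conj(z2))
wpli = np.abs(im.mean(axis=0)) / np.abs(im).mean(axis=0)  # one value per time sample
```

By construction wPLI lies in [0, 1]; for independent noise like this it stays near 0, while consistently lagged signals push it toward 1.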
- ALGORITHM: Train a regression model (or an LDA or SVM model) to classify trials into 2 groups.
- Identify which connectome(s), and over which time course, are relevant to this attending/memorizing process.
- --> Find publishable performance values (accuracy, ...) and a nice visualization!!
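A minimal sketch of such a two-class classifier with scikit-learn, using LDA and cross-validated accuracy. The data here are random placeholders (shapes are assumptions); the real features would be the epoch voltages or ICA weights:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder: 500 epochs, features = 64 electrodes x 20 time samples, flattened
X = rng.standard_normal((500, 64 * 20))
y = rng.integers(0, 2, size=500)   # 1 = attended, 0 = ignored

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
# With random labels the mean score should hover around chance (0.5);
# above-chance accuracy on the real data is the result to report.
```

Swapping `LinearDiscriminantAnalysis()` for `sklearn.svm.SVC()` or `sklearn.linear_model.LogisticRegression()` tests the other candidate models with the same pipeline.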
- DATASET (.set): The analyses will use the Hilbert-transformed data of each channel of the relevant links (out of 91).
- PREPROCESS: Identify a link from a connectome with Beta/Gamma activity relevant for differentiating the 2 modes
- ANALYSE: Extract the Granger causality value in (1) the Gamma band and (2) the Beta band, to test whether we corroborate Pascal Fries' model of feedforward and feedback influences.
- Set up my environment on MY COMPUTER: VirtualBox + Ubuntu + Python 3.6 + MNE + Jupyter.
- Set up my environment on a POWERFUL REMOTE COMPUTER in my lab
- Arrange / preprocess the first DATASET (.set)
- Find the right architecture and algorithm (yes, a Linear Regressor!) for the classifier
- Find a nice visualization of results
- [x] Create a video explaining my project
- [x] Create a Jupyter Notebook with the code of a classifier
- Extract the stats
- Use matplotlib for visualizing features
- [ ] Use Seaborn for visualizing features
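A matplotlib starting point for the accuracy-over-time visualization. The curve here is a synthetic placeholder (an assumed Gaussian bump peaking ~300 ms post-stimulus); the real input would be the cross-validated scores per time sample:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")   # headless backend so this runs without a display
import matplotlib.pyplot as plt

times = np.linspace(-0.2, 0.8, 100)   # placeholder epoch window in seconds
# Synthetic accuracy curve: chance level plus a bump around 300 ms
acc = 0.5 + 0.2 * np.exp(-((times - 0.3) ** 2) / 0.01)

fig, ax = plt.subplots()
ax.plot(times, acc, label="decoding accuracy")
ax.axhline(0.5, linestyle="--", color="grey", label="chance")
ax.set(xlabel="Time (s)", ylabel="Accuracy",
       title="Attended vs ignored decoding over time")
ax.legend()
fig.savefig("decoding_accuracy.png")
```

The same arrays drop straight into Seaborn (e.g. `seaborn.lineplot`) once the data are in a DataFrame.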