Data should be organized in the BIDS-iEEG/BIDS-EEG format:

We will consistently obtain raw data in the manufacturer's format or in EDF format. These files alone, however, are not sufficient for analyzing the scalp EEG data; we require additional metadata, which following the BIDS protocol provides.
1. Raw EDF data is initially stored in the `<bids_root>/sourcedata/` folder.
2. BIDS conversion (source EDF data): using `mne-bids`, we convert the raw EEG data to BIDS format.
3. (Temporary: MATLAB auto-ICA) Since auto-ICA currently lives in EEGLAB via the MATLAB module, we run auto-ICA on the raw EDF data. During this step, the data are notch filtered and bandpass filtered between 1 and 30 Hz (to remove higher-frequency muscle artifacts); auto-ICA then attempts to remove stereotypical artifacts.
    - ICA preprocessing is done in EEGLAB, and the data are output in the `.set` format.
    - Script: `episcalp/preprocess/matlab/run_me.m`
4. BIDS conversion (ICA-cleaned data): using `mne-bids`, we convert the ICA-cleaned data to BIDS format, stored as EDF.
    - ICA-preprocessed data are written to the `<bids_root>/derivatives/` folder, using the `mne_bids` `copyfile_eeglab` function to convert to BIDS.
    - Script: `episcalp/bids/run_bids_conversion_ica.py`
    - Further analysis will start either from the raw data or from the ICA-preprocessed data.
- Feature generation code is stored in a subfolder of `episcalp`.
- Scripts that assist in the IO of intermediate results (specifically for notebooks) are located within `sample_code`.
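The data flow above can be sketched in terms of where files land in the BIDS tree. The subject label and task name below are hypothetical, chosen only for illustration; in practice `mne-bids` functions such as `write_raw_bids` construct these paths for you:

```python
from pathlib import Path

# hypothetical BIDS root and entity labels, for illustration only
bids_root = Path("bids_root")
subject = "01"
task = "monitor"

# step 1: raw EDF from the manufacturer goes under sourcedata/
source_file = bids_root / "sourcedata" / f"sub-{subject}" / f"sub-{subject}_eeg.edf"

# step 2: BIDS-converted raw data lives in the main BIDS tree
raw_bids_file = (
    bids_root / f"sub-{subject}" / "eeg" / f"sub-{subject}_task-{task}_eeg.edf"
)

# step 4: ICA-cleaned data goes under derivatives/
ica_file = (
    bids_root / "derivatives" / "ica" / f"sub-{subject}" / "eeg"
    / f"sub-{subject}_task-{task}_eeg.edf"
)

for p in (source_file, raw_bids_file, ica_file):
    print(p.as_posix())
```

This is only a sketch of the layout convention, not the converter itself; the actual conversion scripts live under `episcalp/bids/`.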
High-level details that remain to be sorted out concern the inclusion of Persyst spikes:
- Should we perform Persyst spike detection before or after ICA cleaning?
Setup environment from conda
```bash
# create conda environment
conda create -n episcalp python=3.8

# activate it
conda activate episcalp

# install packages from environment.yml
conda env update --file environment.yml
```
Setup environment from pipenv
```bash
# create virtual environment
python3.8 -m venv .venv
pipenv install --dev

# if dev versions are needed
pip install https://api.github.com/repos/mne-tools/mne-python/zipball/master
pipenv install https://api.github.com/repos/mne-tools/mne-bids/zipball/master
pip install https://api.github.com/repos/mne-tools/mne-connectivity/zipball/master

# pip install for oblique random forests
pip install https://api.github.com/repos/neurodatadesign/manifold_random_forests/zipball/master

# install main version of autoreject
pip install https://api.github.com/repos/autoreject/autoreject/zipball/master
```
If you're using some private repos, such as `eztrack`, here's some helper code for installing. You'll need to clone private repos locally and then install them manually.
```bash
# with pipenv
pipenv install -e /Users/adam2392/Documents/eztrack

# or if you're just using pip
pip install -e /Users/adam2392/Documents/eztrack
```
Development should occur in 3 main steps:
- BIDS organization: any scripts to convert datasets and test BIDS-compliance should go into `bids/`.
- Spatiotemporal heatmap generation: each type of analysis should have its own directory.
- Analysis of features:
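Concretely, the three steps above might map onto a repository layout like the following. Apart from `bids/` and `sample_code`, which are named elsewhere in this document, the folder names here are assumptions:

```
episcalp/
├── bids/           # BIDS conversion and compliance-testing scripts
├── features/       # per-analysis directories for heatmap/feature generation (name assumed)
└── sample_code/    # IO helpers for intermediate results used by notebooks
```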
You need to install `ipykernel` to expose your environment to Jupyter notebooks.

```bash
python -m ipykernel install --name episcalp --user

# now you can run jupyter lab and select a kernel
jupyter lab
```