This repository contains scripts and instructions to reproduce the results from Eelbrain, a toolkit for time-continuous analysis with temporal response functions.
If you're familiar with git, clone this repository. If not, simply download it as a zip file.
The easiest way to install all the required libraries is with conda, which comes with the Anaconda Python distribution. Once conda is installed, run the following from the directory in which this README file is located:
$ conda env create --file=environment.yml
This will install all the required libraries into a new environment called `eelbrain`. Activate the new environment with:
$ conda activate eelbrain
You will have to activate the environment every time you start a new shell session.
Download the Alice EEG dataset. This repository comes with a script that can automatically download the required data from UMD DRUM by running:
$ python download_alice.py
The default download location is `~/Data/Alice`. The scripts in the Alice repository expect to find the dataset at this location. If you want to store the dataset at a different location, provide the location as an argument to the download script:
$ python download_alice.py download/directory
then either create a link to the dataset at `~/Data/Alice`, or change the root path where it occurs in the scripts (always near the beginning).
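The root path is typically defined as a constant near the top of each script; a minimal sketch of such an edit (the variable name `DATA_ROOT` is illustrative here, so check each script for the actual name it uses):

```python
from pathlib import Path

# Hypothetical example: point the root path at your download location.
# The actual variable name may differ between scripts.
DATA_ROOT = Path('~/Data/Alice').expanduser()  # change this to your directory

# Derived paths in the scripts follow from the root
eeg_dir = DATA_ROOT / 'eeg'
```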
This data has been derived from the original dataset using the script at `import_dataset/convert-all.py`.
In order to create the predictors used in the analysis (and for some plots in the figures), execute the scripts in the `predictors` directory (see Subdirectories below).
All TRFs used in the different figures can be computed and saved using the scripts in the `analysis` directory. However, this may require substantial computing time. To get started faster, the TRFs can also be downloaded from the data repository (TRFs.zip). Just move the downloaded `TRFs` folder into the `~/Data/Alice` directory, i.e., as `~/Data/Alice/TRFs`.
Note
Replicability: Due to numerical issues, results can differ slightly across operating systems and hardware. Similarly, implementation changes (e.g., optimizations) can affect results even if the underlying algorithms are mathematically equivalent. Changes in the boosting implementation are noted in the Eelbrain Version History.
Many Python scripts in this repository are actually Jupyter notebooks. They can be recognized by their header, which starts with:
# ---
# jupyter:
# jupytext:
# formats: ipynb,py:light
These scripts were converted to Python scripts with Jupytext for efficient management with git. To turn such a script back into a notebook, run this command (assuming the script is called `notebook.py`):
$ jupytext --to notebook notebook.py
The `predictors` directory contains scripts for generating predictor variables. These should be created first, as they are used in many of the other scripts:

- `make_gammatone.py`: Generate high-resolution gammatone spectrograms, which are used by `make_gammatone_predictors.py`
- `make_gammatone_predictors.py`: Generate continuous acoustic predictor variables
- `make_word_predictors.py`: Generate word-level predictor variables consisting of impulses at word onsets
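To illustrate the word-level predictors: an impulse predictor is a time series that is zero everywhere except at the sample nearest each word onset. A minimal NumPy sketch (the onset times and sampling rate below are made up for the example, not taken from the dataset):

```python
import numpy as np

# Hypothetical word onset times (seconds) and predictor sampling rate (Hz)
word_onsets = [0.12, 0.55, 0.98]
sfreq = 100
duration = 2.0  # seconds

# Impulse predictor: one sample set to 1 at each word onset
predictor = np.zeros(int(duration * sfreq))
for onset in word_onsets:
    predictor[int(round(onset * sfreq))] = 1.0
```

In the actual scripts, the impulse magnitudes can also carry word-level values (e.g., surprisal) instead of a constant 1.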
The `analysis` directory contains scripts used to estimate and save various mTRF models for the EEG dataset. These mTRF models are used in some of the figure scripts.
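The repository estimates its mTRF models with Eelbrain's boosting algorithm. As a rough, self-contained illustration of what a TRF is (a linear filter that, convolved with a predictor, approximates the response), here is a toy ridge-regression sketch on simulated data; it is not the repository's method, just a conceptual demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000       # number of samples
n_lags = 20    # TRF length in samples

# Simulated predictor and a known "true" TRF
x = rng.standard_normal(n)
true_trf = np.exp(-np.arange(n_lags) / 5.0)

# Response = predictor convolved with the TRF, plus noise
y = np.convolve(x, true_trf)[:n] + 0.1 * rng.standard_normal(n)

# Design matrix of time-lagged copies of the predictor
X = np.zeros((n, n_lags))
for lag in range(n_lags):
    X[lag:, lag] = x[:n - lag]

# Ridge estimate of the TRF
lam = 1.0
trf = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ y)
```

The estimated `trf` should closely track `true_trf`; boosting replaces the ridge step with an iterative, sparse estimation procedure with cross-validation.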
The `figures` directory contains the code used to generate all the figures in the paper.
The `import_dataset` directory contains the scripts that were used to convert the data from the original Alice EEG dataset to the format used here.
This tutorial and dataset:
Eelbrain:
Other libraries: