Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition
This repository contains all the code for preprocessing the raw EEG and fMRI data of the bimodal Inner Speech Dataset. The dataset is publicly available in OpenNeuro: https://openneuro.org/datasets/ds004197
Simistira Liwicki, F. et al. "Bimodal dataset on Inner speech". OpenNeuro https://doi.org/10.18112/openneuro.ds004197.v1.0.2 (2022)
The publication is available as a preprint at biorxiv: https://www.biorxiv.org/content/10.1101/2022.05.24.492109v3
The EEG and fMRI modalities use the same experimental protocol, developed with E-Prime.
SPM12 was used to generate the included .mat files.
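The batch files can be run from the Matlab command line without the SPM GUI. A minimal sketch, assuming each .mat file stores the standard matlabbatch variable saved from the SPM12 Batch Editor:

```matlab
% Run one of the included SPM12 batch files (assumes it contains a
% 'matlabbatch' variable saved from the Batch Editor).
spm('defaults', 'FMRI');               % load SPM's fMRI defaults
spm_jobman('initcfg');                 % initialise the batch system
batch = load('preprocessing.mat');     % same pattern for segmentNormalise.mat, firstlevel.mat, secondlevel.mat
spm_jobman('run', batch.matlabbatch);  % execute all stages in order
```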
File = preprocessing.mat
- Calculate VDM
- Inputs: Phase Image, Magnitude Image, Anatomical Image, EPI to Unwarp
- Outputs: Voxel Displacement Map
- Parameters: Echo times [4.92 7.38] ms; total EPI readout time = echo spacing * number of echoes = 33.92 ms; no brain masking
- Realign and Unwarp
- Inputs: All images of the session, Voxel Displacement map
- Output: uCMRR
- Slice Timing
- Input: Output of previous stage
- Parameters: TR = 2.16, Number of slices = 68
- Output: auCMRR
- Coregister: Estimate Only
- Inputs: Reference Image: mean unwarped image; Source Image: anatomical image
- Output: Coregistered Anatomical Image (rT_ prefix). An abridged batch sketch of these four stages follows this list.
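The placeholder paths, file-selection patterns, and omitted fields (slice order, unwarp options, etc.) below are assumptions; preprocessing.mat is the authoritative batch and chains the stages with batch dependencies rather than explicit file names.

```matlab
% Abridged sketch of the preprocessing.mat stages (placeholder paths).
phase_img = 'GRE7/XXXXX_ph.nii';       % phase image (GRE6 for session 2)
mag_img   = 'GRE6/XXXX_e1.nii';        % first magnitude image (GRE5 for session 2)
anat_img  = 'anat/T_subXX.nii';        % anatomical image
epi_scans = cellstr(spm_select('ExtFPList', 'CMRR', '.*\.nii$', Inf));  % all functional volumes

matlabbatch = {};
% 1) Calculate VDM: echo times [4.92 7.38] ms, total EPI readout time 33.92 ms, no brain masking
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.data.presubphasemag.phase     = {phase_img};
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.data.presubphasemag.magnitude = {mag_img};
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.defaults.defaultsval.et        = [4.92 7.38];
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.defaults.defaultsval.tert      = 33.92;
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.defaults.defaultsval.maskbrain = 0;
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.session.epi = epi_scans(1);    % EPI to unwarp
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.anat        = {anat_img};
% 2) Realign & Unwarp of all session images with the VDM -> uCMRR*
matlabbatch{2}.spm.spatial.realignunwarp.data.scans  = epi_scans;
matlabbatch{2}.spm.spatial.realignunwarp.data.pmscan = {'GRE7/vdm5_XXXXX.nii'};    % VDM written by stage 1 (placeholder name)
% 3) Slice timing: TR = 2.16 s, 68 slices -> auCMRR*
matlabbatch{3}.spm.temporal.st.scans   = {cellstr(spm_select('ExtFPList', 'CMRR', '^uCMRR.*\.nii$', Inf))};
matlabbatch{3}.spm.temporal.st.tr      = 2.16;
matlabbatch{3}.spm.temporal.st.nslices = 68;
matlabbatch{3}.spm.temporal.st.ta      = 2.16 - 2.16/68;
% 4) Coregister (estimate only): anatomical image to the mean unwarped image
matlabbatch{4}.spm.spatial.coreg.estimate.ref    = {'CMRR/meanuCMRR_XXXXX.nii'};   % mean unwarped image (placeholder name)
matlabbatch{4}.spm.spatial.coreg.estimate.source = {anat_img};
% Remaining required fields (e.g. slice order/reference slice) are set in the
% shipped preprocessing.mat; once filled in, run with:
% spm_jobman('run', matlabbatch);
```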
File = segmentNormalise.mat
- Segmentation:
- Input: coregistered anatomical image (rT_)
- Parameters: write the Forward Deformation field
- Output: y_ prefix (forward deformation field)
- Normalisation to MNI Space
- Input: Deformation Field: y_ file from segmentation step, Images to Write: auCMRR
- Output: wauCMRR
- Smoothing
- Input: wauCMRR
- Parameters: Gaussian kernel, FWHM [8 8 8] mm
- Output: swauCMRR (see the abridged batch sketch after this list)
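The sketch below uses placeholder file names as assumptions; segmentNormalise.mat remains the authoritative batch and connects the stages with dependencies instead of re-selecting files between them.

```matlab
% Abridged sketch of the segmentNormalise.mat stages (placeholder paths).
% Segmentation of the coregistered anatomical; writes the forward deformation (y_*)
matlabbatch = {};
matlabbatch{1}.spm.spatial.preproc.channel.vols = {'anat/rT_subXX.nii'};
matlabbatch{1}.spm.spatial.preproc.warp.write   = [0 1];   % [inverse forward] -> forward deformation only
spm_jobman('run', matlabbatch);

% Normalise the slice-timed, unwarped EPIs (auCMRR*) to MNI space -> wauCMRR*
matlabbatch = {};
matlabbatch{1}.spm.spatial.normalise.write.subj.def      = {'anat/y_rT_subXX.nii'};
matlabbatch{1}.spm.spatial.normalise.write.subj.resample = cellstr(spm_select('ExtFPList', 'CMRR', '^auCMRR.*\.nii$', Inf));
spm_jobman('run', matlabbatch);

% Smooth with an [8 8 8] mm FWHM Gaussian kernel -> swauCMRR*
matlabbatch = {};
matlabbatch{1}.spm.spatial.smooth.data = cellstr(spm_select('ExtFPList', 'CMRR', '^wauCMRR.*\.nii$', Inf));
matlabbatch{1}.spm.spatial.smooth.fwhm = [8 8 8];
spm_jobman('run', matlabbatch);
```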
Run firstlevel.mat for each subject and session separately to produce the beta images and contrasts. The inputs should also include the rp_ movement-parameter files. Researchers may choose to use the beta images produced by this first-level step for decoding, or the swauCMRR or wauCMRR images for decoding of inner speech.
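As a hedged sketch of what such a first-level specification can look like for one subject and session, assuming the swauCMRR* images and the rp_ file as nuisance regressors; the condition onsets (from the E-Prime logs) are omitted here, and the shipped firstlevel.mat defines the actual model:

```matlab
% Hedged first-level sketch for one subject/session (placeholder paths;
% condition onsets omitted -- see firstlevel.mat for the actual model).
func   = cellstr(spm_select('ExtFPList', 'CMRR', '^swauCMRR.*\.nii$', Inf));  % smoothed, normalised EPIs
rpfile = spm_select('FPList', 'CMRR', '^rp_.*\.txt$');                        % realignment parameters

matlabbatch = {};
matlabbatch{1}.spm.stats.fmri_spec.dir = {'firstlevel'};                      % output folder (must exist)
matlabbatch{1}.spm.stats.fmri_spec.timing.units = 'secs';
matlabbatch{1}.spm.stats.fmri_spec.timing.RT    = 2.16;                       % TR
matlabbatch{1}.spm.stats.fmri_spec.sess.scans     = func;
matlabbatch{1}.spm.stats.fmri_spec.sess.multi_reg = {rpfile};                 % rp_ movement parameters as regressors
matlabbatch{2}.spm.stats.fmri_est.spmmat = {fullfile('firstlevel', 'SPM.mat')};
spm_jobman('run', matlabbatch);
```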
Run secondlevel.mat to compute the group-level statistics.
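The group-level design is defined in secondlevel.mat; purely as a hedged illustration, a one-sample t-test over one first-level contrast image per subject could be set up as follows (folder names and the contrast name are assumptions):

```matlab
% Hedged group-level sketch: one-sample t-test over one contrast image per
% subject (illustrative only; secondlevel.mat defines the actual design).
con_images = cellstr(spm_select('FPListRec', 'firstlevel_results', '^con_0001\.nii$'));  % placeholder folder/name

matlabbatch = {};
matlabbatch{1}.spm.stats.factorial_design.dir = {'secondlevel'};              % output folder (must exist)
matlabbatch{1}.spm.stats.factorial_design.des.t1.scans = con_images;
matlabbatch{2}.spm.stats.fmri_est.spmmat = {fullfile('secondlevel', 'SPM.mat')};
matlabbatch{3}.spm.stats.con.spmmat = {fullfile('secondlevel', 'SPM.mat')};
matlabbatch{3}.spm.stats.con.consess{1}.tcon.name    = 'group mean';
matlabbatch{3}.spm.stats.con.consess{1}.tcon.weights = 1;
spm_jobman('run', matlabbatch);
```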
To assess the framewise displacement for technical validation, use FWD_script.py. Its inputs should include the rp_ movement-parameter files; the output is a plot that can be saved to file.
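FWD_script.py implements this in Python; purely as an illustration of the computation, a minimal Matlab sketch of the standard (Power-style) framewise displacement from one rp_*.txt file might look like this (the file name and the 50 mm head-radius conversion are assumptions):

```matlab
% Minimal framewise-displacement sketch from an SPM rp_*.txt file
% (placeholder file name; 50 mm head radius assumed for rotations).
rp     = load('rp_CMRR_subXX.txt');        % 6 columns: 3 translations (mm), 3 rotations (rad)
rot_mm = rp(:, 4:6) * 50;                  % convert rotations to mm on a 50 mm sphere
fd     = [0; sum(abs(diff([rp(:, 1:3) rot_mm])), 2)];    % FD per volume
plot(fd); xlabel('Volume'); ylabel('Framewise displacement (mm)');
saveas(gcf, 'fwd_subXX.png');              % save the plot to file
```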
General Information:
- Anatomical images are in the anat folder: T_subXX.nii
- Phase images end with XXXXX_ph.nii and are available in GRE7 for session 1 and GRE6 for session 2.
- Magnitude images are found in GRE6 (session 1) and GRE5 (session 2). There are two magnitude files (XXXX_e1.nii and XXXX_e2.nii) in each folder; we use XXXX_e1.nii.
- The functional images (EPI to unwarp) are available in the CMRR folder across all sessions for all subjects. The number of volumes varies from 1039 to 1100, so extra attention must be paid when mapping them.
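As a hedged example of mapping this layout onto the pipeline inputs (folder names and patterns follow the list above; the subject/session path is a placeholder):

```matlab
% Gather one session's inputs from the layout described above
% (session 1 folders shown; use GRE6/GRE5 for session 2; paths are placeholders).
sess_dir  = fullfile('sub-XX', 'ses-01');
anat_img  = spm_select('FPList', fullfile(sess_dir, 'anat'), '^T_sub.*\.nii$');
phase_img = spm_select('FPList', fullfile(sess_dir, 'GRE7'), '_ph\.nii$');
mag_img   = spm_select('FPList', fullfile(sess_dir, 'GRE6'), '_e1\.nii$');
epi_scans = cellstr(spm_select('ExtFPList', fullfile(sess_dir, 'CMRR'), '.*\.nii$', Inf));
fprintf('Found %d functional volumes\n', numel(epi_scans));   % expect roughly 1039-1100
```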
EEGLAB is an open-source signal-processing environment for electrophysiological signals that runs on Matlab and Octave. Download it from the official site: https://sccn.ucsd.edu/eeglab/download.php
- Start Matlab and navigate to the folder containing EEGLAB.
- Type eeglab at the Matlab command prompt and press Enter. A pop-up window appears, showing all the functions and modules available in EEGLAB.
- Now we start preprocessing the data. The full stepwise description and documentation of the pipeline is given in preprocessing_pipeline.m.
- preprocessing_pipeline.m imports the data (in .bdf format) and performs the preprocessing steps: filtering, re-referencing, removal of noise by ICA decomposition, etc.
- During ICA decomposition, we delete the components that correspond to motor-related artifacts such as blinking, jaw, neck, arm, or upper-back movements. This step was performed by manual inspection of the scores shown by the ICA decomposition tool.
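preprocessing_pipeline.m is the authoritative EEG pipeline; below is a minimal EEGLAB sketch of the steps named above, with the file name, filter cut-offs, and rejected component indices as purely illustrative assumptions:

```matlab
% Minimal EEGLAB sketch of the steps above (illustrative values only;
% see preprocessing_pipeline.m for the actual pipeline).
[ALLEEG, EEG, CURRENTSET, ALLCOM] = eeglab;                  % start EEGLAB
EEG = pop_biosig('subXX_innerspeech.bdf');                   % import .bdf (Biosig plug-in); placeholder file name
EEG = pop_eegfiltnew(EEG, 0.5, 40);                          % band-pass filter (assumed cut-offs)
EEG = pop_reref(EEG, []);                                    % re-reference to the channel average
EEG = pop_runica(EEG, 'icatype', 'runica');                  % ICA decomposition
EEG = pop_subcomp(EEG, [1 2], 0);                            % remove artifact components chosen by manual inspection (indices illustrative)
EEG = eeg_checkset(EEG);
pop_saveset(EEG, 'filename', 'subXX_preprocessed.set');      % save the cleaned dataset
```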
**Note:** documentation is included in the scripts.
@article{SimistiraLiwicki2022.05.24.492109,
author = {Simistira Liwicki, Foteini and Gupta, Vibha and Saini, Rajkumar and De, Kanjar and Abid, Nosheen and Rakesh, Sumit and Wellington, Scott and Wilson, Holly and Liwicki, Marcus and Eriksson, Johan},
title = {Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition},
elocation-id = {2022.05.24.492109},
year = {2022},
doi = {10.1101/2022.05.24.492109},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/early/2022/11/30/2022.05.24.492109},
eprint = {https://www.biorxiv.org/content/early/2022/11/30/2022.05.24.492109.full.pdf},
journal = {bioRxiv}
}