
Working Memory Architecture and Demand Model

This project is the result of work from the Montbretia Cabinet team during the Neuromatch Academy Computational Neuroscience Course and the Neuromatch Academy Impact Scholars Program.

For information on using this repository, please refer to the "Running the notebooks in this repository" section below.

Project Summary

Working memory (WM) allows us to temporarily hold information "online", supporting higher cognitive functions such as decision-making, attention, and problem-solving. A key brain region responsible for WM is the prefrontal cortex, an area that differentiates us from other animals. Although our WM resources are limited, WM adapts dynamically to the specific demands of different tasks.

This study uses task-based fMRI data from a large-scale project, the Human Connectome Project, to investigate WM across a range of tasks and conditions. Task-based fMRI measures brain activation during WM tasks in which subjects are asked to recall images presented several trials earlier.

In our project, statistical and deep learning models acted as "sensors" for task demand and WM activity, predicting how much WM is involved in the emotion and language domains. This work provides insights into WM theories from both computational and neurocognitive perspectives, with significant implications for education and cognitive rehabilitation.

If you're more into videos, here's our presentation on Neuromatch's YouTube channel: Working Memory Involvement in Higher Cognition: Insights from fMRI Modeling

Also, since our micropublication is now live on the interwebs, here's the APA citation for your convenience:

Sanaaee, S., Mu, B., Tang, C. K.-M., Caslick-Waller, Z. R. G., Li, W., Gao, T., & Rodriguez Cruces, R. (2025). Parallel GNN-LSTM Model Predicting Working Memory Involvement during Language and Emotion Processing (1.0). Zenodo. https://doi.org/10.5281/zenodo.15126506

Please use this form of the citation when you want to cite us!


Meet the Team!

Member                  Where could you find them?
----------------------  --------------------------
Baitong Mu              ORCID, GitHub, LinkedIn
Carmen Tang             ORCID, GitHub
Zeb Caslick-Waller      GitHub, LinkedIn
Wendi Li                ORCID
Tony Gao                ORCID, LinkedIn
Raúl Rodriguez Cruces   ORCID, GitHub
Saameh Sanaaee          ORCID, LinkedIn, Bluesky

Data

The main providers of our data are the Human Connectome Project (HCP), Neuromatch Academy (NMA), and the Open Science Framework (OSF).

We are using the HCP data subset that NMA provided during the NMA Computational Neuroscience (CN) 2024 course. The data is a 100-subject subset of the original HCP Young Adult dataset; it is accessible on the Neuromatch OSF page and downloadable through code provided in the computational neuroscience course content from NMA.

The task-based fMRI data from HCP are time-series blood-oxygen-level-dependent (BOLD) signals, which reflect brain activity through increased blood flow and oxygenation in the active areas of the brain. These signals are organized by "region" (see the regions.npy section of the WMD Jupyter Notebook), corresponding to the 360 areas of the Glasser parcellation.
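As a minimal sketch of inspecting this parcellation metadata, assuming the NMA-style layout where regions.npy holds one row of parcel metadata (name, network, myelination) per Glasser parcel, you could do something like the following. The HCP_DIR path is a hypothetical placeholder; check the WMD notebook for the exact file layout.

import numpy as np

HCP_DIR = "./hcp_task"  # hypothetical path to the downloaded NMA/OSF subset

# regions.npy stores per-parcel metadata; transpose for column-wise access
regions = np.load(f"{HCP_DIR}/regions.npy").T
region_names = regions[0].tolist()

print(f"Number of parcels: {len(region_names)}")  # expected: 360
print("First five parcels:", region_names[:5])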

Running the notebooks in this repository

In terms of reproducibility, all of our Jupyter Notebooks are self-contained and can be run independently of other files.

The best way to run these notebooks is Google Colab, which gives you faster data access without the hassle of downloading the files directly from OSF.

However, to make sure you have all dependencies installed, first ensure you have Python and pip installed, then navigate to the main directory and install the packages listed in requirements.txt with the following command:

pip install -r ./notebooks/requirements.txt

This command will install all the necessary libraries listed in the requirements.txt file.


Using YAML with Conda

If you prefer using the environment.yml file with Conda, follow the steps below. Prerequisites: ensure you have Python, pip, and Conda installed on your system; please refer to each tool's official documentation for installation instructions.

Once you have Conda installed, navigate to the main directory containing the environment.yml file and create the environment using the following command:

conda env create -f environment.yml
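After the environment is created, activate it before running the notebooks. The name below is a placeholder; use the name field defined in environment.yml:

conda activate <env-name>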

In addition to this README.md, there are other markdown files provided in each of our folders that help you navigate the repository.

We should mention that many functions, especially those for setup and data acquisition, were adapted from the NMA-provided notebook (see the Data section). Other functions have detailed docstrings that provide sufficient information to help you use them.

Note

This repository and its documentation will be updated throughout Spring and Summer of 2025, and code snippets demonstrating proper use of the functions will be added, so please be patient while we finish the final iterations. For now, the notebooks can simply be run from top to bottom as they are, so you won't run into any issues if you're looking to use them. There have also been changes to the code and models that will be fully documented in the README.md files. In the meantime, please open an issue if you see a giant red flag. We would really appreciate that!


Models

Two different models were designed.

The Working Memory Demand (WMD) model, a Multilayer Perceptron (MLP), was created during the NMA CN course in the summer of 2024 as our preliminary (proof-of-concept) model. Entering the Impact Scholars Program (ISP), we then expanded it into an improved model, the Working Memory Architecture and Demand (WMAD) model. WMAD comprises a pair of parallel models that merge through a fusion layer and then feed into an MLP (with a structure similar to the WMD).

These two models give us insight into the spatial activity of working memory and its structure while fulfilling the role of a "task demand sensor".

WMD (Preliminary) Model

The WMD is a classifier MLP constructed from dense and dropout layers organized into four paired layers. It takes in 360 parcel-based average BOLD signals and generates two outputs: probability values between 0 and 1 that correspond to the probability of the exposure being a high-demand task (0 corresponds to a 0-back, low-demand task, while 1 corresponds to a 100% probability of the exposure being a 2-back, high-demand task).
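As a rough illustration of this design, here is a minimal PyTorch sketch of a WMD-style classifier: four dense-plus-dropout pairs over the 360 parcel-averaged BOLD inputs, ending in a two-way softmax. The hidden-layer widths, dropout rate, and activation choices are assumptions for illustration, not the repository's exact values.

import torch
import torch.nn as nn

class WMDClassifier(nn.Module):
    def __init__(self, n_parcels: int = 360, n_classes: int = 2):
        super().__init__()
        widths = [256, 128, 64, 32]  # assumed hidden sizes
        layers, in_dim = [], n_parcels
        for w in widths:
            # one "paired layer": dense + dropout
            layers += [nn.Linear(in_dim, w), nn.ReLU(), nn.Dropout(0.3)]
            in_dim = w
        layers.append(nn.Linear(in_dim, n_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 360) parcel-averaged BOLD -> (batch, 2) class probabilities
        return torch.softmax(self.net(x), dim=-1)

model = WMDClassifier()
demo = torch.randn(8, 360)  # a fake batch of parcel averages
print(model(demo).shape)    # torch.Size([8, 2])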

WMAD Model

The WMAD model builds off of the WMD. Main improvements of the WMAD model are use of time series instead of avergae BOLD signals, addition of GNN and LSTM to the model architecture, and higher interpretability of the results due to a more granular output. The output of the GNN-LSTM is similar to the preliminary model, but instead of a single probability value, it generates the probabilities for each of the 360 parcels. With this change, we can identify parcels that contribute the most to WM function. The GNN-LSTM model gives us insight into the spatial architecture of WM.
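To make the parallel architecture concrete, here is an illustrative PyTorch sketch: an LSTM branch over each parcel's BOLD time series, a simple one-layer graph convolution branch across parcels, a concatenation-based fusion layer, and an MLP head that emits one probability per parcel. The layer sizes, the run length, the identity adjacency placeholder, and fusion-by-concatenation are all assumptions; see the WMAD notebook for the actual architecture.

import torch
import torch.nn as nn

class WMADSketch(nn.Module):
    def __init__(self, n_parcels=360, n_timepoints=176, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.gnn_weight = nn.Linear(n_timepoints, hidden)  # one graph-conv layer
        self.fusion = nn.Linear(2 * hidden, hidden)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, x, adj):
        # x: (batch, parcels, time) BOLD series; adj: (parcels, parcels), normalized
        b, p, t = x.shape
        # LSTM branch: run each parcel's series through the LSTM, keep last hidden state
        _, (h, _) = self.lstm(x.reshape(b * p, t, 1))
        lstm_feat = h[-1].reshape(b, p, -1)                 # (b, p, hidden)
        # GNN branch: propagate time series along the parcel graph (A_hat @ X @ W)
        gnn_feat = torch.relu(self.gnn_weight(adj @ x))     # (b, p, hidden)
        # Fusion layer: merge both branches, then the MLP head scores each parcel
        fused = torch.relu(self.fusion(torch.cat([lstm_feat, gnn_feat], dim=-1)))
        return torch.sigmoid(self.head(fused)).squeeze(-1)  # (b, p) per-parcel probs

model = WMADSketch()
x = torch.randn(2, 360, 176)  # fake BOLD: 2 runs, 360 parcels, assumed run length
adj = torch.eye(360)          # placeholder adjacency matrix
print(model(x, adj).shape)    # torch.Size([2, 360])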

Generalization of WMAD

While the results from the WMD gave us valuable insight into how the MLP output could predict WM involvement in other tasks, it did so in a very general way. When we presented emotion and language data to the WMD, it predicted task demand with 90% accuracy but offered little information on the spatial activity.

Although this information on the demand of tasks like "arithmetic problem-solving" or "deriving the context of a story" is very valuable, we need a more quantitative look into these predictions. The GNN-LSTM provides parcel-based predictions that show which parcels are working in "high-demand" mode. This information, along with the knowledge of "significant parcels of WM function" (derived through model interpretation), can show us how and where WM is involved in other tasks like emotion or language. And these predictions are just the start: HCP alone provides many more tasks that are beyond intriguing when measured through this WM-involvement lens, and that is the next step.

Acknowledgements

Data were provided in part by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University.

In addition, we thank Neuromatch Academy (NMA) and the Open Science Framework (OSF) for providing the HCP 100-subject data subset for this project. Finally, we are incredibly grateful to Linzan Liu and Matin Yousefabadi for their support, mentorship, and advice during the NMA CN course in the summer of 2024.
