Statistical Model for SDI - first level and second level #66

Open · 10 tasks done
venkateshness opened this issue Aug 15, 2023 · 0 comments · May be fixed by #73
Labels: Enhancement (New feature or request)

@venkateshness (Collaborator) commented on Aug 15, 2023

Set up within- and between-subject statistical tests, alongside correction for multiple comparisons.

Detailed Description

First, check the consistency of the effects at the trial/event level (within subject); second, fit the group-level (between-subject) model, followed by permutation-based correction for multiple comparisons, to quantify the strength of the effects in the empirical SDI compared against the surrogate SDIs, both within and across subjects.
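A minimal sketch of the two levels, assuming the empirical SDI for one subject is an array of shape (n_trials, n_parcels) and the surrogate SDI is (n_trials, n_surrogates, n_parcels); the function names and array layout are illustrative, not existing NiGSP API:

```python
import numpy as np
from scipy.stats import wilcoxon
from mne.stats import permutation_t_test  # kept as an optional dependency, see below


def first_level(emp_sdi, surr_sdi):
    """Within-subject consistency of the SDI effect across trials/events.

    emp_sdi  : (n_trials, n_parcels) empirical SDI for one subject
    surr_sdi : (n_trials, n_surrogates, n_parcels) surrogate SDI for one subject
    """
    # Per-trial difference between the empirical SDI and the surrogate average.
    diff = emp_sdi - surr_sdi.mean(axis=1)
    # Signed-rank test across trials, one test per parcel.
    pvals = np.array([wilcoxon(diff[:, p]).pvalue for p in range(diff.shape[1])])
    # Return the per-parcel effect (mean difference) to feed the second level.
    return diff.mean(axis=0), pvals


def second_level(subject_effects, n_permutations=10000, seed=42):
    """Across-subject massive univariate test with max-statistic permutation correction.

    subject_effects : (n_subjects, n_parcels) first-level effects, zero under the null
    """
    t_obs, p_corrected, _ = permutation_t_test(
        subject_effects, n_permutations=n_permutations, seed=seed
    )
    return t_obs, p_corrected
```

With this layout, the second level is a one-sample test of the subject-level effects against zero, and sign-flip permutations provide family-wise error control across parcels.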

Context / Motivation

Trial/event-level SDI needs to be checked for consistency before moving to the group level; this hierarchical statistical model handles that.

Possible Implementation

Set up the code to:

  1. Build a distribution from the empirical and surrogate SDIs using the signed-rank test
  2. Perform a one-sample t-test to test the trial/event-level SDI against the null
  3. Perform massive univariate tests across subjects, then correct for multiple comparisons using permutations

  • Reimplement Scipy's signed-rank test so it is natively included in NiGSP
  • Run tests and reproduce the original results; checks out okay?
  • Reimplement MNE's ttest_1samp_no_p for native inclusion
  • Run tests and reproduce the original results; checks out okay?
  • Figure out whether reimplementing MNE-Python's permutation_t_test is viable; EDIT: it has many local imports, and making it all native would make the code convoluted and hard to read and follow, so it is better to keep mne as an optional requirement for stats (see the sketch after this list)
  • Final tests pass?
  • Moderate and quick refactoring of the code
  • Polish the docstrings
  • Run and test locally
  • Automated tests
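Following the decision above to keep mne as an optional requirement, a minimal sketch of how the stats code could guard the import; the function name and error message are illustrative, not NiGSP's actual convention:

```python
# Optional import guard: mne is only needed for the group-level statistics.
try:
    from mne.stats import permutation_t_test, ttest_1samp_no_p
except ImportError:
    permutation_t_test = None
    ttest_1samp_no_p = None


def group_level_stats(subject_effects, n_permutations=10000, seed=None):
    """Second-level permutation t-test; fails with a clear message without mne."""
    if permutation_t_test is None:
        raise ImportError(
            "mne is required for the permutation-based group-level statistics; "
            "install it with `pip install mne`."
        )
    return permutation_t_test(
        subject_effects, n_permutations=n_permutations, seed=seed
    )
```

This keeps the core package importable without mne while failing loudly only when the permutation-based stats are actually requested.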
@venkateshness added the Enhancement (New feature or request) and Minormod (This PR generally closes an `Enhancement` issue. It increments the minor version (0.+1.0)) labels on Aug 15, 2023
@venkateshness self-assigned this on Aug 15, 2023
@venkateshness linked a pull request (18 tasks) on Aug 20, 2023 that will close this issue
@smoia removed the Minormod (This PR generally closes an `Enhancement` issue. It increments the minor version (0.+1.0)) label on Aug 21, 2023