Goals and Strategy #1
Comments
I wasn't planning to look into the preprocessing pipeline or data quality/acquisition parameters, mostly because I was limiting the scope of this analysis since I didn't have much time to dedicate to it. If you and @dowdlelt are planning to spend some time on this, then I think those can definitely fall within the scope of the project. In terms of metrics, some additions to your own that I think would be useful are:
Can't speak for @dowdlelt, but I think it's worth doing the work to take that into account, and I suspect I have the time. These additional metrics sound good to me.
I like those metrics, though there is an obvious difficulty with resting-state data, for which no task model exists. Perhaps for resting-state scans we could use a seed region corresponding to a typical resting-state node as the analysis method. We should get reasonable maps of a resting-state network that way, and be able to generate similar voxel significance maps. Maybe a couple of different seed regions...
Yeah, a seed-to-voxel analysis with a common seed is a good option for doing this with resting-state data. What about one seed for each of the major canonical networks? E.g., default mode (PCC), executive control (dmPFC), and salience (anterior insula).
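For reference, the core of a seed-to-voxel analysis is just a Pearson correlation between a seed region's mean time series and every voxel's time series. A minimal NumPy sketch, assuming data already preprocessed and flattened to a voxels-by-timepoints array (the function name, array shapes, and synthetic data are illustrative, not from the project's code):

```python
import numpy as np

def seed_to_voxel_map(data, seed_ts):
    """Correlate a seed time series with every voxel's time series.

    data: (n_voxels, n_timepoints) array of voxel time series.
    seed_ts: (n_timepoints,) mean time series of the seed region.
    Returns an (n_voxels,) array of Pearson correlation coefficients.
    """
    # Demean along time so the dot product gives the covariance numerator.
    data_c = data - data.mean(axis=1, keepdims=True)
    seed_c = seed_ts - seed_ts.mean()
    num = data_c @ seed_c
    denom = np.sqrt((data_c ** 2).sum(axis=1) * (seed_c ** 2).sum())
    return num / denom

# Synthetic example: voxel 0 tracks the seed, voxel 1 is pure noise.
rng = np.random.default_rng(0)
seed = rng.standard_normal(200)
data = np.vstack([seed + 0.1 * rng.standard_normal(200),
                  rng.standard_normal(200)])
r = seed_to_voxel_map(data, seed)
```

The resulting correlation maps could then be Fisher z-transformed and compared across pipelines the same way as the task-based significance maps.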
One potential issue with group comparisons across seeds is that using the same seed coordinates for different data (e.g., different subjects) doesn't guarantee the comparison is meaningful. If we compare group-level maps from one seed to the next, we have a combination of within-subject variability due to the seed, between-subject variability due to the data, and between-subject variability due to the seed. That's probably not the most accurate way of describing it, but I think the logic is sound. I figure that we have three possible solutions:
I would like to flesh out the analysis plan in a Google Doc, but before I start on that I want to ensure that we've figured out the cross-seed comparison issue. Would everyone please take a look at #3 and weigh in there? Addendum: We also need to figure out the issue above (group comparisons when the seed doesn't matter).
Regarding the idea of predicting whether convergence issues will occur, two additional measures came to mind during the phone call: TSNR of the data. In addition, thinking out loud on others: sampling rate, voxel size (which would certainly relate to the smoothness), and type of head coil (number of channels, perhaps)?
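TSNR (temporal signal-to-noise ratio) is straightforward to compute as the voxelwise temporal mean divided by the temporal standard deviation. A minimal sketch, assuming a voxels-by-timepoints array (the function name and synthetic data are illustrative):

```python
import numpy as np

def tsnr(data, axis=-1):
    """Voxelwise temporal SNR: mean over standard deviation across time.

    data: array with time along `axis`. Voxels with zero temporal
    variance get a TSNR of 0 to avoid division by zero.
    """
    mean = data.mean(axis=axis)
    std = data.std(axis=axis)
    return np.divide(mean, std, out=np.zeros_like(mean), where=std > 0)

# Synthetic example: 10 voxels x 200 timepoints, mean 1000, noise SD 20,
# so TSNR should come out near 50 per voxel.
rng = np.random.default_rng(1)
data = 1000 + 20 * rng.standard_normal((10, 200))
vals = tsnr(data)
```

For multi-echo data, TSNR would presumably be computed per echo (or on the optimally combined series) before being related to tedana convergence.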
I'm opening this issue so that we can discuss and outline the goals and strategy for this analysis. I would like to propose the following:
Goals:
Strategy:
Apologies to @tsalo if this was discussed elsewhere or is already in the code; I took a look and see that you have some setup for fmriprep and running tedana already, but I would like to investigate afni_proc preprocessing as well.