Should we use NIDM-Results? #7
Comments
I think it would be interesting to explore this. Indeed, the NIDM-Results format allows storing both contrast images and error maps. The downside is that it uses technologies with a fairly steep learning curve.
Yeah... after looking at the different repositories
I wrote a very simple notebook here to read in a folder containing a collection of NIDM-Results packs (in this case the 21 pain studies collection from NeuroVault) and convert it to a NiMARE-compatible json file. It might be a useful jumping-off point for a future version of the NeuroVaultExtractor we plan to implement in NiMARE.
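The notebook itself isn't linked here, but the conversion step it describes might look roughly like the sketch below. The folder layout, the file names (`Contrast.nii.gz`, `ContrastStandardError.nii.gz`), and the json schema are all assumptions for illustration; as noted later in this thread, real packs don't guarantee file names, so the paths would have to come from the pack's `.ttl` metadata.

```python
import json
from pathlib import Path


def folder_to_nimare_json(folder, out_file):
    """Collect per-study image files from a folder of extracted
    NIDM-Results packs into a single NiMARE-style json file.

    Assumes one subdirectory per study and hypothetical fixed file
    names; a real converter must resolve file names by parsing each
    pack's .ttl metadata instead.
    """
    dataset = {}
    for study_dir in sorted(Path(folder).iterdir()):
        if not study_dir.is_dir():
            continue
        images = {}
        con = study_dir / "Contrast.nii.gz"
        se = study_dir / "ContrastStandardError.nii.gz"
        if con.exists():
            images["con"] = str(con)
        if se.exists():
            images["se"] = str(se)
        if images:
            dataset[study_dir.name] = {"contrasts": {"1": {"images": images}}}
    with open(out_file, "w") as f:
        json.dump(dataset, f, indent=2)
    return dataset
```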
AFAIK NIDM-Results does not make any guarantees about file naming conventions. To get the necessary files you need to parse the `.ttl` metadata. @cmaumet might know about Python functions to help with parsing ttl. There is also some code in NeuroVault for this: https://github.com/NeuroVault/NeuroVault/blob/c2975e35bec13dc36d694ac9894b57935b83627b/neurovault/apps/statmaps/nidm_results.py
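To make the point concrete, here is a toy sketch of pulling file locations out of Turtle metadata. A real parser should load the graph with rdflib (`Graph().parse(..., format="turtle")`) and query it with SPARQL, as the NeuroVault code does; the `prov:atLocation` predicate and the snippet below are assumptions based on how NIDM-Results records file paths, and the regex is only to illustrate that file names live in the `.ttl`, not in any fixed naming convention.

```python
import re

# Minimal made-up Turtle fragment standing in for a real NIDM-Results pack.
TTL_SNIPPET = """
niiri:contrast_map_id prov:atLocation "Contrast.nii.gz"^^xsd:anyURI .
niiri:se_map_id prov:atLocation "ContrastStandardError.nii.gz"^^xsd:anyURI .
"""


def extract_locations(ttl_text):
    """Return every quoted prov:atLocation value found in the text.

    Toy regex approach; use rdflib + SPARQL for real packs.
    """
    return re.findall(r'prov:atLocation\s+"([^"]+)"', ttl_text)
```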
Oh... darn. Thanks for the heads up, though, and the NeuroVault code looks like it could be very helpful. I hadn't come across it before. |
Hi @tsalo! I am sorry your experience with NIDM has not been great so far.
Can I get back to you when I have a first version of the reading implemented in the nidmresults library? I would love to get feedback! More generally, if you have recommendations on how we could make the overall NIDM documentation more welcoming, that would be very much appreciated. The most up-to-date info is located in the README of the nidm-specs repository: https://github.com/incf-nidash/nidm-specs. Thank you! @chrisfilo: thanks a lot for pinging me!
Hi @cmaumet. Thank you so much. NIDM is extremely cool, it just seems to have a steep learning curve. I am planning to become more familiar with the specifications and tools, so I would be happy to provide feedback on whatever stumbling blocks I encounter in the documentation as I learn. I didn't realize nidmresults was meant to read the Results packs as well as write them, but that's great to hear. I can definitely wait until the reading functionality in nidmresults is ready, and will be excited to use it in NiMARE when that happens.
I've pushed this to neurostuff/pyNIMADS#1. This way we can still work on a NIMADS module prior to splitting it off into its own package, but we don't need to support NIDM-Results until it is its own package. |
NIDM-Results objects contain all of the information we could need from contrasts for NiMARE, plus the tools for parsing those objects (i.e., PyNIDM) seem pretty useful. I don’t know how stable the specification or the tools are (it looks like both are under constant development), but it seems like we could build the NiMARE database/dataset classes around the NIDM-Results objects.
I also don’t know whether it’s possible to build the Results objects with partial information (e.g., just coordinates and metadata), but NeuroVault supports them, and we could request that Brainspell output its database in that format as well. That should make it easier to download and merge everything, not to mention it would automatically link SE and contrast images for IBMAs (something that has been vexing me).
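For a sense of what "partial information" could mean on the NiMARE side, a coordinate-only study record might look like the sketch below. All field names here are illustrative, not a published NiMARE or NIMADS schema; the point is just that a record can carry coordinates and metadata while the image fields stay empty until SE/contrast maps are linked.

```python
import json

# Hypothetical coordinate-only study record (field names are made up
# for illustration, loosely modeled on a NiMARE-style json dataset).
partial_study = {
    "study_01": {
        "contrasts": {
            "1": {
                "metadata": {"sample_size": 25, "space": "MNI"},
                "coords": {
                    "x": [-38.0, 42.0],
                    "y": [22.0, -6.0],
                    "z": [14.0, 50.0],
                },
                # Image fields stay empty until maps are linked.
                "images": {},
            }
        }
    }
}

# Records like this serialize cleanly and can be merged with
# image-based records later without changing the overall layout.
serialized = json.dumps(partial_study, indent=2)
```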