What modifications can be done to optimally combine ASL signal instead of BOLD? #1046
Comments
Hello @sboylan, why do you want to apply multi-echo with ASL data? AFAIK, in ASL one wants to use the shortest possible TE to get the perfusion signal. Can you explain your use case in more detail?
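As an illustration of that point (this is an editorial sketch, not something from the thread): under simple monoexponential T2* decay, the control-label difference signal shrinks with TE, so the shortest TE preserves the most perfusion contrast. The T2* and TE values below are made up.

```python
import numpy as np

t2_star = 0.045          # s, assumed grey-matter T2* (illustrative value only)
delta_s0 = 1.0           # arbitrary control-label difference at TE = 0
tes = np.array([0.010, 0.030, 0.060])  # s, hypothetical echo times

# The perfusion-weighted difference decays with TE like any T2*-weighted signal:
# delta_S(TE) = delta_S0 * exp(-TE / T2*)
delta_s = delta_s0 * np.exp(-tes / t2_star)
for te, s in zip(tes, delta_s):
    print(f"TE = {te * 1000:.0f} ms -> relative perfusion signal {s:.2f}")
```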
I am not familiar with multi-echo ASL generally. The only paper I'm aware of that discusses it is Mahroo et al. (2021). ExploreASL does support multi-echo ASL for blood-brain barrier perfusion, but I don't know much about it. If you could share more details about how the ASL (control, label, M0) signal curves would look and what, if any, differential effect you'd expect to see between them, that would be very helpful for understanding whether tedana could help. As an aside, I have recently started maintaining ASLPrep, so multi-echo ASL is particularly interesting to me.
Certainly multi-echo ASL for BBB perfusion is very relevant!
Hello guys, thanks a lot for your input. We expect the CBF to change during activation, with a delay obviously, but I don't know much more than that.
Yes, I am aware of that sequence. @smoia and I have already collaborated with the developers of the multi-echo ASL-BOLD sequence that you mentioned, and a joint paper is under review.
Thanks for the input; there are probably some aspects that are not clear to me.
Hello collaborators,
I have been looking around and trying to figure out how to do it, but didn't find a simple way.
My issue is a continuation of issue #777, "Add minimally-preprocessed Cohen dataset to datasets module".
Summary
In Cohen's sequence, we use ASL to extract CMRO2 from BOLD data.
ASL (arterial spin labeling) is a method to measure cerebral blood flow (CBF): blood is magnetically tagged in the neck in the label volumes and contrasted with the control volumes to obtain the perfusion dataset (cf. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5794066/).
The ASL time series has low tSNR, and currently we only use one echo.
I am sure we could increase the tSNR of the ASL data from such sequences if we could optimally combine the ASL data across all our echoes, but I don't know which part of the code would be easiest to change (decision trees, ICA component selection...).
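For reference, here is a rough numpy-only sketch of what optimal combination (in the Posse et al., 1999 sense, as used for BOLD) does conceptually: a voxelwise log-linear T2* fit followed by TE-weighted averaging. This is not tedana code, the shapes and values are hypothetical, and whether this weighting is appropriate for control/label ASL volumes is exactly the open question here.

```python
import numpy as np

tes = np.array([0.010, 0.030, 0.050])                  # s, hypothetical echo times
n_voxels, n_trs = 1000, 200
rng = np.random.default_rng(0)
data = rng.random((n_voxels, len(tes), n_trs)) + 1e-3  # (voxels, echoes, time), toy data

# Voxelwise T2* from a log-linear fit to the time-averaged signal per echo.
mean_sig = data.mean(axis=-1)                          # (voxels, echoes)
slopes = np.polyfit(tes, np.log(mean_sig).T, 1)[0]     # one slope per voxel
t2_star = np.clip(-1.0 / slopes, 0.005, 0.5)           # crude bounds for the toy fit

# Posse-style weights: w_i proportional to TE_i * exp(-TE_i / T2*), normalized per voxel.
weights = tes[np.newaxis, :] * np.exp(-tes[np.newaxis, :] / t2_star[:, np.newaxis])
weights /= weights.sum(axis=1, keepdims=True)

# Weighted sum across the echo axis gives the "optimally combined" time series.
optcom = (data * weights[:, :, np.newaxis]).sum(axis=1)  # (voxels, time)
print(optcom.shape)
```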
Additional Detail
There are several ways to compute the perfusion dataset: pairwise subtraction (label - control) or filtering the BOLD data (high-pass filter). I don't know whether the BOLD ICA would be usable on such data.
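To make the two options concrete, here is a minimal single-voxel sketch of both perfusion estimates. It assumes a time series that alternates control and label volumes; the ordering and sign convention depend on the actual sequence and are assumptions here.

```python
import numpy as np

ts = np.random.default_rng(1).random(200)  # toy alternating control/label series
control = ts[0::2]                         # assumed: even volumes are control
label = ts[1::2]                           # assumed: odd volumes are label

# 1) Simple pairwise subtraction -> one perfusion-weighted point per pair.
#    (Sign convention, label - control vs. control - label, depends on the sequence.)
perf_pairwise = control - label

# 2) Surround subtraction: subtract the average of the temporal neighbours,
#    which keeps the original temporal grid and acts as a crude high-pass
#    demodulation of the control/label alternation.
perf_surround = ts[1:-1] - 0.5 * (ts[:-2] + ts[2:])
perf_surround[0::2] *= -1                  # flip sign at the assumed label positions

print(perf_pairwise.shape, perf_surround.shape)
```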
Next Steps