Detrending & tedana (or detrended variance explained) #1054
IMHO, detrending prior to tedana might be helpful because ICs explaining a high percentage of variance often very clearly exhibit low-frequency trends. In our datasets we usually find two related components with opposite trends. In contrast, I don't think regressing out realignment parameters prior to tedana would be advantageous. If anything, I'd advocate for using the realignment parameters in the decision tree, as @eurunuela has aimed to implement. A bigger movement-related tree is easier to see in the brain forest!!
My top priorities are to get realignment parameters into the decision tree (#1021) and to get more stable component estimation (likely #1013). Unless someone else gets to it first, running tedana on detrended data should be fairly easy. One would need to run polynomial detrending on all echoes, but keep the mean. Without any code edits, tedana could be run with and without this detrending to see how it alters the eventual denoised time series.
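For anyone who wants to try it, a minimal sketch of per-echo polynomial detrending that retains the mean, assuming each echo is loaded as a (voxels × time) NumPy array; the function name and polynomial order here are illustrative, not part of tedana:

```python
import numpy as np

def detrend_keep_mean(data, order=3):
    """Remove a polynomial trend from each voxel's time series,
    then add the voxel means back so signal scale is preserved.

    data : (n_voxels, n_timepoints) array for a single echo.
    """
    n_vols = data.shape[1]
    # Design matrix: polynomial terms in time, including the constant.
    x = np.linspace(-1, 1, n_vols)
    design = np.vander(x, order + 1)  # columns: x**order, ..., x, 1
    # Least-squares fit of the trend for every voxel at once.
    betas, *_ = np.linalg.lstsq(design, data.T, rcond=None)
    trend = (design @ betas).T
    # Subtract the full fit, then restore the original mean.
    return data - trend + data.mean(axis=1, keepdims=True)

# Hypothetical usage: detrend each echo the same way before tedana.
# detrended = [detrend_keep_mean(echo) for echo in echoes]
```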
Note that detrending or other regression should happen in the context of censoring, at least if censored time points get squeezed out. If modularity is important, censoring with spike regressors might be preferable to time squeezing.
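For concreteness, a hedged sketch of the spike-regressor approach: each censored volume gets its own one-hot regressor alongside the polynomial trend terms, so those time points are absorbed by the model rather than deleted (the function and censor indices below are hypothetical):

```python
import numpy as np

def censor_design(n_vols, censored, order=3):
    """Build a design matrix combining polynomial trend terms with
    one-hot 'spike' regressors for censored volumes, so censored
    time points are modeled instead of removed from the series.
    """
    x = np.linspace(-1, 1, n_vols)
    poly = np.vander(x, order + 1)
    spikes = np.zeros((n_vols, len(censored)))
    for col, t in enumerate(censored):
        spikes[t, col] = 1.0  # one regressor per censored volume
    return np.hstack([poly, spikes])

# Hypothetical usage: volumes 17 and 42 flagged by motion censoring.
# design = censor_design(n_vols=200, censored=[17, 42])
```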
I agree that detrending should be done to the OC input.
To make the metric fits work, you'd need to detrend the separate echoes (but retain the mean). To calculate kappa & rho, the component time series are fit to each echo's data. If the OC data are detrended but the echoes aren't, then those calculations would definitely break down. That said, we might be losing or corrupting some echo-dependent information if we separately detrend each echo. One intermediate option would be to fit the polynomial detrending regressors to the OC data (or to all echoes jointly), and then fit the resulting single trend regressor to each echo. That is, for the OC data, a voxel's detrending regressors would be modeled as sketched below. I'm not sure whether this would actually matter, or what the effects of any detrending approach would be, but we might get a better sense when someone tries it and compares the results.
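The comment's original equation didn't survive; a plausible form, assuming a polynomial trend estimated once per voxel from the OC data and then rescaled per echo (all notation here — $K$, $\beta_k$, $\hat{d}$, $\alpha_e$ — is mine, not from the original comment):

```math
S_{\mathrm{OC}}(v, t) = \sum_{k=0}^{K} \beta_k(v)\, t^{k} + \epsilon(v, t),
\qquad
\hat{d}(v, t) = \sum_{k=1}^{K} \hat{\beta}_k(v)\, t^{k}
```

where the $k=0$ term is excluded from $\hat{d}$ so the mean is retained, and each echo $e$ is then fit with a single amplitude for that shared trend shape:

```math
S_e(v, t) \approx \alpha_e(v)\, \hat{d}(v, t) + c_e(v)
```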
Does the relationship between the echoes still matter after OC is computed? How BOLD-like a time series looks could be evaluated before OC (and therefore before detrending, etc.). At that point, wouldn't it just be a question of variance explained? And if so, the relationship would no longer be needed and the echoes could be detrended.
ICA is the limiting step. The ICA component time series need to be mappable back onto the original echoes. If the OC data that go into the ICA step are detrended, but the individual echoes are not, then this breaks down. That said, this goes back to my opening comment. If the goal is to calculate variance explained excluding linear trends, that should be relatively easy. If the goal is to run ICA on detrended data, then there are more complex issues to think through.
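Roughly what that coupling looks like in code: the component time series from ICA are regressed against each echo's data, so a trend present in the echoes but absent from the components has nowhere to go (a toy sketch; the names are mine and this is not tedana's actual implementation):

```python
import numpy as np

def fit_components_to_echo(echo_data, mixing):
    """Least-squares fit of ICA component time series to one echo's
    data, giving per-voxel, per-component betas (the kind of fit the
    kappa/rho metrics are built on).

    echo_data : (n_voxels, n_timepoints) array for a single echo
    mixing    : (n_timepoints, n_components) component time series
    """
    betas, *_ = np.linalg.lstsq(mixing, echo_data.T, rcond=None)
    return betas.T  # (n_voxels, n_components)

# If `mixing` came from ICA on detrended OC data while `echo_data`
# still contains its trend, that unmodeled trend biases these betas --
# the mismatch described above.
```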
My somewhat ignorant inclination would be to run ICA on the detrended data. Echo/BOLD relationships could be computed separately, even if that means keeping both the original and detrended echoes. But it seems that the ICA could be made far more sensitive to properties of interest if it were not "distracted" by components such as trends or censored spikes.
Summary
I was talking with @afni-rickr and he suggested detrending and possibly regressing out motion parameters before tedana. This would reduce the amount of variance tedana would need to model and potentially make it perform better.
Additional Detail
In thinking this over, I think regressing out motion would be problematic: we'd need to regress it from all echoes, and if the motion artifacts didn't all follow the regressors, it might not actually save degrees of freedom or improve other results. I think there's a better case for detrending. I'm not too worried about high-variance trends making tedana perform worse, but they do make it harder to interpret total variance explained and accepted/rejected variance explained, since the magnitude of the linear drift dominates total variance explained. This makes it hard to see whether rejected components are 30% of the meaningful total variance in one population vs 50% in another.
We might want to test whether tedana results are substantively different if we detrend first. If not, one way to address the above issue would be to add a new metric, detrended variance explained. I'd need to think through the math, but we'd detrend each component's time series and scale each component's "variance explained" by how much detrending reduced its variance. If we want to get extra fancy, we could show the variance-explained pie chart and component time series with or without detrending.
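A rough sketch of how that scaling could work, under my own assumptions about the math (this metric doesn't exist in tedana; the function name and the renormalization choice are hypothetical):

```python
import numpy as np

def detrended_varex(comp_ts, varex, order=3):
    """Scale each component's variance explained by the fraction of
    its time-series variance that survives polynomial detrending.

    comp_ts : (n_timepoints, n_components) component time series
    varex   : (n_components,) original variance-explained values
    """
    n_vols = comp_ts.shape[0]
    x = np.linspace(-1, 1, n_vols)
    design = np.vander(x, order + 1)
    betas, *_ = np.linalg.lstsq(design, comp_ts, rcond=None)
    resid = comp_ts - design @ betas
    # Fraction of each component's variance left after detrending.
    frac = resid.var(axis=0) / comp_ts.var(axis=0)
    scaled = varex * frac
    # Renormalize so detrended variance explained sums to 100%.
    return 100 * scaled / scaled.sum()
```

Renormalizing keeps the pie-chart interpretation; skipping that last step would instead report how much of each component's original variance explained is meaningful after trends are removed.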