Users have requested that we find a way to track multi-session quality control in the main schema, rather than in some kind of extension.
Short explanation for why this is needed:
1. A user has ten assets that each run their own raw/processing QC and pass; the assets then need to be compared somehow, e.g. by matching the ophys FOV.
2. They run a multi-session capsule that generates evaluations/metrics to check whether the FOV is the same. This is a blocking step because the annotation is manual.
3. Someone annotates the metrics and marks that one asset has a different FOV; that asset needs to be dropped from further multi-session analysis.
4. A second multi-session run then needs to pull only the assets with the same FOV. To match metrics to assets, each metric needs to track which asset it was generated from.
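The steps above hinge on metrics being traceable back to their source asset. A minimal sketch of what that could look like — the `evaluated_assets` field and `QCMetric` shape here are assumptions for illustration, not the actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class QCMetric:
    """Hypothetical multi-session metric that records its source asset(s)."""
    name: str
    value: object
    # Assumed field: the asset(s) this metric was computed from, so a
    # later multi-session run can filter assets by metric outcome.
    evaluated_assets: list = field(default_factory=list)

# Ten assets each produce an FOV-matching metric; one asset's FOV differs.
metrics = [
    QCMetric(name="fov_match", value=(i != 3), evaluated_assets=[f"asset_{i}"])
    for i in range(10)
]

# The second multi-session run pulls only assets whose FOV matched.
matching = [a for m in metrics if m.value for a in m.evaluated_assets]
print(len(matching))  # 9 — the mismatched-FOV asset is dropped
```

With this, the manual annotation step only has to flip `value` on the offending metric, and downstream runs can select assets without any out-of-band bookkeeping.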
There's no easy way to generate isolated data assets the way I had hoped, so dumping the quality_control.json isn't that simple; people will have to use the data-access-api to do this, with limited permissions.
We need to stratify QC status by local vs. global scope, and record whether evaluations have multiple upstream dependencies.
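One way to stratify status by scope — a sketch only; the `Scope` enum, its values, and the `Evaluation` fields are assumptions, not the actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class Scope(str, Enum):
    LOCAL = "local"    # QC on a single asset (raw/processing)
    GLOBAL = "global"  # QC spanning multiple assets (multi-session)

@dataclass
class Evaluation:
    name: str
    scope: Scope
    status: str  # e.g. "pass" / "fail" / "pending"

evals = [
    Evaluation("raw_qc", Scope.LOCAL, "pass"),
    Evaluation("processing_qc", Scope.LOCAL, "pass"),
    Evaluation("fov_match", Scope.GLOBAL, "pending"),
]

# Overall status can then be reported per scope, so a passing local QC
# doesn't mask a pending (blocking) multi-session evaluation.
by_scope = {s: [e.status for e in evals if e.scope == s] for s in Scope}
print(by_scope[Scope.GLOBAL])
```

Keeping the scopes separate means an asset can be "passed" for single-session use while still awaiting manual multi-session annotation.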
We need to mark evaluations (or metrics?) as having external dependencies, and possibly list their dependency structure using the "name" fields
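Marking external dependencies by "name" could be as simple as the following sketch — the `external_dependencies` field and the name format are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Evaluation:
    name: str
    # Assumed field: names of evaluations/metrics in *other* assets that
    # this evaluation depends on, referenced by their "name" fields.
    external_dependencies: list = field(default_factory=list)

    @property
    def has_external_dependencies(self) -> bool:
        return bool(self.external_dependencies)

# A multi-session FOV evaluation depending on per-asset metrics by name.
fov = Evaluation(
    name="fov_match",
    external_dependencies=[f"asset_{i}/fov_metrics" for i in range(10)],
)
print(fov.has_external_dependencies)  # True
```

Listing dependencies by name (rather than by ID) keeps the schema human-readable, at the cost of requiring names to be unique across the assets involved.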