In principle this could be done via ContentUrls at the level of dandischema, and thus centralized at the archive level.
But the idea first came up in the context of datalad, so it might be easier to "implement" via datalad, which has hooks into various other data portals. It was inspired by questions about data longevity we received today from ncsu.edu .
We could allow people to report the availability of data in the form of:

- a dictionary/list mapping sha256 checksum -> URL
- a URL to any "git remote"-compatible git/git-annex remote (e.g. on GIN, or on any git-annex special remote via the git datalad remote)
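As a sketch of the first option, the checksum-to-URL report could be a plain JSON mapping. The field layout and URLs below are assumptions for illustration only, not an existing dandischema or DANDI convention:

```python
import hashlib
import json

def sha256_of(data: bytes) -> str:
    """Hex sha256 digest of in-memory content (stand-in for hashing a file)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical availability report: sha256 checksum -> list of alternate URLs.
# Neither the layout nor the example URLs are an established convention.
content = b"example asset content"
report = {
    sha256_of(content): [
        "https://example.org/mirror1/asset.nwb",
        "https://example.org/mirror2/asset.nwb",
    ],
}

print(json.dumps(report, indent=2))
```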
Then we could easily register those within the datalad dandisets we have. In principle, some of that information could even be reflected in the metadata records for assets (if direct URLs are given) or for dandisets (full remotes, where individual URLs might be tricky or impossible to enumerate).
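For registering alternate URLs with git-annex, the key for the SHA256 backend embeds the file size and checksum as `SHA256-s<size>--<digest>`; such a key can then be passed to `git annex registerurl <key> <url>`. A minimal sketch of constructing that key (the helper itself is hypothetical, not part of git-annex or datalad):

```python
import hashlib

def annex_key_sha256(data: bytes) -> str:
    """Build a git-annex SHA256-backend key for in-memory content.

    git-annex SHA256 keys take the form SHA256-s<size>--<hex digest>;
    this helper is illustrative only.
    """
    digest = hashlib.sha256(data).hexdigest()
    return f"SHA256-s{len(data)}--{digest}"

content = b"example asset content"
key = annex_key_sha256(content)
# An alternate URL could then be recorded for this key, e.g.:
#   git annex registerurl <key> https://example.org/mirror1/asset.nwb
print(key)
```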
Not quite "IPFS", but not intended to be ;)
This might be trickier or too cumbersome for collections of zarr dandisets, but should still be doable.
WDYT @satra ?