Austin has been collecting data from some mice that have a dopamine sensor expressed in the brain. The plan is to do bulk fluorescence measurements, which effectively means taking each whole image, averaging its pixel values, and writing that single number per frame to disk. Some considerations after talking with Jianna and Caroline today:
Motion correction is likely unnecessary, at least at a first pass. And what motion correction would even look like for recordings without any visible cells is a whole different question...
2P data is quite noisy from shot noise in the detectors, and the signal from the sensor itself is apparently messy, or at least that's how it's been described to me. I wonder what the sensor data will reliably mean, since I haven't seen people collect this kind of information with point-scanning scopes like this before, so I'm not sure how comparable it is to, say, fiber photometry recordings.
We could take the tiffs, turn them into a temporary zarr store, take averages of each frame's pixel intensities in parallel with dask, and then output a fluorescence value against a timestamp, or maybe even just a frame number (where the timestamp could be inferred from the framerate encoded in the environment xml), that can be aligned against the trial information later.
One way of doing it in Dask with zarr would probably look something like this, but it might not work straight out of the box with what I'm thinking...
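A minimal sketch of that idea, assuming the frames already live in a zarr store shaped (frames, y, x); the store path and chunk size are placeholders, not values from the actual rig:

```python
import dask.array as da
import numpy as np

# Assumed layout: a zarr store of shape (n_frames, height, width).
# "scratch/session.zarr" is a placeholder path.
stack = da.from_zarr("scratch/session.zarr")

# Rechunk so each block holds whole frames; otherwise map_blocks would
# average over partial frames and give several values per frame.
stack = stack.rechunk({0: 100, 1: -1, 2: -1})

# Averaging over the pixel axes of each block gives one value per frame.
per_frame = stack.map_blocks(
    lambda block: block.mean(axis=(1, 2)),
    drop_axis=(1, 2),
    dtype=np.float64,
)
values = per_frame.compute()  # 1D array: bulk fluorescence per frame
```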
I think we could start by having the tiffs put into an h5 file, maybe using this without the lzf compression since I don't think MATLAB supports it. In fact, using no compression at all is probably fine (for now...). This H5 file could then be processed with pyimagej or MATLAB or whatever to just take the average of each frame and spit that number out into a file at the end.
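For that simple starting point, a hedged sketch of the tiff-to-h5 step with h5py and tifffile; the glob pattern, file names, and dataset name are all assumptions:

```python
import glob
import h5py
import tifffile

# Placeholder paths and dataset name. No compression argument is passed,
# since MATLAB can't read lzf-compressed datasets.
tiff_paths = sorted(glob.glob("scratch/session/*.tif"))
first = tifffile.imread(tiff_paths[0])

with h5py.File("session.h5", "w") as f:
    dset = f.create_dataset(
        "data",
        shape=(len(tiff_paths), *first.shape),
        dtype=first.dtype,
    )
    for i, path in enumerate(tiff_paths):
        dset[i] = tifffile.imread(path)
```

If compression does become worth it later, gzip would be the portable choice, since both h5py and MATLAB support it.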
I think the cooler way to do this would be with the following steps (and it moves towards a more complete pipeline for generating H5/zarr from the Bruker more generally), but it will maybe be harder/take more work 👀 (see the sketch after the list):
1. Take the tiffs generated after ripping on the scratch space and put them into a zarr store.
2. Use dask to map_blocks a numpy average call and spit that result out to a file of some kind (csv would be fine probably...).
3. Then map those blocks to generate an h5 file.
4. Once the h5 file is complete, delete the tiff directory and move the h5 file to the server.
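A rough end-to-end sketch of those four steps. All paths, the dataset name, and the frame rate are placeholders, and the per-frame average here uses dask's built-in mean reduction, which does the same thing as the explicit map_blocks version above:

```python
from pathlib import Path
import csv
import shutil

import dask
import dask.array as da
import h5py
import tifffile

TIFF_DIR = Path("scratch/session_tiffs")  # placeholder paths throughout
ZARR_STORE = "scratch/session.zarr"
OUT_H5 = "session.h5"
OUT_CSV = "bulk_fluorescence.csv"
FRAME_RATE = 30.0  # would really be parsed from the environment xml

# 1. tiffs generated after ripping -> temporary zarr store
paths = sorted(TIFF_DIR.glob("*.tif"))
first = tifffile.imread(paths[0])
lazy = [dask.delayed(tifffile.imread)(p) for p in paths]
stack = da.stack(
    [da.from_delayed(d, shape=first.shape, dtype=first.dtype) for d in lazy]
)
stack.to_zarr(ZARR_STORE)

# 2. per-frame mean in parallel, written to a small csv
stack = da.from_zarr(ZARR_STORE)
means = stack.mean(axis=(1, 2)).compute()
with open(OUT_CSV, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "time_s", "mean_fluorescence"])
    for i, m in enumerate(means):
        writer.writerow([i, i / FRAME_RATE, float(m)])

# 3. zarr -> h5 (uncompressed so MATLAB can read it)
with h5py.File(OUT_H5, "w") as f:
    dset = f.create_dataset("data", shape=stack.shape, dtype=stack.dtype)
    da.store(stack, dset)

# 4. once the h5 exists, delete the tiff directory
#    (the real version would also move OUT_H5 to the server first)
shutil.rmtree(TIFF_DIR)
```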
The initial work for this has been done, and we get a number for each frame from a ~56k-frame recording in about 40 seconds (on Cheetos), which is pretty cool. Writing all the images to H5 takes about 3 minutes.
The next step is to make sure the zarr-store-to-h5 conversion produces images you can actually look at in ImageJ's N5 viewer (at first glance the images look corrupted or something in the h5 file), and then to actually clean up the code and make it into a real module we can invoke for the recordings.
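For debugging the corrupted-looking frames, a quick sanity check with h5py might help; the file and dataset names are assumptions from the sketches above, and corruption like this in a viewer is often a dtype or axis-order mismatch rather than bad pixel data:

```python
import h5py

# "session.h5" and "data" are assumed names, not the real ones.
with h5py.File("session.h5", "r") as f:
    dset = f["data"]
    # Expect (n_frames, height, width) and the original tiff dtype;
    # a transposed shape or unexpected dtype would explain
    # garbage-looking frames in the N5 viewer.
    print(dset.shape, dset.dtype)
    frame = dset[0]
    print(frame.min(), frame.max(), frame.mean())
```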
Some things in bruker_control should also be done to make sure the dopamine sensor is appropriately named in the filename, because I don't think it currently is... I should make an issue for that...