Vu2024 - Fiber photometry conversion notes #2

Open · weiglszonja opened this issue Apr 23, 2024 · 0 comments
Labels: documentation (Improvements or additions to documentation)

weiglszonja commented Apr 23, 2024
Vu2024 Conversion notes

This dataset includes deep-brain volumetric, multi-channel optical data collected with multi-fiber arrays in head-fixed, behaving mice.

Multi-color fiber array imaging

Fiber bundle imaging for head-fixed experiments was performed with a custom microscope mounted on a 4’ x 8’ x 12’’ vibration isolation table (Newport, Figure B). The imaging data was acquired using HCImage Live (Hamamatsu) and saved as a .cxd (movie) file.

Single-wavelength excitation and emission were performed with continuous, internally triggered imaging at 30 Hz. For dual-wavelength excitation and emission, two LEDs were triggered by 5 V digital TTL pulses that alternated at either 11 Hz (33 ms exposure) or 18 Hz (20 ms exposure). To synchronize each LED with the appropriate camera (e.g., 470 nm LED excitation with the green emission camera), the LED trigger pulses were sent in parallel (and reduced to 3.3 V via a pulldown circuit) to the cameras to trigger exposure timing. The timing and duration of the digital pulses were controlled by custom MATLAB software through a programmable data acquisition card ("NIDAQ", National Instruments PCIe 6343). Voltage pulses were sent back from the cameras to the NIDAQ card after each frame's exposure to confirm proper camera triggering and to align imaging data with behavior data (see below).
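
For the frame-to-behavior alignment step mentioned above, here is a minimal sketch of how per-frame timestamps could be recovered from a recorded camera-strobe TTL trace. The function, variable names, and sampling rate are illustrative assumptions, not the authors' MATLAB/NIDAQ implementation.

```python
import numpy as np

def ttl_rising_edges(trace, sampling_rate_hz, threshold=1.65):
    """Return the times (s) of upward threshold crossings in a recorded TTL trace."""
    above = trace > threshold
    # Index of the first sample above threshold after each low-to-high transition.
    edge_samples = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return edge_samples / sampling_rate_hz

# Example usage (hypothetical channel name and sampling rate):
# frame_times = ttl_rising_edges(nidaq_camera_strobe, sampling_rate_hz=5000.0)
```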

Reading .cxd files

To open these files I'm using the BioformatsReader from aicsimageio (via the AICSImage class):

```python
from aicsimageio import AICSImage

# cxd_file_path points to the .cxd movie saved by HCImage Live
img = AICSImage(cxd_file_path)
img.shape
```

This outputs (40000, 1, 1, 375, 376), with dimension order time (T), channels (C), planes (Z), y (height), and x (width).

This file, however, doesn't seem to contain the start time of the recording.
The OME metadata has an ome_metadata.images[0].acquisition_date field, but it is not a required field and has not been set for these example files.
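
A minimal sketch, assuming the file has been opened with AICSImage as above, for checking whether that field was populated:

```python
# Inspect the OME metadata for an acquisition date; for these example files it is unset.
acquisition_date = img.ome_metadata.images[0].acquisition_date
if acquisition_date is None:
    print("No acquisition date in the OME metadata; the session start time must come from another source.")
```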

Example frame from the .cxd file (screenshot omitted).
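
As a hedged sketch using the same aicsimageio API, a single frame like the one above can be read and displayed (matplotlib is used here only for visualization):

```python
import matplotlib.pyplot as plt

# Read one 375 x 376 frame (first time point, single channel and plane) into memory.
frame = img.get_image_data("YX", T=0, C=0, Z=0)
plt.imshow(frame, cmap="gray")
plt.show()
```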

Preprocessing steps

All neural data were preprocessed using the scripts in https://github.com/HoweLab/MultifiberProcessing:

  1. Raw neural data are acquired as movies with filetype .cxd.
  2. These movies are converted to .tif and then motion-corrected.
  3. Fluorescence is extracted from ROIs corresponding to the fiber tops, and ΔF/F is calculated.
  4. The resulting preprocessed data is a .mat struct with the following fields (a loading sketch follows this list):
     * ROIs: the centers of the ROIs
     * datapath: the path to the associated .tif file
     * snapshot: a snapshot of a frame from the .tif movie
     * radius: the radius of the ROIs
     * ROImasks: an m x n x p matrix of p binary ROI masks
     * FtoFcWindow: the window used to calculate the baseline
     * F: the extracted raw fluorescence
     * Fc: the calculated ΔF/F
     * Fc_baseline: the calculated baseline
     * Fc_center: the calculated center, which becomes 0
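
A minimal loading sketch for this preprocessed struct, assuming it was saved in a pre-v7.3 MAT format and that the top-level variable is named data (both are assumptions, as is the placeholder path preprocessed_mat_path; for v7.3 files h5py would be needed instead):

```python
from scipy.io import loadmat

# Load the preprocessed .mat struct; these options give attribute-style field access.
mat = loadmat(preprocessed_mat_path, squeeze_me=True, struct_as_record=False)
data = mat["data"]  # assumed variable name; inspect mat.keys() to confirm

F = data.F                 # extracted raw fluorescence
Fc = data.Fc               # calculated ΔF/F
roi_masks = data.ROImasks  # m x n x p matrix of p binary ROI masks
print(F.shape, Fc.shape, roi_masks.shape)
```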
