Purpose of thread
I recently learned that there are some misconceptions surrounding the intensity flat-field corrections currently done by the acquisition software. Since this is useful for computational people to know, and since we ultimately want to turn it off and do it ourselves downstream of acquisition, I thought I would write up what I know (or think I know) about LifeCanvas' implementation. This isn't meant to be an issue to actively work on yet, but there is no "discussions" tab to put it in.
Darkfield subtraction
What you would want
Normally, what you would want is an image acquired at the time of acquisition, at the same exposure time, either with something physically blocking the light path to the camera or simply with no excitation light present (some control-hardware LEDs in the enclosure may contribute and should perhaps just be covered up better). This would be quite cumbersome for the acquisition pipeline, but it accounts for things like hot pixels and thermally driven variation of the background level across the sensor.
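For reference, a minimal sketch of how such a darkfield reference could be built offline by averaging repeated dark frames. The file names and the use of tifffile are my own assumptions here, not anything LifeCanvas provides:

```python
# Sketch only: average repeated dark frames (same exposure, light path blocked
# or excitation off) into a single darkfield reference image.
import numpy as np
import tifffile

def build_darkfield(dark_frame_paths):
    """Pixel-wise mean of a set of dark frames."""
    stack = np.stack([tifffile.imread(p).astype(np.float64) for p in dark_frame_paths])
    return stack.mean(axis=0)

# Hypothetical usage:
# darkfield = build_darkfield(["dark_000.tif", "dark_001.tif", "dark_002.tif"])
# tifffile.imwrite("darkfield_reference.tif", darkfield.astype(np.float32))
```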
What we have
Each microscope has a locally stored image file that is a uniform, single-valued array. This gets subtracted from each camera frame, regardless of whether you turn off the flat-field correction. The easiest way to disable this is to replace that file with one that is full of zeros.
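If we want to neutralize that built-in subtraction before handling it properly downstream, here is a rough sketch of the zero-filling approach. The file name and TIFF format are placeholders, since the actual path and format depend on the local installation:

```python
# Sketch only: overwrite the stored darkfield image with zeros of the same
# shape and dtype, so the built-in subtraction becomes a no-op.
import numpy as np
import tifffile

existing = tifffile.imread("darkfield.tif")         # placeholder path/format
tifffile.imwrite("darkfield_backup.tif", existing)  # keep the original around
tifffile.imwrite("darkfield.tif", np.zeros_like(existing))
```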
Flat-field corrections
What you would (think that you) want
A frame acquired with the excitation light on in a uniform medium (such as a diluted dye or the Cargille oil) for each color channel, for each excitation side.
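If someone does acquire such frames, a plausible way to turn one into a gain map would be to darkfield-subtract, smooth heavily to suppress dye/debris speckle, and normalize. The function below is my own sketch, not part of the acquisition software, and the smoothing scale is a guess:

```python
# Sketch only: normalized flat-field (gain) map from one uniform-medium frame.
import numpy as np
from scipy.ndimage import gaussian_filter

def flatfield_from_uniform_frame(frame, darkfield, sigma=50):
    """Return a gain map with mean ~1.0 for one channel / excitation side."""
    corrected = frame.astype(np.float64) - darkfield
    smoothed = gaussian_filter(corrected, sigma=sigma)  # suppress speckle
    return smoothed / smoothed.mean()
```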
What you probably should be satisfied with
Would be a cosine function with 2-4 parameters describing the fall-off of intensity in the vertical direction on the sensor, for each color channel and each excitation side (a sketch of one possible parameterization follows the list). The parameters would account for:
center
width
tilt
left vs right excitation amplitude ratio
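To make the shape of such a profile concrete, here is a sketch of one possible parameterization. The actual functional form is not documented here, and the way the tilt and left/right ratio enter is my guess:

```python
# Sketch only: one possible 2-4 parameter vertical cosine fall-off profile.
import numpy as np

def cosine_flatfield(n_rows, n_cols, center, width, tilt=0.0, lr_ratio=1.0):
    """Gain profile for one channel / excitation side.

    center   -- row index of peak illumination
    width    -- rows from the peak to where the cosine reaches zero
    tilt     -- drift of the peak row across columns (rows per column); a guess
    lr_ratio -- right- vs left-side excitation amplitude; a guess at how it enters
    """
    rows = np.arange(n_rows)[:, None].astype(np.float64)
    cols = np.arange(n_cols)[None, :].astype(np.float64)
    local_center = center + tilt * (cols - n_cols / 2.0)     # tilted peak row
    phase = np.clip((rows - local_center) / width, -1.0, 1.0) * (np.pi / 2.0)
    profile = np.cos(phase)                                   # vertical fall-off
    amplitude = np.linspace(1.0, lr_ratio, n_cols)[None, :]   # left-to-right blend
    return profile * amplitude
```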
What we currently have
Is the above (without the tilt parameter), generated from a UI dialogue window with slider bars. These plus the darkfield subtraction are applied to the data as it is written during acquisition. I have been copying these arrays into the derivatives folder when transferring, as they get stored in a location that might not accompany the raw data.
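For downstream use (once the acquisition-time correction is off), my working assumption is that the stored arrays would be applied as corrected = (raw - darkfield) / flatfield; whether LifeCanvas divides or multiplies by its array is not something I have confirmed:

```python
# Sketch only: apply darkfield subtraction and flat-field division to one frame,
# assuming the flat-field array is a gain map with values near 1.0.
import numpy as np

def correct_frame(raw, darkfield, flatfield, eps=1e-6):
    corrected = raw.astype(np.float64) - darkfield
    corrected /= np.maximum(flatfield, eps)   # guard against near-zero gain
    return np.clip(corrected, 0, None)        # clip negative noise to zero
```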