
Info dump for flat field correction #31

Closed
miketaormina opened this issue Jun 13, 2023 · 2 comments

@miketaormina
Collaborator

Purpose of thread

I recently learned that there are some misconceptions surrounding the intensity flat-field corrections that are currently done by the acquisition software. Since it is useful for computational people to know, and since we ultimately want to turn this off and do it ourselves downstream of acquisition, I thought I would put here what I know (or think I know) about LifeCanvas' implementation. This isn't meant to be an issue to actively work on yet, but there is no "discussions" tab to put it in.

Darkfield subtraction

What you would want
Normally, what you would want is an image acquired at the time of acquisition, at the same exposure time, either with something physically blocking the light path to the camera or simply without excitation light present (there are some control-hardware LEDs in the enclosure that may contribute and should perhaps just be better covered up). This would be quite cumbersome for the acquisition pipeline, but it accounts for things like hot pixels and thermally driven variation of the background level across the sensor.
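To make the idea concrete, here is a minimal sketch of what building such a darkfield could look like downstream, assuming you had a stack of no-light frames at the acquisition exposure time. The shapes, values, and function name are illustrative only, not anything the acquisition software does today.

```python
# Hypothetical sketch: combine repeated no-light frames into a per-pixel
# darkfield. A median suppresses transient outliers while preserving hot
# pixels and thermal background structure that a single constant cannot.
import numpy as np

def master_darkfield(dark_frames: np.ndarray) -> np.ndarray:
    """Median-combine a (n_frames, rows, cols) stack of no-light frames."""
    return np.median(dark_frames.astype(np.float32), axis=0)

# Synthetic data standing in for real dark frames.
rng = np.random.default_rng(0)
fake_darks = rng.normal(100.0, 2.0, size=(20, 256, 256)).astype(np.float32)
fake_darks[:, 50, 60] += 500.0          # simulate a hot pixel
dark = master_darkfield(fake_darks)
print(dark.shape, float(dark[50, 60]))
```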

What we have
Each microscope has a locally stored image file that is a uniform, single-valued array. This gets subtracted from each camera frame, regardless of whether you turn off the flat-field correction. The easiest way to disable this is to replace that file with one that is full of zeros.
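For clarity, the effective behavior as I understand it is sketched below. The sensor shape, offset value, and file handling are assumptions (the actual stored file's format and location vary per microscope); the point is just that a uniform offset is subtracted from every frame, and "disabling" it means swapping in an all-zeros array.

```python
# Sketch of the effective darkfield behavior described above (illustrative
# names and shapes; not the real file format or code path).
import numpy as np

FRAME_SHAPE = (2048, 2048)                       # assumed sensor size

# What the software effectively uses: a uniform, single-valued array.
stored_offset = np.full(FRAME_SHAPE, 100.0, dtype=np.float32)

def apply_darkfield(frame: np.ndarray, offset: np.ndarray) -> np.ndarray:
    # Subtraction happens on every frame, whether or not flat-field
    # correction is enabled; clip so counts stay non-negative.
    return np.clip(frame.astype(np.float32) - offset, 0, None)

# "Disabling" the subtraction amounts to replacing the file with zeros.
zeros_offset = np.zeros(FRAME_SHAPE, dtype=np.float32)
```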

Flat-field corrections

What you would (think that you) want
A frame acquired with the excitation light on in a uniform medium (such as a diluted dye or the Cargille oil), for each color channel and each excitation side.
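If we had such frames, the downstream use would be the textbook flat-field correction: subtract the dark, normalize the flat to unit mean, and divide. The sketch below is that generic formula, not LifeCanvas' code.

```python
# Textbook flat-field correction from a measured flat frame (one per
# channel and excitation side) and a darkfield. Names are illustrative.
import numpy as np

def flatfield_correct(raw: np.ndarray, flat: np.ndarray, dark: np.ndarray) -> np.ndarray:
    raw = raw.astype(np.float32)
    flat = flat.astype(np.float32) - dark          # remove offset from the flat
    gain = flat / flat.mean()                      # unit-mean sensitivity map
    return (raw - dark) / np.clip(gain, 1e-6, None)
```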

What you probably should be satisfied with
A 2-4 parameter cosine function accounting for the decay of intensity in the vertical direction on the sensor, for each color channel and each excitation side (a rough sketch follows the list below). The parameters would account for:

  1. center
  2. width
  3. tilt
  4. left vs right excitation amplitude ratio
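
The following is my guess at one plausible functional form for that profile; the exact equation the software uses is not something I have confirmed, so treat this as a sketch of the parameterization rather than the implementation.

```python
# One plausible reading of the 2-4 parameter cosine model (a guess at the
# functional form; parameter meanings follow the list above).
import numpy as np

def cosine_profile(shape, center, width, tilt=0.0, lr_ratio=1.0):
    """Return a (rows, cols) illumination profile in [0, 1].

    center   : row (pixels) of peak illumination
    width    : half-period of the cosine falloff, in pixels
    tilt     : drift of the peak row per column, in pixels/pixel
    lr_ratio : left/right excitation amplitude ratio, blended across columns
    """
    rows, cols = shape
    r = np.arange(rows)[:, None].astype(np.float32)
    c = np.arange(cols)[None, :].astype(np.float32)

    # Peak row drifts linearly with column when the sheet is tilted.
    peak = center + tilt * (c - cols / 2.0)
    phase = np.clip((r - peak) / width, -1.0, 1.0) * (np.pi / 2.0)
    vertical = np.cos(phase) ** 2                  # falls to zero one width away

    # Blend left- and right-side amplitudes linearly across the frame.
    left_amp, right_amp = lr_ratio, 1.0
    horizontal = left_amp + (right_amp - left_amp) * (c / (cols - 1))
    return vertical * horizontal / horizontal.max()

profile = cosine_profile((2048, 2048), center=1024, width=1500,
                         tilt=0.02, lr_ratio=1.2)
```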

What we currently have
The above (without the tilt parameter), generated from a UI dialog with slider bars. These profiles, plus the darkfield subtraction, are applied to the data as it is written during acquisition. I have been copying these arrays into the derivatives folder when transferring, since they get stored in a location that might not otherwise accompany the raw data.
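One reason to keep those profile arrays next to the data: given the stored profile and the darkfield value, the write-time correction can in principle be undone downstream. The sketch below assumes the correction was applied as (raw - dark) / profile; whether it is a division or a multiplication is an assumption on my part, so check before using anything like this.

```python
# Hypothetical inversion of the write-time correction, assuming the form
# corrected = (raw - dark_value) / profile. Verify the actual operation first.
import numpy as np

def undo_write_time_correction(corrected: np.ndarray,
                               profile: np.ndarray,
                               dark_value: float) -> np.ndarray:
    return corrected.astype(np.float32) * profile + dark_value
```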

@camilolaiton
Collaborator

Hi @miketaormina, I'll be closing this issue here and opening it where we're applying shadow correction.

@camilolaiton
Collaborator

Reopening this issue in the corresponding repo: New issue
