
First call to calibrate functions doesn't work. #19

Open
iandobbie opened this issue Jun 9, 2021 · 9 comments

Comments

@iandobbie
Member

The system flat (or my new calibrate_remote_z) returns every actuator at 0.5 on the first call. Subsequent calls appear to succeed.

@iandobbie
Member Author

I think this is due to broken logic in the flatten phase routine. aoDev.py line 784:

    while (iterations > ii) or (best_error > error_thresh):

so it continues while either (iterations (default 10) > ii, the loop counter) or (best_error > error_thresh). If you set a low error threshold this will never finish. I think this needs to be

    while (iterations > ii) and (best_error > error_thresh):

I think this code only worked at all because the default error_thresh was np.inf, so the second check always evaluated to False.
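A minimal, self-contained sketch of the termination behaviour (the function and variable names here are illustrative stand-ins for the report, not the actual aoDev.py code), assuming the error stalls at a value above a user-set threshold:

```python
import numpy as np

def run_loop(condition, iterations=10, error_thresh=np.inf, max_safety=100):
    """Count how many passes a flatten-style loop makes.

    `condition` selects between the current `or` logic and the proposed
    `and` logic. `max_safety` caps runaway loops so this demo terminates.
    """
    ii = 0
    best_error = 5.0          # pretend the measured error stalls at 5.0
    passes = 0
    while condition(iterations > ii, best_error > error_thresh):
        ii += 1
        passes += 1
        if passes >= max_safety:
            break             # would otherwise loop forever
    return passes

or_logic = lambda a, b: a or b
and_logic = lambda a, b: a and b

# Default threshold (np.inf): the second test is always False,
# so `or` degenerates to the iteration count alone.
print(run_loop(or_logic))                      # 10
# A low threshold the error never reaches: `or` never terminates.
print(run_loop(or_logic, error_thresh=1.0))    # 100 (hits the safety cap)
# `and` stops as soon as either condition fails.
print(run_loop(and_logic, error_thresh=1.0))   # 10
```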

There seems to be another issue in the improvement code as well, around line 816:

  if corrected_error < best_error and corrected_error < current_error:

corrected_error is the error of the current attempt, best_error is the error of the previous best correction, and current_error is the previous best pattern minus the calculated correction factor. My understanding of this code is that if your applied correction is BETTER than your calculated correction factor then it will never be accepted, hence the optimisation very often fails, and even if it is a success it will not finish with the lowest error, just the last time you did better than you thought you ought to.

@iandobbie
Member Author

I think I am reading current_error wrong; it appears to be the previous best correction minus the currently used Zernike modes, so effectively the theoretical best you could do with the currently selected modes. However, this ignores noise, crosstalk, etc. Why can't we occasionally do better? And if we do, why should we throw away that correction?

@iandobbie
Member Author

Another howler:

    elif corrected_error < best_error:
        _logger.info("Wavefront error worse than before")

Really? If the corrected error is less than the best error, it is worse than before?

@NickHallPhysics
Collaborator

NickHallPhysics commented Jun 22, 2021

> I think this is due to broken logic in the flatten phase routine. aoDev.py line 784:
>
>     while (iterations > ii) or (best_error > error_thresh):
>
> so it continues while either (iterations (default 10) > ii, the loop counter) or (best_error > error_thresh). If you set a low error threshold this will never finish. I think this needs to be
>
>     while (iterations > ii) and (best_error > error_thresh):
>
> I think this code only worked at all because the default error_thresh was np.inf, so the second check always evaluated to False.

while (iterations > ii) or (best_error > error_thresh): is intentional, not a bug. My logic was that if a user has specified an error threshold then that should always be the condition of termination. If that threshold is reached before the number of iterations a user has specified for the flatten routine to run for has elapsed, then the routine should continue to run in order to get the best possible "wavefront flatness profile". Hence the or rather than an and conditional.

It's reasonable to argue that if a user has specified a set number of iterations then the routine should only ever run for that many iterations, but at that point I question why there should be an error threshold at all if the prime deciding factor is the number of elapsed iterations. Happy to discuss this further.

> There seems to be another issue in the improvement code as well, around line 816:
>
>     if corrected_error < best_error and corrected_error < current_error:
>
> corrected_error is the error of the current attempt, best_error is the error of the previous best correction, and current_error is the previous best pattern minus the calculated correction factor. My understanding of this code is that if your applied correction is BETTER than your calculated correction factor then it will never be accepted, hence the optimisation very often fails, and even if it is a success it will not finish with the lowest error, just the last time you did better than you thought you ought to.

So, there are 3 variables here:

  1. best error = the best RMS wavefront error from the whole flatten routine up to this point.
  2. current error = this is the RMS wavefront error measured before the flattening of this current iteration of the flatten routine
  3. corrected error = this is the RMS wavefront error measured after the flattening of this current iteration of the flatten routine

The conditional if corrected_error < best_error and corrected_error < current_error: is therefore checking that the RMS wavefront error after correction is better than it was before the correction, and better than any other correction so far applied. Perhaps the variable names should be improved to make this more easily understood, but I can't find any error in the code which would imply that this conditional is flawed.

> Another howler:
>
>     elif corrected_error < best_error:
>         _logger.info("Wavefront error worse than before")
>
> Really? If the corrected error is less than the best error, it is worse than before?

This is incorrect. It should be elif best_error < corrected_error
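Putting the two points together, here is a hedged sketch of the accept/reject decision for one flatten iteration, with the logging comparison flipped as suggested above. This is an illustrative stand-in, not the actual aoDev.py body; the function name `update_best` and the returned tuple are inventions for the demo:

```python
def update_best(best_error, current_error, corrected_error):
    """Return (new_best_error, accepted, message) for one flatten iteration.

    Illustrative stand-in for the conditional discussed above, with the
    logging branch fixed to `best_error < corrected_error`.
    """
    if corrected_error < best_error and corrected_error < current_error:
        # Correction beat both the pre-correction error and the best
        # error seen so far: accept it as the new best.
        return corrected_error, True, "New best wavefront error"
    elif best_error < corrected_error:
        # Fixed comparison: the correction really is worse than the
        # best seen so far.
        return best_error, False, "Wavefront error worse than before"
    else:
        # Better than the best, but not better than this iteration's
        # pre-correction error (or exactly equal): keep the old best.
        return best_error, False, "No improvement this iteration"

print(update_best(2.0, 3.0, 1.5))  # (1.5, True, 'New best wavefront error')
print(update_best(2.0, 3.0, 2.5))  # (2.0, False, 'Wavefront error worse than before')
```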

@NickHallPhysics
Collaborator

> I think I am reading current_error wrong; it appears to be the previous best correction minus the currently used Zernike modes, so effectively the theoretical best you could do with the currently selected modes. However, this ignores noise, crosstalk, etc. Why can't we occasionally do better? And if we do, why should we throw away that correction?

You are reading this wrong. current_error is the wavefront error measured before the correction is performed for the current iteration of the flattening routine. Line 783:

    correction_wavefront_mptt = correction_wavefront - aotools.phaseFromZernikes(z_amps[0:3], x)

Measures the current wavefront (before correction) and subtracts the piston, tip and tilt measured in the current wavefront, since those Zernike modes are not reliable, and piston in particular is arbitrary and varies from wavefront to wavefront thereby biasing wavefront error calculations. Line 784:

    current_error = self._wavefront_error_mode(correction_wavefront_mptt)

Then calculates the wavefront error according to the specified _wavefront_error_mode, which is by default the RMS wavefront error, although a Strehl ratio calculation is implemented as well.

The flattening routine is then performed for this iteration and this current_error is compared to the error after flattening correction i.e. corrected_error to verify that the flattening has yielded an improvement in wavefront error.
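The piston/tip/tilt-removal step described above can be sketched with plain NumPy. The toy wavefront below and the least-squares plane fit are illustrative stand-ins for the real wavefront sensor data and for aotools.phaseFromZernikes on the first three modes; they are not aoDev.py code. The point is just how much an arbitrary piston offset biases an RMS error calculation:

```python
import numpy as np

def rms_error(wavefront):
    """Default error metric described above: RMS of the phase map."""
    return float(np.sqrt(np.mean(wavefront ** 2)))

# Toy wavefront: a constant piston offset plus a tilt in x.
x = np.linspace(-1, 1, 64)
xx, yy = np.meshgrid(x, x)
wavefront = 3.0 + 0.5 * xx          # piston = 3.0, tilt amplitude = 0.5

# Remove piston and tip/tilt (the "mptt" step) with a least-squares
# plane fit, standing in for subtracting the first three Zernike modes.
A = np.column_stack([np.ones(xx.size), xx.ravel(), yy.ravel()])
coeffs, *_ = np.linalg.lstsq(A, wavefront.ravel(), rcond=None)
wavefront_mptt = wavefront - (A @ coeffs).reshape(wavefront.shape)

print(round(rms_error(wavefront), 3))        # 3.014 (piston dominates)
print(round(rms_error(wavefront_mptt), 3))   # 0.0 after removal
```

Without the removal step, the arbitrary piston term swamps the metric, so comparing raw RMS values between iterations would be meaningless.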

@NickHallPhysics
Collaborator

NickHallPhysics commented Jun 22, 2021

> The system flat (or my new calibrate_remote_z) returns every actuator at 0.5 on the first call. Subsequent calls appear to succeed.

This is entirely possible if the routine is running correctly. If at no point during the flattening routine do you obtain a better wavefront error than is obtained with 0.5 applied to all actuators, then those are the values which will be saved as the system flat values, because those are indeed the actuator values which yielded the best wavefront error.

I would suspect this occurs due to a poor control matrix that has significant co-variance in the Zernike modes, meaning applying one Zernike mode introduces significant amplitudes of other Zernike modes.

The whole control matrix might not be poor. It may be that you are attempting to correct for some modes which have a low reconstruction accuracy/significant co-variance with other modes (typically Noll index > 30) and this can bias the correction for the modes which you can reliably recreate on the DM.

In either case, a flatten_phase routine which is running as intended, without errors, can yield this result. I would check a) that your control matrix is of an adequate quality and b) which Zernike modes you are attempting to correct for in the flatten_phase routine. Those are equally likely sources of error as any bug in the code.

@iandobbie
Member Author

> This is entirely possible if the routine is running correctly. If at no point during the flattening routine do you obtain a better wavefront error than is obtained with 0.5 applied to all actuators, then those are the values which will be saved as the system flat values, because those are indeed the actuator values which yielded the best wavefront error.

However, the logging functions clearly show significantly smaller RMS errors (around 5x smaller), and yet these are not stored as the best flat.

> I would suspect this occurs due to a poor control matrix that has significant co-variance in the Zernike modes, meaning applying one Zernike mode introduces significant amplitudes of other Zernike modes.

This is possible but I did check and the characterisation seemed reasonable.

> In either case, a flatten_phase routine which is running as intended, without errors, can yield this result. I would check a) that your control matrix is of an adequate quality and b) which Zernike modes you are attempting to correct for in the flatten_phase routine. Those are equally likely sources of error as any bug in the code.

I have consistently seen much lower RMS errors than the original, or than the "best_error", being ignored, but I see your point about removing Zernike modes 0-2. I will have a think about the best solution.

@NickHallPhysics
Collaborator

> This is possible but I did check and the characterisation seemed reasonable.

> I have consistently seen much lower RMS errors than the original, or than the "best_error", being ignored, but I see your point about removing Zernike modes 0-2. I will have a think about the best solution.

That is definitely erroneous behaviour.

> However, the logging functions clearly show significantly smaller RMS errors (around 5x smaller), and yet these are not stored as the best flat.

To clarify, when you say they aren't being stored, are you referring to the remote side or the main (i.e. Cockpit) side? That is, is best_error being updated when better wavefront errors are obtained from flattening but then not saved to the config, or are the better errors being ignored within the flatten_phase routine itself?

@iandobbie
Member Author

> To clarify, when you say they aren't being stored, are you referring to the remote side or the main (i.e. Cockpit) side? That is, is best_error being updated when better wavefront errors are obtained from flattening but then not saved to the config, or are the better errors being ignored within the flatten_phase routine itself?

I think I understand this better after playing on cryosim today. I see your point about removing tip/tilt/piston. However, I am very surprised that a large number of repeats typically gave me 1 as the error before correction and 2 as the error after correction. I know 1 is pretty good, but it still seems strange that any correction makes things worse, and that despite many repeats the system flat stayed at 0.5 everywhere.

We are back on deepsim today and I will have another look.
