Remove checks preventing casa gain tables from being written. #448

Open
wants to merge 6 commits into master
Conversation

JSKenyon
Collaborator

@o-smirnov @bennahugo This is a draft with the checks preventing CASA gain tables from being written removed. See #440.

@JSKenyon
Collaborator Author

JSKenyon commented Apr 1, 2021

@o-smirnov @bennahugo I believe that I have fixed this. Would either of you be willing to give this branch a spin to confirm? I have managed to export to CASA table format and plot with plotms locally, but I would like to be sure.

@caseyjlaw

I pulled this branch into my container. It builds and runs, but I didn't see a CASA table written out. I suspect I didn't set my parset file correctly, although the logging said something that implied a table was going to be written.

@JSKenyon
Collaborator Author

JSKenyon commented Apr 1, 2021

The CASA gain option is --out-casa-gaintables. From the help:

casa-gaintables = 1                 # Export gaintables to CASA caltable format. Tables are exported to same
                                      directory as set for cubical databases. #type:bool
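
If the export did run, one quick way to confirm (independently of plotms) is to open the resulting caltable with python-casacore. The path below is hypothetical; the tables are written to the same directory as the CubiCal databases, so substitute whatever name actually appears there:

```python
from casacore.tables import table

# Hypothetical caltable path: substitute the table written next to the .parmdb files.
ct = table("cubical.cc-out/cc-B-caltable", ack=False)
print(ct.nrows(), ct.colnames())   # a CASA caltable has CPARAM, FLAG, ANTENNA1, TIME, ...
print(ct.getcol("CPARAM").shape)   # complex gain/bandpass solutions
ct.close()
```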

@caseyjlaw

I thought that was the default value. Setting it explicitly as you describe does not seem to change the output.

My parset:

[data]
ms = test.ms
time-chunk = 1
freq-chunk = 1

[model]
list = MODEL_DATA

[weight]
_Help = Weighting options
column = None                           # Weight column to use.

[sol]
jones = B
min-bl=100

[out]
column = CORRECTED_DATA
casa-gaintables = 1

[b]
time-int = 1
freq-int = 1

[gainterm]
ref-ant=34

The output products:

ls /fastpool/claw/cubical.cc-out
cc.bbc-diag-ant.png                   cc-B-field_0-ddid_None.parmdb       cc.chi2.tf.png        cc.parset
cc.bbc-diag.png                       cc-B-field_0-ddid_None.parmdb.skel  cc.log                cc.stats.pickle
cc-BBC-field_0-ddid_None.parmdb       cc-B-field_0-ddid_None.parmdb.tmp   cc.noise.antchan.png
cc-BBC-field_0-ddid_None.parmdb.skel  cc.chi2.antchan.png                 cc.noise.tf.png

@JSKenyon
Collaborator Author

JSKenyon commented Apr 1, 2021 via email

@caseyjlaw

You can find it at https://www.dropbox.com/s/iyyrz6guyloy0uc/cc.log?dl=0.

@JSKenyon
Collaborator Author

JSKenyon commented Apr 6, 2021

@caseyjlaw Apologies for the delay. This is confusing to me: the log suggests the export code is reached, yet there are no outputs. This definitely works for me locally. One thing I did notice is that something looks wrong with the data (a chi-squared of 1e13 is suspicious). Do you have any ideas @bennahugo?

@IanHeywood
Collaborator

Does the MODEL_DATA column contain something sensible?

Another usual suspect is the time/frequency intervals for the solution not being appropriate, but this looks like a bandpass solve on an MS with a single timeslot, so the 1,1 setting probably isn't the culprit.
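
For what it's worth, both of those can be checked quickly with python-casacore; this is just an illustrative sketch, with test.ms taken from the parset above:

```python
import numpy as np
from casacore.tables import table

ms = table("test.ms", ack=False)   # MS name taken from the parset above
model = ms.getcol("MODEL_DATA")
print("MODEL_DATA amplitude range:", np.abs(model).min(), "to", np.abs(model).max())
print("distinct timeslots:", len(np.unique(ms.getcol("TIME"))))
ms.close()
```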

@caseyjlaw

The MODEL_DATA column is filled, if that's what you mean. Quick check with casacore:

> from casacore import tables
> t = tables.table('test.ms')   # MS name from the parset above
> tc = tables.tablecolumn(t, 'MODEL_DATA')
> tc.getcell(0).max()
(20458.932+0j)
> tc.getcell(0).min()
(-4036.609-17.83753j)

So just to be clear, I should see output tables in CASA table format? But if the solver fails for some reason, it may not write them out?
If the latter is true, then perhaps we could experiment with a different data set or calibration setup.

@bennahugo
Collaborator

But MODEL_DATA is just the input model; it does not mean the calibration was successful. The behaviour (was) that the CASA tables are written out when the parmdb tables are dumped: an adaptor is constructed around the generated parmdb and the caltables are written as part of the same call, regardless of chi^2 / quality etc. As long as the calibration runs through, the tables should be dumped. Things might have broken since then - there are multiple places where the adaptor calls must be made for the export to happen in the end. I will also need to do some testing and check that we haven't introduced bugs in the py3 porting, like integer division errors. Even if the tables are dumped they should be taken with a grain of salt for now. I strongly suggest that you use plots derived from the parmdb format for production usage until I can revisit this - see my script for how these dictionaries are loaded.
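
For anyone who wants to work with the parmdb directly in the meantime, a rough sketch along these lines should do it; note that the param_db module, the load() call and the "B:gain" key are my assumptions about the CubiCal interface rather than something confirmed on this branch - check the actual loading script for the exact calls:

```python
# Assumed interface: cubical.param_db.load() and the "B:gain" key are illustrative only.
from cubical import param_db

db = param_db.load("cubical.cc-out/cc-B-field_0-ddid_None.parmdb")  # path from the output listing above
print(db.names())        # parameter names stored in the database
bandpass = db["B:gain"]  # gain solutions for the [sol] jones = B term
```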

@o-smirnov
Collaborator

I strongly suggest that you use plots derived from the parmdb format for production usage until I can revisit this - see my script for how these dictionaries are loaded.

Or even easier, run plot-gain-solutions --help. This will plot the parmdb tables natively.

@JSKenyon
Collaborator Author

@bennahugo if you happen to have a moment whilst implementing your other fixes, would you mind verifying that this works? Don't waste time on it, but I figure having this functionality will be useful.

@bennahugo
Collaborator

Yeah, I'm busy looking at simulations to test this. I believe @IanHeywood raised an issue about the phases, but I cannot find it now...

@bennahugo
Collaborator

lol I get immediate issues on this one... this is going to take a bit of debugging and rework

Exiting with exception: RuntimeError(ValueHolderRep::asInt - invalid data type Array<Int>)
 Traceback (most recent call last):
  File "/home/hugo/workspace/CubiCal/cubical/main.py", line 578, in main
    stats_dict = workers.run_process_loop(ms, tile_list, load_model, single_chunk, solver_type, solver_opts, debug_opts, out_opts)
  File "/home/hugo/workspace/CubiCal/cubical/workers.py", line 228, in run_process_loop
    return _run_single_process_loop(ms, load_model, single_chunk, solver_type, solver_opts, debug_opts, out_opts)
  File "/home/hugo/workspace/CubiCal/cubical/workers.py", line 382, in _run_single_process_loop
    solver.gm_factory.close()
  File "/home/hugo/workspace/CubiCal/cubical/machines/abstract_machine.py", line 960, in close
    db.close()
  File "/home/hugo/workspace/CubiCal/cubical/database/casa_db_adaptor.py", line 474, in close
    self.__export()
  File "/home/hugo/workspace/CubiCal/cubical/database/casa_db_adaptor.py", line 460, in __export
    casa_caltable_factory.create_B_table(self, "G:gain")
  File "/home/hugo/workspace/CubiCal/cubical/database/casa_db_adaptor.py", line 246, in create_B_table
    viscal_label="B Jones" if diag else "D Jones")
  File "/home/hugo/workspace/CubiCal/cubical/database/casa_db_adaptor.py", line 114, in init_empty
    t.putcell("MEAS_FREQ_REF", iddid, db.spwmeasfreq[spwid])
  File "/home/hugo/workspace/venv/lib/python3.6/site-packages/casacore/tables/table.py", line 1122, in putcell
    self._putcell(columnname, rownr, value)
RuntimeError: ValueHolderRep::asInt - invalid data type Array<Int>
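
For what it's worth, that error is python-casacore refusing an array-valued putcell on a scalar integer column, which suggests db.spwmeasfreq[spwid] is arriving as a one-element array rather than a plain int. A standalone sketch that reproduces the behaviour (the table name is invented for this demo; casting to a Python int is the assumed fix):

```python
import numpy as np
from casacore.tables import table, maketabdesc, makescacoldesc

# Scalar int column, as MEAS_FREQ_REF is in a caltable; demo table name only.
desc = maketabdesc([makescacoldesc("MEAS_FREQ_REF", 0)])
t = table("putcell_demo.table", desc, nrow=1, ack=False)

val = np.array([5], dtype=np.int32)     # stand-in for db.spwmeasfreq[spwid]
try:
    t.putcell("MEAS_FREQ_REF", 0, val)  # array value raises the ValueHolderRep::asInt error
except RuntimeError as exc:
    print("putcell rejected the array:", exc)

t.putcell("MEAS_FREQ_REF", 0, int(val[0]))  # casting to a plain Python int works
t.close()
```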

@bennahugo
Collaborator

Ok, I can reproduce at least something related to the original enquiry by @caseyjlaw: whenever the gain term is not named G, it is not dumped...

This needs some serious rework, and things need retesting - I don't think anybody really used this feature in anger before we migrated to py3. I'm on AoD duty and have a few big-ticket things for the next 3-4 weeks @JSKenyon - I can't say I will be able to make much headway on this until after that.

@bennahugo
Collaborator

To keep track of the remaining issues: #257 (probably related to what I just saw as well), #272 and #258.
