Make the remainder sensing datetime count threshold configurable. This could be in settings.yaml or passed as an argument to the DISP-S1 Historical Processing App.
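A minimal sketch of how that threshold could be resolved, assuming an argparse-based CLI with a settings.yaml fallback; the option name, settings key, and default value below are placeholders, not the app's actual configuration:

```python
import argparse
import yaml

def get_remainder_threshold(settings_path="settings.yaml"):
    """Resolve the remainder sensing-datetime count threshold:
    CLI argument first, then settings.yaml, then a placeholder default."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--remainder-threshold", type=int, default=None,
                        help="Minimum remaining sensing datetimes needed to run a final smaller-k batch")
    args, _ = parser.parse_known_args()
    if args.remainder_threshold is not None:
        return args.remainder_threshold

    with open(settings_path) as f:
        settings = yaml.safe_load(f) or {}
    # Placeholder key name and default; the real settings.yaml key would be
    # decided at implementation time.
    return settings.get("DISP_S1_REMAINDER_SENSING_DATETIME_THRESHOLD", 1)
```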
Checked for duplicates
Yes - I've already checked
Alternatives considered
Yes - and alternatives don't suffice
Related problems
No response
Describe the feature request
At the end of a run, when there aren't enough remaining dates for a full k's worth of data, run_disp_s1_historical_processing.py currently writes out the daac_data_subscriber commands for running the remainder in reprocessing mode. However, now that I give it some more thought, I think we can just run it once more as a historical processing with a smaller k (keeping m the same). For example, frame 11116 has 284 sensing times. After 18 runs, 14 dates remain, which is not enough for another k=15 run. However, we could run it with k=14 over the last 14 remaining dates, i.e. the range 2023-04-06 to 2023-09-10.

I don't understand the algorithm well enough to know whether mixing products generated with k=15 and a final one with k=14 would make a difference in the product (for other frames the remaining k could be much smaller). The alternative would be to run 14 reprocessing jobs with those 14 dates at k=15, which would cause the triggering logic to grab the previous 15 frame dates; but the compressed CSLCs already represent much of that k.
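To illustrate the arithmetic behind the proposal, here is a minimal sketch (hypothetical helper name and placeholder m value, not the actual run_disp_s1_historical_processing.py code) of sizing the final batch with a smaller k while keeping m unchanged:

```python
def plan_historical_batches(num_sensing_times, k, m):
    """Split sensing times into full k-sized historical batches plus one
    smaller final batch for the remainder (hypothetical sketch)."""
    full_runs, remainder = divmod(num_sensing_times, k)
    batches = [{"k": k, "m": m} for _ in range(full_runs)]
    if remainder:
        # Proposed behavior: one more historical run with k=remainder and the
        # same m, instead of emitting `remainder` separate reprocessing jobs.
        batches.append({"k": remainder, "m": m})
    return batches

# Frame 11116 example from this issue: 284 sensing times with k=15
# -> 18 full runs (270 dates) plus one final run with k=14.
batches = plan_historical_batches(284, k=15, m=5)  # m=5 is a placeholder
print(len(batches), batches[-1])
```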
So this is something we need to ask the ADT/PST team: what is the desired behavior? Running a single historical job with k=14 would save days' worth of time over running 14 reprocessing jobs.

Once we understand what ADT/PST expects, we will implement it.
There is also a small bug: at the end of the run, the application currently keeps printing the reprocessing commands. We need to fix this because 1) it's annoying, and more importantly 2) we don't want to submit multiple identical jobs at the end.
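One possible shape of the fix, sketched under the assumption that the commands are emitted from a loop; the function and the `build_reprocessing_command` callable are hypothetical stand-ins, not the application's real code:

```python
# Sketch only: guard so the remainder reprocessing commands are printed at most once.
emitted_remainder_commands = False

def maybe_emit_reprocessing_commands(remaining_dates, k, build_reprocessing_command):
    """Print the remainder commands a single time per run (hypothetical sketch)."""
    global emitted_remainder_commands
    if len(remaining_dates) >= k or emitted_remainder_commands:
        return  # a full k batch is still possible, or we already printed the remainder
    for date in remaining_dates:
        print(build_reprocessing_command(date))
    emitted_remainder_commands = True
```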