Operational model dependencies
.bashrc on CETO does some module loads; we should excise them and then fix the broken modules.
You have to log in to remote hosts manually at least once to get the host-key yes/no prompt out of the way; this should be avoidable by passing the correct arguments to the ssh commands so the prompt is skipped (see the sketch below).
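A minimal sketch of the kind of wrapper that would skip the prompt, assuming the suite drives ssh from Python via subprocess; the host name in the usage comment is illustrative:

```python
import subprocess

# Options that suppress the interactive "continue connecting (yes/no)?" prompt
# on first contact with a host. "accept-new" (OpenSSH >= 7.6) records the new
# key automatically but still rejects a changed key; BatchMode stops ssh from
# asking anything interactively (it fails instead, which a suite can handle).
SSH_OPTS = [
    "-o", "StrictHostKeyChecking=accept-new",
    "-o", "BatchMode=yes",
]

def run_remote(host, command):
    """Run a command on a remote host without any interactive prompts."""
    return subprocess.run(["ssh", *SSH_OPTS, host, command],
                          check=True, capture_output=True, text=True)

# Example (hypothetical host name):
# run_remote("ceto", "echo connected")
```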
Add fallback to previous forecasts
The rolling archive only covers from -n days before the current date to +m days after it, which means you can't use the current setup to run hindcast models further back.
Add a fallback so that, if the data have not been downloaded, it kicks off GFS retrieval for the correct dates (see the sketch below).
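A rough sketch of what such a fallback could look like, assuming the rolling archive lives under `~/Data/GFS_rolling/forecast_data/` (the path mentioned further down) and that file names contain the forecast date; the naming pattern and the `retrieve` callable are placeholders for whatever the suite's GFS download task actually uses:

```python
from datetime import timedelta
from pathlib import Path

# Rolling GFS archive location (see the ungrib notes below).
GFS_ARCHIVE = Path.home() / "Data" / "GFS_rolling" / "forecast_data"

def missing_gfs_dates(cycle_start, n_days):
    """Return the dates in the run window with no file in the rolling archive."""
    missing = []
    for offset in range(n_days):
        day = cycle_start + timedelta(days=offset)
        # Hypothetical naming convention for the archived files.
        if not any(GFS_ARCHIVE.glob(day.strftime("gfs*%Y%m%d*"))):
            missing.append(day)
    return missing

def ensure_gfs_data(cycle_start, n_days, retrieve):
    """Kick off a retrieval for any dates the archive is missing."""
    for day in missing_gfs_dates(cycle_start, n_days):
        retrieve(day)  # placeholder for the suite's existing GFS download task
```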
If the suite falls over on this task it is most likely an upstream error, since ungrib and metgrid don't necessarily return an error when they fail.
An error in one of the GFS files might be identifiable by looking in `~/cylc-run/suite-name/work/cycle-point/run_ungrib/`:
if there are still PFILEs in there it means ungrib didn't run all the way through, as they should be cleaned up at the end. Then look at the FLX files in `~/cylc-run/suite-name/share/cycle/cycle-point/`;
the likely dodgy file is the one for the next timestep after the final FLX file. Look up the soft link in `~/Data/GFS_rolling/forecast_data/` for the relevant file and delete it (the file, not the link, since the links are remade every time the WRF suite runs); this will hopefully deal with it.
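The manual check above could be scripted along these lines; the `run_ungrib` work directory, the PFILE and `FLX:` names come from the description above, while the 3-hourly interval and the exact directory layout are assumptions to adjust for the real suite:

```python
from datetime import datetime, timedelta
from pathlib import Path

def find_suspect_timestep(suite, cycle, interval_hours=3):
    """Return the timestep whose GFS file is probably bad, or None if ungrib
    appears to have finished cleanly."""
    work = Path.home() / "cylc-run" / suite / "work" / cycle / "run_ungrib"
    share = Path.home() / "cylc-run" / suite / "share" / "cycle" / cycle

    # Leftover PFILEs mean ungrib stopped before cleaning up after itself.
    if not list(work.glob("PFILE*")):
        return None

    # Intermediate files are named like FLX:2021-01-01_00 (zero-padded, so a
    # lexicographic sort is also chronological); the timestep after the last
    # one written is the likely culprit.
    flx = sorted(share.glob("FLX:*"))
    if not flx:
        return None
    last = datetime.strptime(flx[-1].name.split(":", 1)[1], "%Y-%m-%d_%H")
    return last + timedelta(hours=interval_hours)

def delete_bad_gfs_file(link):
    """Delete the file a soft link in ~/Data/GFS_rolling/forecast_data/ points
    at (the file, not the link, since the links are remade every run)."""
    Path(link).resolve().unlink()
```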
General: Make paths generic to remote/host
Required CETO modules:
Required python packages: PyFVCOM, fvcom_rivers (river model only), tensorflow (river model only), keras (river model only)
Where do we hold this so it's 'safe'?
Add a fallback for files that already exist (i.e. restarting the suite).
Add fallback to use the previous forecast, and then to a non-atmospheric run, if the WRF output doesn't exist.
The WRF 'today' file is overwritten, so if FVCOM has fallen behind it won't exist for the right days.
Add startup task to update history using WRF archive.
Add fallback to no rivers if insufficient WRF data.
Check river numbers are sensible.
Requires a donor file; this could be self-generated.
Add fallback to use previous CMEMS if none available
Check the WRF file covers the appropriate period (see the sketch below).
Pick up old WRF files if not concurrent with the forecast run.
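A small check along these lines could cover the first point, assuming the WRF output is a standard wrfout-style netCDF file with the usual `Times` character variable; function names are illustrative:

```python
from datetime import datetime

import netCDF4

def wrf_time_range(wrf_file):
    """Return the first and last timestamps in a WRF output file."""
    with netCDF4.Dataset(wrf_file) as nc:
        nc.set_auto_mask(False)
        # WRF stores times as a character array of strings like "2021-01-01_00:00:00".
        times = netCDF4.chartostring(nc.variables["Times"][:])
    fmt = "%Y-%m-%d_%H:%M:%S"
    return datetime.strptime(str(times[0]), fmt), datetime.strptime(str(times[-1]), fmt)

def covers_period(wrf_file, start, end):
    """True if the WRF file spans the requested forecast period."""
    first, last = wrf_time_range(wrf_file)
    return first <= start and last >= end
```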
The Python MPI causes errors both when more processes are provided than there are operations to do, and when the file to regrid is too large (the 2 GB limit on the pickling used by the gather function). Both of these should be solvable; see the sketch below.
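One possible approach to both problems, sketched with mpi4py: give idle ranks an empty work list instead of letting them error, and gather results a few items at a time so no single pickled message approaches the 2 GB limit. Each individual item must still fit under that limit, so for very large arrays a buffer-based `Gather` of numpy arrays, or writing per-rank files and merging on disk, would be the safer route. `regrid_one` is a placeholder, not the suite's real regridding code:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def local_jobs(n_jobs):
    """Round-robin the jobs over the ranks; ranks beyond n_jobs simply get an
    empty list rather than causing an error."""
    return list(range(rank, n_jobs, size))

def regrid_one(job):
    """Placeholder (hypothetical) for regridding one timestep/variable."""
    return np.zeros(4)

def gather_in_chunks(results, chunk=1):
    """Gather results a few at a time so no single pickled message approaches
    the ~2 GB limit of the object-based gather()."""
    rounds = comm.allreduce(len(results), op=MPI.MAX)
    merged = []
    for i in range(0, rounds, chunk):
        part = comm.gather(results[i:i + chunk], root=0)
        if rank == 0:
            merged.extend(item for sub in part for item in sub)
    return merged if rank == 0 else None

if __name__ == "__main__":
    jobs = local_jobs(n_jobs=10)  # e.g. 10 timesteps to regrid
    regridded = gather_in_chunks([regrid_one(j) for j in jobs])
```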
Lots of functionality still to add here, especially archiving of forecast data.