diff --git a/docs/models/run-a-model/run-access-esm.md b/docs/models/run-a-model/run-access-esm.md
index d012ce066..a0dd8463d 100644
--- a/docs/models/run-a-model/run-access-esm.md
+++ b/docs/models/run-a-model/run-access-esm.md
@@ -63,7 +63,7 @@ For the general requirements needed to run all ACCESS models, please refer to th
A suitable ACCESS-ESM pre-industrial configuration is available on the coecms GitHub.
To get it, on Gadi, create a directory in which to keep the model configuration, and clone the GitHub repo into it by running:

```
git clone https://github.com/coecms/esm-pre-industrial.git
```

Payu is available on Gadi through the `conda/analysis3` environment in the `hh5` project. To load it, run:

```
module use /g/data/hh5/public/modules
module load conda/analysis3
```

Some modules might interfere with `git` commands (for example `matlab/R2018a`). If you are running into issues during the cloning of the repository, it might be a good idea to run `module purge` first, before trying again.
Payu distinguishes between the *laboratory* directory (by default `/scratch/$PROJECT/$USER/access-esm`), where the model actually runs, and the *control* directory, where the model configuration resides (in our case `~/access-esm/esm-pre-industrial`).
This separation allows the configuration to be kept in the `$HOME` directory (being the only filesystem on Gadi that is actively backed up), without overloading it with too much data.
To set up the laboratory directory, from the control directory run:

```
payu init
```

This will create the laboratory directory, along with other subdirectories (depending on the configuration). The main subdirectories we are interested in are:

- `work` → temporary directory where the model is actually run. It gets cleaned after each run.
- `archive` → directory where the output is placed after each run.

The `config.yaml` file, located in the control directory, is the Master Configuration file.
```yaml
jobname: pre-industrial
queue: normal
walltime: 20:00:00
```

These are settings for the PBS scheduler. Edit lines in this section to change any of the PBS resources. For example, if you are part of the `tm70` project (ACCESS-NRI), to run the model under `tm70` add the following line to this section:

```yaml
project: tm70
```
```yaml
# note: if laboratory is a relative path, it is relative to /scratch/$PROJECT/$USER
laboratory: access-esm
```

This will set the laboratory directory. Relative paths are relative to `/scratch/$PROJECT/$USER`. Absolute paths can be specified as well.
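As a quick sketch of how a relative `laboratory` path resolves (assuming the behaviour described in the comment above; `p66` and `abc123` are made-up example values, not real project or user names):

```shell
# Hypothetical example values; on Gadi these come from your environment
PROJECT=p66
USER_NAME=abc123
laboratory=access-esm   # relative path, as set in config.yaml

# A relative laboratory path is resolved against /scratch/$PROJECT/$USER
echo "/scratch/$PROJECT/$USER_NAME/$laboratory"
# prints /scratch/p66/abc123/access-esm
```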
```yaml
model: access
```

The main model. This tells payu which driver to use (`access` stands for {{ model }}).
```yaml
submodels:
    - name: atmosphere
      model: um
      ncpus: 192
      exe: /g/data/access/payu/access-esm/bin/coe/um7.3x
      input:
          - /g/data/access/payu/access-esm/input/pre-industrial/atmosphere
          - /g/data/access/payu/access-esm/input/pre-industrial/start_dump
    - name: ocean
      model: mom
      ncpus: 180
      exe: /g/data/access/payu/access-esm/bin/coe/mom5xx
      input:
          - /g/data/access/payu/access-esm/input/pre-industrial/ocean/common
          - /g/data/access/payu/access-esm/input/pre-industrial/ocean/pre-industrial
    - name: ice
      model: cice
      ncpus: 12
      exe: /g/data/access/payu/access-esm/bin/coe/cicexx
      input:
          - /g/data/access/payu/access-esm/input/pre-industrial/ice
    - name: coupler
      model: oasis
      ncpus: 0
      input:
          - /g/data/access/payu/access-esm/input/pre-industrial/coupler
```
{{ model }} is a coupled model, which means it has multiple submodels (i.e. model components). This section specifies the submodels, with their respective driver (`model`), executable (`exe`), number of CPUs (`ncpus`), and input directories. Each submodel also has its own subdirectory in the control directory, holding additional component-specific configuration (e.g. for the atmosphere it is `~/access-esm/esm-pre-industrial/atmosphere`).
```yaml
collate:
    exe: /g/data/access/payu/access-esm/bin/mppnccombine
    restart: true
    mem: 4GB
```

The collate process joins a number of smaller files, which contain different parts of the model grid, together into target output files. The restart files are typically tiled in the same way and will also be joined together if the `restart` option is set to `true`.
```yaml
restart: /g/data/access/payu/access-esm/restart/pre-industrial
```

The location of the files used for a warm restart.
```yaml
calendar:
    start:
        year: 101
        month: 1
        days: 1

    runtime:
        years: 1
        months: 0
        days: 0
```

This section specifies the start date and internal run length.
The internal run length (`runtime`) can be different from the total run length. Also, the `runtime` value can be lowered, but should not be increased to a total of more than 1 year, to avoid errors. If you want to know more about the difference between internal run and total run lengths, or if you want to run the model for more than 1 year, check Run configuration for multiple years.
```yaml
runspersub: 5
```

{{ model }} configurations are often run in multiple steps (or cycles), with payu running a maximum of `runspersub` internal runs for every PBS job submission.
Keep in mind that if we increase `runspersub`, we might need to increase the walltime in the PBS resources.
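For instance (hypothetical values, not from the original configuration), if `runspersub` is raised from 5 to 10, a single PBS job has to cover twice as many internal runs, so the PBS section might become:

```yaml
# Hypothetical values: doubling runspersub roughly doubles the time
# a single PBS job needs, so the walltime request grows accordingly.
walltime: 40:00:00
runspersub: 10
```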
For more information about other configuration settings in the `config.yaml` file, please check how to configure your experiment with payu.
To prepare the model run, from the control directory run:

```
payu setup
```

This will prepare the model run, based on the experiment configuration.

To run the model for a single internal run length (controlled by `runtime` in the `config.yaml` file), run:

```
payu run -f
```

This will submit a single job to the queue with a total run length of `runtime`. If there is no previous run, it will start from the `start` date indicated in the `config.yaml` file; otherwise, it will perform a warm restart from a previously saved restart file.
The `-f` option ensures that payu will run even if there is an existing non-empty `work` directory, which happens if a run crashes.
To run the model for multiple internal run lengths (each controlled by `runtime` in the `config.yaml` file), you can use the option `-n`:

```
payu run -n <number-of-runs>
```

This will run the configuration `number-of-runs` times, with a total run length of `runtime * number-of-runs`. The number of consecutive PBS jobs submitted to the queue depends on the `runspersub` value specified in the `config.yaml` file.
With the `runtime`, `runspersub`, and `-n` parameters, we can have full control of our run:

- `runtime` defines the internal run length.
- `runspersub` defines the maximum number of internal runs for every PBS job submission.
- `-n` sets the number of internal runs to be performed.
For example, to run 20 years of simulation with a maximum of 5 years for every job submission, keep `runtime` at the default value of `1 year`, set `runspersub` to `5`, and run the configuration using `-n 20`:

```
payu run -n 20
```

This will submit subsequent jobs for the following years: 1 to 5, 6 to 10, 11 to 15, and 16 to 20, for a total of 4 PBS jobs.

Similarly, to run 7 years of simulation with a maximum of 3 years for every job submission, keep `runtime` at the default value of `1 year`, set `runspersub` to `3`, and run the configuration using `-n 7`:

```
payu run -n 7
```

This will submit subsequent jobs for the following years: 1 to 3, 4 to 6, and 7, for a total of 3 PBS jobs.
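The PBS job counts in the examples above follow from a simple ceiling division, `ceil(n / runspersub)`. A small shell sketch (the function name and variables are ours, not payu's):

```shell
# Number of PBS job submissions payu needs for n internal runs,
# with at most runspersub internal runs per submission.
pbs_jobs() {
    n=$1
    runspersub=$2
    echo $(( (n + runspersub - 1) / runspersub ))   # ceiling division
}

pbs_jobs 20 5   # prints 4 (years 1-5, 6-10, 11-15, 16-20)
pbs_jobs 7 3    # prints 3 (years 1-3, 4-6, 7)
```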
To run 3 months and 10 days of simulation in a single job submission, set `runtime` to:

```yaml
years: 0
months: 3
days: 10
```

set `runspersub` to `1` (or any value > 1), and run the configuration without `-n` (or with `-n` equal to `1`):

```
payu run
```

To run 16 months of simulation with a job submission every 4 months, set `runtime` to:

```yaml
years: 0
months: 4
days: 0
```

Since the internal run length is set to 4 months, to resubmit our jobs every 4 months (i.e. every internal run), we have to set `runspersub` to `1`. Finally, we will perform 4 internal runs by running the configuration with `-n 4`:

```
payu run -n 4
```
To check the status of your submitted runs, run:

```
qstat -u $USER
```

This will show the status of all your PBS jobs (if there are any PBS jobs submitted). If you changed `jobname` in the PBS resources of the Master Configuration file, that will be your job's Name instead of `pre-industrial`.
If there are no jobs with your `jobname` (or if there is no job submitted at all), your run might have successfully completed, or might have been terminated due to an error.
While the model is running, payu saves the model standard output and error streams in the `access.out` and `access.err` files in the control directory. You can examine these files, as the run progresses, to check on its status.
After the run is complete, the output and error files are renamed, by default, to `jobname.o<job-ID>` and `jobname.e<job-ID>`, respectively.
At the end of each run, the model output (and restart files) are moved from the `work` directory to the `archive` directory, under `/scratch/$PROJECT/$USER/access-esm/archive` (also symlinked in the control directory under `~/access-esm/esm-pre-industrial/archive`).
Outputs and restarts are stored in subfolders named after the configuration (`esm-pre-industrial` in our case), and inside the configuration folder, they are subdivided for each internal run.
The typical output folder is formatted as `outputXXX`, whereas the typical restart folder is usually formatted as `restartXXX`, with `XXX` being the number of the internal run, starting from `000`.
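Assuming the zero-padded, three-digit numbering described above, the archive folder names for a given internal run can be sketched as:

```shell
# Build the archive folder names for internal run number 7
# (numbering starts from 000, zero-padded to three digits)
run=7
printf 'output%03d\n' "$run"    # prints output007
printf 'restart%03d\n' "$run"   # prints restart007
```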
The atmospheric output files are UM fieldsfiles, formatted as `<UM-suite-identifier>a.p<output-stream-identifier><time-identifier>`.