@@ -34,4 +32,4 @@ At the moment, we are actively supporting:
The best way to get our help is by raising an issue on the [community forum](https://forum.access-hive.org.au/) with tags `help` and another tag for the specific framework.
-In the future, we are also aiming to support a broader range of frameworks and recipes.
\ No newline at end of file
+In the future, we are also aiming to support a broader range of frameworks and recipes that are currently unsupported (see [our community resource lists](../../community_resources/community_med/index.md) for this collection).
\ No newline at end of file
diff --git a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_esmvaltool.md b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_esmvaltool.md
index 4f9ab009f..5823d465d 100644
--- a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_esmvaltool.md
+++ b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_esmvaltool.md
@@ -1,12 +1,22 @@
# Tutorial for using `esmvaltool` on Gadi@NCI
-![ESMValTool-logo](https://docs.esmvaltool.org/en/latest/_static/ESMValTool-logo-2.png)
-
-{% include "call_contribute.md" %}
-
-
-
-[ACCESS ESMValTool Worflow recipe status][esmvaltool-workflow-repository]
+`esmvaltool` is the Earth System Model Evaluation Tool.
+
+???+ warning "Support Level: Supported on Gadi, but not owned by ACCESS-NRI"
+
+ ESMValTool is a community-developed climate model diagnostics and evaluation software package.
+
+ ACCESS-NRI does not own the code of ESMValTool, but actively supports the use of ESMValTool on Gadi.
+ ACCESS-NRI provides access to the latest version of ESMValTool via the `xp65` access-med conda environment deployed on NCI-Gadi.
+
+
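In practice, entering the `xp65` access-med environment on Gadi usually looks like the following sketch (the module path and names are assumed from the `access-med` deployment; check `module avail` on Gadi):

```shell
# Assumed location of the ACCESS-NRI xp65 module tree
module use /g/data/xp65/public/modules
# Load the access-med conda environment that provides ESMValTool
module load conda/access-med
# Verify ESMValTool is available
esmvaltool version
```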
## About ESMValTool
@@ -70,32 +80,41 @@ The ESMValTool is released under the Apache License, version 2.0. Citation of th
Besides the above citation, users are kindly asked to register any journal articles (or other scientific documents) that use the software at the ESMValTool webpage (http://www.esmvaltool.org/). Citing the Software Documentation Paper and registering your paper(s) will serve to document the scientific impact of the Software, which is of vital importance for securing future funding. You should consider this an obligation if you have taken advantage of the ESMValTool, which represents the end product of considerable effort by the development team.
-## ESMValTool recipes examples
+## ESMValTool recipe examples
-Below you can find the recipes from `esmvaltool` that we are providing to run on Gadi. The original recipes are
+To find the available recipes, please see the [ACCESS ESMValTool Workflow recipe status][esmvaltool-workflow-repository].
+
+Below we showcase example recipes from `esmvaltool` that we are providing to run on Gadi:
-
diff --git a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb.md b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb.md
index b3497e70a..d3ca5bf8b 100644
--- a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb.md
+++ b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb.md
@@ -1,6 +1,15 @@
# `ilamb` on Gadi at NCI
-ACCESS-NRI is maintaining a version of the `python` package `ilamb` for International Land Model Benchmarking (ILAMB) and International Ocean Model Benchmark (IOMB) on Gadi at the National Compuational Infrastructure (NCI).
+`ilamb` is a Python framework for International Land Model Benchmarking (ILAMB) and International Ocean Model Benchmark (IOMB).
+
+???+ warning "Support Level: Supported on Gadi, but not owned by ACCESS-NRI"
+
+ ILAMB/IOMB is a community-developed climate model diagnostics and evaluation software package.
+
+ ACCESS-NRI does not own the code of ILAMB/IOMB, but actively supports the use of ILAMB/IOMB on Gadi.
+ ACCESS-NRI provides access to the latest version of ILAMB/IOMB via the `xp65` access-med conda environment deployed on NCI-Gadi.
+
+ACCESS-NRI is maintaining a version of the package `ilamb` on Gadi at the National Computational Infrastructure (NCI).
Here, we provide a quick tutorial on how to use `ilamb` on Gadi. We assume that you already have access to Gadi, logged onto Gadi via secure shell (ssh) and loaded our `access-med` `conda` environment (if not, follow [these instructions](../model_evaluation_getting_started/index.md)).
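Once the environment is loaded, a benchmark is typically launched through the `ilamb-run` entry point; a minimal sketch (the configure file and data paths below are placeholders, not files that ship with this tutorial):

```shell
# ILAMB resolves observational data and model output relative to this root (placeholder path)
export ILAMB_ROOT=/path/to/ILAMB_sample
# Run the benchmarks listed in a configure file against models found under model_root
ilamb-run --config cmip.cfg --model_root $ILAMB_ROOT/MODELS/ --regions global
```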
diff --git a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_metplus.md b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_metplus.md
index d18f9336c..f7fd2e370 100644
--- a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_metplus.md
+++ b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_metplus.md
@@ -2,7 +2,12 @@
[METplus](https://dtcenter.org/community-code/metplus) is the enhanced Model Evaluation Tools (METplus) verification system.
-???+ int "ACCESS-NRI is actively supporting METplus on Gadi"
+???+ warning "Support Level: Supported on Gadi, but not owned by ACCESS-NRI"
+
+ METplus was developed by the Developmental Testbed Center (DTC) and is being actively developed by NCAR/Research Applications Laboratory (RAL), NOAA/Earth Systems Research Laboratories (ESRL), NOAA/Environmental Modeling Center (EMC), and is open to community contributions.
+
+ ACCESS-NRI does not own the code of METplus, but actively supports the use of METplus on Gadi.
+ ACCESS-NRI provides access to the latest version of METplus via the `access` conda environment deployed on NCI-Gadi.
For detailed information, tutorials and more of [METplus](https://metplus.readthedocs.io/en/latest/index.html), please go to the
@@ -16,7 +21,7 @@ For detailed information, tutorials and more of [METplus](https://metplus.readth
## What is METplus?
-[METplus](https://dtcenter.org/community-code/metplus) is a verification framework that spans a wide range of temporal (warn-on-forecast to climate) and spatial (storm to global) scales. It is intended to be extensible through additional capability developed by the community The core components of the framework include the [Model Evaluation Tools (MET)](https://met.readthedocs.io/en/latest/), the associated database and display systems called METviewer and METexpress, and a suite of Python wrappers to provide low-level automation and examples, also called use-cases. METplus will be a component of NOAA's Unified Forecast System (UFS) cross-cutting infrastructure as well as NCAR's System for Integrated Modeling of the Atmosphere (SIMA). METplus was developed by the Developmental Testbed Center (DTC) and is being actively developed by NCAR/Research Applications Laboratory (RAL), NOAA/Earth Systems Research Laboratories (ESRL), NOAA/Environmental Modeling Center (EMC), and is open to community contributions.
+[METplus](https://dtcenter.org/community-code/metplus) is a verification framework that spans a wide range of temporal (warn-on-forecast to climate) and spatial (storm to global) scales. It is intended to be extensible through additional capability developed by the community. The core components of the framework include the [Model Evaluation Tools (MET)](https://met.readthedocs.io/en/latest/), the associated database and display systems called METviewer and METexpress, and a suite of Python wrappers to provide low-level automation and examples, also called use-cases. METplus will be a component of NOAA's Unified Forecast System (UFS) cross-cutting infrastructure as well as NCAR's System for Integrated Modeling of the Atmosphere (SIMA).
## Showcase of METplus 5.0
diff --git a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_pangeo_cosima.md b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_pangeo_cosima.md
index 7c1bccf52..d056b304f 100644
--- a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_pangeo_cosima.md
+++ b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_pangeo_cosima.md
@@ -1,12 +1,15 @@
# COSIMA Cookbook on NCI's Gadi
-COSIMA is the [Consortium for Ocean-Sea Ice Modelling in Australia](http://cosima.org.au/), which brings together Australian researchers involved in global ocean and sea ice modelling.
+???+ warning "Support Level: Supported on Gadi, but not owned by ACCESS-NRI"
+
+ The COSIMA Cookbook is developed and maintained by the Consortium for Ocean-Sea Ice Modelling in Australia.
+
+ ACCESS-NRI does not own the code of the COSIMA Cookbook, but actively supports the use of the COSIMA Cookbook and its collection of `cosima-recipes` on Gadi.
+ ACCESS-NRI provides access to the latest version of the COSIMA Cookbook via the `hh5` conda environment deployed on NCI-Gadi.
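For orientation, loading an `hh5` conda environment on Gadi typically follows this pattern (module names assumed; check `module avail` after adding the hh5 module path):

```shell
# Assumed location of the hh5 public module tree
module use /g/data/hh5/public/modules
# Load the analysis environment in which the COSIMA Cookbook is installed
module load conda/analysis3
```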
-The COSIMA Cookbook is a framework for analysing output from ocean-sea ice models. The focus is on the [ACCESS-OM2](../../models/configurations/access-om.md) suite of models being developed and run by members of [COSIMA]((http://cosima.org.au/)). But this framework is suited to analysing any MOM5/MOM6 output, as well as output from other models.
+COSIMA is the Consortium for Ocean-Sea Ice Modelling in Australia, which brings together Australian researchers involved in global ocean and sea ice modelling. The consortium provides a collection of `cosima-recipes` for the evaluation of ocean-sea ice models, curated for use on Gadi.
-???+ warning "The COSIMA Cookbook is a framework by COSIMA"
- The COSIMA Cookbook itself is maintained by the COSIMA members.
- ACCESS-NRI is only providing support for the COSIMA Cookbook and its collection of `cosmia-recipes` for the evaluation of ocean-sea ice modelling on Gadi.
+The COSIMA Cookbook is a framework for analysing output from ocean-sea ice models. The focus is on the [ACCESS-OM2](../../models/configurations/access-om.md) suite of models being developed and run by members of [COSIMA](http://cosima.org.au/). But this framework is suited to analysing any MOM5/MOM6 output, as well as output from other models.
## Getting Started
diff --git a/docs/models/run-a-model/run-access-cm.md b/docs/models/run-a-model/run-access-cm.md
index 372b7377c..79df67617 100644
--- a/docs/models/run-a-model/run-access-cm.md
+++ b/docs/models/run-a-model/run-access-cm.md
@@ -1,6 +1,5 @@
{% set model = "ACCESS-CM" %}
-
-# Run {{ model }}
+# Run {{ model }}
## Requirements
Before running {{ model }}, you need to make sure to possess the right tools and to have an account with specific institutions.
@@ -51,31 +50,31 @@ To copy an existing suite, on accessdev:
Run `mosrs-auth` to authenticate using your MOSRS credentials:
-
+
mosrs-auth
Please enter the MOSRS password for <MOSRS-username>:
- Successfully authenticated with MOSRS as <MOSRS-username>
-
+
Successfully authenticated with MOSRS as <MOSRS-username>
+
Run `rosie checkout <suite-ID>` to create a local copy of the `<suite-ID>` from the UKMO repository (used mostly for testing and examining existing suites):
-
+
rosie checkout <suite-ID>
[INFO] create: /home/565/<$USER>/roses
[INFO] <suite-ID>: local copy created at /home/565/<$USER>/roses/<suite-ID>
-
+
Alternatively, run `rosie copy <suite-ID>` to create a new full copy (local and remote in the UKMO repository) rather than just a local copy. When a new suite is created in this way, a new unique name is generated within the repository, and populated with some descriptive information about the suite along with all the initial configuration details:
-
+
rosie copy <suite-ID>
Copy "<suite-ID>/trunk@<trunk-ID>" to "u-?????"? [y or n (default)] y
[INFO] <new-suite-ID>: created at https://code.metoffice.gov.uk/svn/roses-u/<suite-n/a/m/e/>
[INFO] <new-suite-ID>: copied items from <suite-ID>/trunk@<trunk-ID>
[INFO] <suite-ID>: local copy created at /home/565/<$USER>/roses/<new-suite-ID>
-
+
For additional `rosie` options, run
@@ -90,10 +89,10 @@ The suite directory usually contains 2 subdirectories and 3 files:
`rose-suite.conf` → the main suite configuration file.
`rose-suite.info` → suite information file.
`suite.rc` → the Cylc control script file (Jinja2 language).
-
+
ls ~/roses/<suite-ID>
app meta rose-suite.conf rose-suite.info suite.rc
-
+
----------------------------------------------------------------------------------------
@@ -110,13 +109,13 @@ to open the
Rose GUI and inspect the suite information. The `&` is optional and keeps the terminal prompt active while running the GUI as a separate process.
-
+
cd ~/roses/<suite-ID>
rose edit &
[<N>] <PID>
-
+
### Change NCI project
To make sure we run the suite under the NCI project we belong to, we can navigate to suite conf → Machine and Runtime Options, edit the Compute project field, and click the Save button. (Check how to connect to a project if you have not joined one yet.)
@@ -168,7 +167,7 @@ To run an {{ model }} suite, on accessdev:
After the initial tasks get executed, the Cylc GUI will open up and you will be able to see and control all the different tasks in the suite as they are run:
-
+
cd ~/roses/<suite-ID>
rose suite-run
[INFO] export CYLC_VERSION=7.8.3
@@ -215,7 +214,7 @@ To run an {{ model }} suite, on accessdev:
[INFO] $ cylc ping -v --host=accessdev.nci.org.au <suite-ID>
[INFO] $ ps -opid,args <PID> # on accessdev.nci.org.au
-
+
If after you run the command `rose suite-run` you get an error similar to the following:
[FAIL] Suite "<suite-ID>" appears to be running:
@@ -268,7 +267,7 @@ To investigate the cause of a failure, we need to look at the logs (job.er
They are then further separated into "attempts" (consecutive failed/successful tasks), with `NN` being a symlink to the most recent attempt.
In our example, the failure occurred for the 09500101 simulation cycle (starting date on 1st January 950) in the coupled task. Therefore, the directory in which to find the `job.err` and `job.out` files is `~/cylc-run/<suite-ID>/log/job/09500101/coupled/NN`.
-
+
cd ~/cylc-run/<suite-ID>
ls
app cylc-suite.db log log.20230530T051952Z meta rose-suite.info share suite.rc suite.rc.processed work
@@ -287,7 +286,7 @@ To investigate the cause of a failure, we need to look at the logs (job.er
cd NN
ls
job job-activity.log job.err job.out job.status
-
+
----------------------------------------------------------------------------------------
@@ -301,13 +300,13 @@ To scan for active suites run `cylc scan`.
To reopen the Cylc GUI, from inside the suite directory run `rose suite-gcontrol`.
-
+
cylc scan
<suite-ID> <$USER>@accessdev.nci.org.au:<port>
cd ~/roses/<suite-ID>
rose suite-gcontrol
-
+
### STOP a suite
To shut down a suite in a safe manner, from inside the suite directory run
@@ -334,7 +333,7 @@ There are two main ways to restart a suite:
`rose suite-run --restart` to re-install the suite and reopen Cylc in the same state as when it was stopped (you may need to manually trigger failed tasks from the Cylc GUI).
-
+
cylc
cd ~/roses/<suite-ID>
rose suite-run --restart
@@ -366,7 +365,7 @@ There are two main ways to restart a suite:
[INFO] $ cylc ping -v --host=accessdev.nci.org.au <suite-ID>
[INFO] $ ps -opid,args <PID> # on accessdev.nci.org.au
-
+
'HARD' restart
@@ -416,7 +415,7 @@ This directory contains 2 subdirectories:
For the atmospheric output data, each file is usually a UM fieldsfile or netCDF file, formatted as `<suite-name>a.p<output-stream-identifier><year><month-string>`. In the case of the u-br565 suite we will have:
-
+
cd /scratch/<$PROJECT>/<$USER>/archive
ls
br565 <other-suite-name> <other-suite-name>
@@ -425,7 +424,7 @@ In the case of the u-br565 suite we will have:
history restart
ls history/atm
br565a.pd0950apr.nc br565a.pd0950aug.nc br565a.pd0950dec.nc br565a.pd0950feb.nc br565a.pd0950jan.nc br565a.pd0950jul.nc br565a.pd0950jun.nc br565a.pd0950mar.nc br565a.pd0950may.nc br565a.pd0950nov.nc br565a.pd0950oct.nc br565a.pd0950sep.nc br565a.pd0951apr.nc br565a.pd0951aug.nc br565a.pd0951dec.nc br565a.pm0950apr.nc br565a.pm0950aug.nc br565a.pm0950dec.nc br565a.pm0950feb.nc br565a.pm0950jan.nc br565a.pm0950jul.nc br565a.pm0950jun.nc br565a.pm0950mar.nc br565a.pm0950may.nc br565a.pm0950nov.nc br565a.pm0950oct.nc br565a.pm0950sep.nc br565a.pm0951apr.nc br565a.pm0951aug.nc br565a.pm0951dec.nc netCDF
-
+
@@ -439,10 +438,10 @@ In the directory there are also some files formatted as <suite-name>
For more details on how to control the frequency and formatting of restart dumps, check Rose GUI user guide (TO CHECK). -->
In the case of the u-br565 suite we will have:
-
+
ls /scratch/<$PROJECT>/<$USER>/archive/br565/restart/atm
br565a.da09500201_00 br565a.da09510101_00 br565.xhist-09500131 br565.xhist-09501231
-
+
References
diff --git a/docs/models/run-a-model/run-access-esm.md b/docs/models/run-a-model/run-access-esm.md
index 8aa385149..703c1bc2c 100644
--- a/docs/models/run-a-model/run-access-esm.md
+++ b/docs/models/run-a-model/run-access-esm.md
@@ -1,5 +1,5 @@
{% set model = "ACCESS-ESM" %}
-# Run {{ model }}
+# Run {{ model }}
## Requirements
Before running {{ model }}, you need to make sure to possess the right tools and to have an account with specific institutions.
@@ -30,10 +30,10 @@ For the general requirements needed to run all ACCESS models, please refer to th
To check that payu is effectively available, you can run:
payu --version
-
+
payu --version
1.0.19
-
+
----------------------------------------------------------------------------------------
@@ -43,7 +43,7 @@ A suitable {{ model }} pre-industrial configuration is available on the
mkdir -p ~/access-esm
cd ~/access-esm
git clone https://github.com/coecms/esm-pre-industrial
@@ -54,7 +54,7 @@ In order to get it, on Gadi, create a directory where to keep the model c
remote: Total 767 (delta 173), reused 274 (delta 157), pack-reused 472
Receiving objects: 100% (767/767), 461.57 KiB | 5.24 MiB/s, done.
Resolving deltas: 100% (450/450), done.
-
+
Some modules might interfere with the `git` commands (for example matlab/R2018a). If you are running into issues during the cloning of the repository, it might be a good idea to run `module purge` first, before trying again.
@@ -86,7 +86,7 @@ This will create the laboratory directory, along with other subdirectorie
`work` → temporary directory where the model is actually run. It gets cleaned after each run.
`archive` → directory where the output is placed after each run.
-
+
cd ~/access-esm/esm-pre-industrial
payu init
laboratory path: /scratch/$PROJECT/$USER/access-esm
@@ -94,7 +94,7 @@ This will create the laboratory directory, along with other subdirectorie
input path: /scratch/$PROJECT/$USER/access-esm/input
work path: /scratch/$PROJECT/$USER/access-esm/work
archive path: /scratch/$PROJECT/$USER/access-esm/archive
-
+
### Edit the Master Configuration file
@@ -223,7 +223,7 @@ After editing the configuration, we are ready to run {{ model }}.
As a first step, from the control directory, it is good practice to run:
payu setup
This will prepare the model run, based on the experiment configuration.
-
+
payu setup
laboratory path: /scratch/$PROJECT/$USER/access-esm
binary path: /scratch/$PROJECT/$USER/access-esm/bin
@@ -243,7 +243,7 @@ This will prepare the model run, based on the experiment configuration.
Updating full hashes for 30 files in manifests/restart.yaml
Writing manifests/restart.yaml
Writing manifests/exe.yaml
-
+
You can skip this step, as it is also included in the run command. However, running it explicitly helps to check for errors and to make sure the executable and restart directories are accessible.
@@ -256,7 +256,7 @@ This will submit a single job to the queue with a total run length of runt
The `-f` option ensures that payu will run even if there is an existing non-empty work directory, which happens if a run crashes.
-
+
payu run -f
Loading input manifest: manifests/input.yaml
Loading restart manifest: manifests/restart.yaml
@@ -264,7 +264,7 @@ This will submit a single job to the queue with a total run length of runt
payu: Found modules in /opt/Modules/v4.3.0
qsub -q normal -P <project> -l walltime=11400 -l ncpus=384 -l mem=1536GB -N pre-industrial -l wd -j n -v PAYU_PATH=/g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin,MODULESHOME=/opt/Modules/v4.3.0,MODULES_CMD=/opt/Modules/v4.3.0/libexec/modulecmd.tcl,MODULEPATH=/g/data3/hh5/public/modules:/etc/scl/modulefiles:/opt/Modules/modulefiles:/opt/Modules/v4.3.0/modulefiles:/apps/Modules/modulefiles -W umask=027 -l storage=gdata/access+gdata/hh5 -- /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin/python3.9 /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin/payu-run
<job-ID>.gadi-pbs
-
+
### Run configuration for multiple years
If you want to run the {{ model }} configuration for multiple internal run lengths (controlled by `runtime` in the `config.yaml` file), you can use the option `-n`:
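As a hedged sketch, if `runtime` is set to one year, the following would queue five consecutive one-year runs (the run count here is illustrative):

```shell
# Run the configuration 5 times in sequence; each submission covers one `runtime` period
payu run -f -n 5
```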
@@ -332,14 +332,14 @@ Currently, there is no specific tool to monitor {{ model }} runs.
One way to check the status of our run is to run:
qstat -u $USER
This will show the status of all your PBS jobs (if any PBS jobs have been submitted):
-
+
qstat -u $USER
Job id Name User Time Use S Queue
--------------------- ---------------- ---------------- -------- - -----
<job-ID>.gadi-pbs pre-industrial <$USER> <time> R normal-exec
<job-ID-2>.gadi-pbs <other-job-name> <$USER> <time> R normal-exec
<job-ID-3>.gadi-pbs <other-job-name> <$USER> <time> R normal-exec
-
+
If you changed the `jobname` in the PBS resources of the Master Configuration file, that will be your job's Name instead of pre-industrial.
S indicates the status of your run:
@@ -370,13 +370,13 @@ The format of a typical output folder is outputXXX
, whereas the typical restart folder is usually formatted as restartXXX, with XXX being the number of the internal run, starting from 000.
In the respective folders, outputs and restarts are separated for each model component.
For the atmospheric output data, each file is usually a UM fieldsfile, formatted as `<UM-suite-identifier>a.p<output-stream-identifier><time-identifier>`.
-
+
cd /scratch/$PROJECT/$USER/access-esm/archive/esm-pre-industrial
ls
output000 pbs_logs restart000
ls output000/atmosphere
aiihca.daa1210 aiihca.daa1810 aiihca.paa1apr aiihca.paa1jun aiihca.pea1apr aiihca.pea1jun aiihca.pga1apr aiihca.pga1jun atm.fort6.pe0 exstat ihist prefix.CNTLGEN UAFLDS_A aiihca.daa1310 aiihca.daa1910 aiihca.paa1aug aiihca.paa1mar aiihca.pea1aug aiihca.pea1mar aiihca.pga1aug aiihca.pga1mar cable.nml fort.57 INITHIS prefix.PRESM_A um_env.py aiihca.daa1410 aiihca.daa1a10 aiihca.paa1dec aiihca.paa1may aiihca.pea1dec aiihca.pea1may aiihca.pga1dec aiihca.pga1may CNTLALL ftxx input_atm.nml SIZES xhist aiihca.daa1510 aiihca.daa1b10 aiihca.paa1feb aiihca.paa1nov aiihca.pea1feb aiihca.pea1nov aiihca.pga1feb aiihca.pga1nov CONTCNTL ftxx.new namelists STASHC aiihca.daa1610 aiihca.daa1c10 aiihca.paa1jan aiihca.paa1oct aiihca.pea1jan aiihca.pea1oct aiihca.pga1jan aiihca.pga1oct debug.root.01 ftxx.vars nout.000000 thist aiihca.daa1710 aiihca.daa2110 aiihca.paa1jul aiihca.paa1sep aiihca.pea1jul aiihca.pea1sep aiihca.pga1jul aiihca.pga1sep errflag hnlist prefix.CNTLATM UAFILES_A
-
+
----------------------------------------------------------------------------------------
References
diff --git a/docs/models/run-a-model/run-access-om.md b/docs/models/run-a-model/run-access-om.md
index 6ce8698b0..f467b01e2 100644
--- a/docs/models/run-a-model/run-access-om.md
+++ b/docs/models/run-a-model/run-access-om.md
@@ -1,6 +1,6 @@
{% set model = "ACCESS-OM" %}
-# Run {{ model }}
+# Run {{ model }}
## Requirements
Before running {{ model }}, you need to make sure to possess the right tools and to have an account with specific institutions.
@@ -34,10 +34,10 @@ For the general requirements needed to run all ACCESS models, please refer to th
To check that payu is effectively available, you can run:
payu --version
-
+
payu --version
1.0.19
-
+
----------------------------------------------------------------------------------------
@@ -50,7 +50,7 @@ This is a 1° horizontal resolution configuration, with interannual forcing from
In order to get it, on Gadi, create a directory where to keep the model configuration, and clone the GitHub repo in it by running:
git clone https://github.com/COSIMA/1deg_jra55_iaf.git
-
+
mkdir -p ~/access-om
cd ~/access-om
git clone https://github.com/COSIMA/1deg_jra55_iaf.git
@@ -61,7 +61,7 @@ In order to get it, on Gadi, create a directory where to keep the model c
remote: Total 14715 (delta 3383), reused 3379 (delta 3377), pack-reused 11314
Receiving objects: 100% (14715/14715), 35.68 MiB | 18.11 MiB/s, done.
Resolving deltas: 100% (10707/10707), done.
-
+
Some modules might interfere with the `git` commands (for example matlab/R2018a). If you are running into issues during the cloning of the repository, it might be a good idea to run `module purge` first, before trying again.
@@ -93,7 +93,7 @@ This will create the laboratory directory, along with other subdirectorie
`work` → temporary directory where the model is actually run. It gets cleaned after each run.
`archive` → directory where the output is placed after each run.
-
+
cd ~/access-om/1deg_jra55_iaf
payu init
laboratory path: /scratch/$PROJECT/$USER/access-om2
@@ -101,7 +101,7 @@ This will create the laboratory directory, along with other subdirectorie
input path: /scratch/$PROJECT/$USER/access-om2/input
work path: /scratch/$PROJECT/$USER/access-om2/work
archive path: /scratch/$PROJECT/$USER/access-om2/archive
-
+
### Edit the Master Configuration file
@@ -276,7 +276,7 @@ After editing the configuration, we are ready to run {{ model }}.
As a first step, from the control directory, it is good practice to run:
payu setup
This will prepare the model run, based on the experiment configuration.
-
+
payu setup
laboratory path: /scratch/$PROJECT/$USER/access-om2
binary path: /scratch/$PROJECT/$USER/access-om2/bin
@@ -295,7 +295,7 @@ This will prepare the model run, based on the experiment configuration.
Creating restart manifest
Writing manifests/restart.yaml
Writing manifests/exe.yaml
-
+
You can skip this step, as it is also included in the run command. However, running it explicitly helps to check for errors and to make sure the executable and restart directories are accessible.
@@ -308,7 +308,7 @@ This will submit a single job to the queue with a total run length of rest
The `-f` option ensures that payu will run even if there is an existing non-empty work directory, which happens if a run crashes.
-
+
payu run -f
payu: warning: Job request includes 47 unused CPUs.
payu: warning: CPU request increased from 241 to 288
@@ -318,7 +318,7 @@ This will submit a single job to the queue with a total run length of rest
payu: Found modules in /opt/Modules/v4.3.0
qsub -q normal -P tm70 -l walltime=10800 -l ncpus=288 -l mem=1000GB -N 1deg_jra55_iaf -l wd -j n -v PYTHONPATH=/g/data3/tm70/dm5220/scripts/python_modules/,PAYU_PATH=/g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin,PAYU_FORCE=True,MODULESHOME=/opt/Modules/v4.3.0,MODULES_CMD=/opt/Modules/v4.3.0/libexec/modulecmd.tcl,MODULEPATH=/g/data3/hh5/public/modules:/etc/scl/modulefiles:/opt/Modules/modulefiles:/opt/Modules/v4.3.0/modulefiles:/apps/Modules/modulefiles -W umask=027 -l storage=gdata/hh5+gdata/ik11+gdata/qv56 -- /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin/python3.9 /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin/payu-run
<job-ID>.gadi-pbs
-
+
### Run configuration for multiple years
If you want to run the {{ model }} configuration for multiple internal run lengths (controlled by `restart_period` in the `config.yaml` file), you can use the option `-n`:
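As a sketch, with `restart_period` controlling each internal run length, the following would submit five consecutive runs (the count is illustrative):

```shell
# Run the configuration 5 times in sequence; each run spans one `restart_period`
payu run -f -n 5
```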
@@ -335,14 +335,14 @@ Currently, there is no specific tool to monitor {{ model }} runs.
One way to check the status of our run is to run:
qstat -u $USER
This will show the status of all your PBS jobs (if any PBS jobs have been submitted):
-
+
qstat -u $USER
Job id Name User Time Use S Queue
--------------------- ---------------- ---------------- -------- - -----
<job-ID>.gadi-pbs 1deg_jra55_iaf <$USER> <time> R normal-exec
<job-ID-2>.gadi-pbs <other-job-name> <$USER> <time> R normal-exec
<job-ID-3>.gadi-pbs <other-job-name> <$USER> <time> R normal-exec
-
+
If you changed the `jobname` in the PBS resources of the Master Configuration file, that will be your job's Name instead of 1deg_jra55_iaf.
S indicates the status of your run:
@@ -371,11 +371,11 @@ Both outputs and restarts are stored into subfolders for each different configur
The format of a typical output folder is outputXXX, whereas the typical restart folder is usually formatted as restartXXX, with XXX being the number of the internal run, starting from 000.
In the respective folders, outputs and restarts are separated for each model component.
-
+
cd /scratch/$PROJECT/$USER/access-om2/archive/1deg_jra55_iaf
ls
output000 pbs_logs restart000
-
+
References
diff --git a/mkdocs.yml b/mkdocs.yml
index 1708aced4..0e7d4ca91 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -200,5 +200,5 @@ extra_css:
extra_javascript:
- https://unpkg.com/tablesort@5.3.0/dist/tablesort.min.js # For tablesort functionality
- - js/terminal_animation.js
+ - https://cdn.jsdelivr.net/gh/atteggiani/animated-terminal/animated-terminal.min.js
- js/miscellaneous.js