diff --git a/docs/getting_started/are.md b/docs/getting_started/are.md
index ada5a4491..c99334f2a 100644
--- a/docs/getting_started/are.md
+++ b/docs/getting_started/are.md
@@ -4,9 +4,10 @@
ARE can give you access to NCI’s Gadi supercomputer and data collections.
- There are multiple applications included in ARE, but the two most used for ACCESS-related activities are Virtual Desktop (VDI) and JupyterLab.
+## Prerequisites
+To use ARE, you must have an NCI account and be a member of a project with computing resources (Service Units, SU). If you are new to ACCESS, follow the First Steps.

## Start an ARE session
diff --git a/docs/model_evaluation/model_diagnostics/index.md b/docs/model_evaluation/model_diagnostics/index.md
index 81b08a63a..f07f16070 100644
--- a/docs/model_evaluation/model_diagnostics/index.md
+++ b/docs/model_evaluation/model_diagnostics/index.md
@@ -2,11 +2,11 @@

## What is Model Live Diagnostics?

-The Model Live Diagnostics framework is a simple, easy to use and accessible Jupter-based framework for the ACCESS modelling community to check, monitor, visualise and evaluate model behaviour and progress on currently running or ‘live’ ACCESS models on the Australian NCI supercomputer Gadi.
+Model Live Diagnostics is a simple, accessible and easy-to-use Jupyter-based framework for the ACCESS modelling community to monitor, visualise and evaluate the behaviour of models in real time (live) while they run on Gadi.

-In addition to monitoring a live model, the package provides the functionality to load, visualise and compare legacy ACCESS model data with the selected live user model.
+In addition to monitoring a live model, the package also provides the functionality to load, visualise and compare legacy ACCESS model data with the live model.

-For detailed information, tutorials and more, please go to the
+For more information and tutorials, please visit:
-## Showcase: Monitoring total seawater mass of an ACCESS CM2 run
+## Showcase: Monitoring total seawater mass of an ACCESS-CM2 run

-In our showcase, we will monitor the progress of an [ACCESS Coupled Model 2 (ACCESS-CM2)](/models/run-a-model/run-access-cm) run.
+In this showcase, we monitor the progress of an [ACCESS Coupled Model 2 (ACCESS-CM2)](/models/run-a-model/run-access-cm) run.

-We first start a session (for details on the paths and package see the documentation) to automatically check for new model output with a given period (here: 20 minutes):
+To start a session that automatically checks for new model output at a given interval (here, 20 minutes):

```
import med_diagnostics
session = med_diagnostics.session.CreateModelDiagnosticsSession(model_type='CM2', model_path='path/to/your/live/model/data/output', period=20)
```

-Once a session is started, you will see the following sesion summary and blue status message while the new intake catalogue is being built from the live model data. Depending on the size of the model data, this can take a number of minutes.
+
+ For more details on paths and packages, refer to the ACCESS-NRI Model Diagnostics documentation.
+
+
+When the session starts, you will see the following session summary:
Output of the Model Live Diagnostics after a session has been started and a new catalogue is being built.
-Once the live model data catalogue has been successfully built, the blue status message will update and the orange status message will report the time and date of the last live model catalogue build.
+
+ The blue status message box appears while the new intake catalogue is being built from the live model data. Depending on the size of the model data, this can take several minutes.
+
+Once the live model data catalogue has been successfully built, the blue status message will update.
+
+The orange status message will report the time and date of the last live model catalogue build, as shown below:
Output of the Model Live Diagnostics after the catalogue has been built.
-All available datasets from the selected model will be listed in the dropdown. Select the dataset you wish to monitor and click ‘Load dataset’.
+All available datasets for the selected model will be listed in the dropdown menu.
+
+Select the dataset that you want to monitor (e.g., `ocean_scalar.1mon`) and click `Load dataset`.
Output of the Model Live Diagnostics with a dropdown menu of available datasets.
-Once loaded, a plot displaying the first data variable in the list will appear. Use the dropdown list to select and plot any available model variables.
+Once loaded, a plot of the first data variable in the list will appear.
+
+Use the dropdown menu to select and plot any available model variables listed.
Plot of total liquid seawater mass over time of the ‘live’ ACCESS-CM2 run.
-With a few more clicks, you can also load legacy data to compare with, for example other CM2 models like by578 or by578a:
+It is also possible to load and compare legacy data, such as other ACCESS-CM2 runs `by578` and `by578a`:
Plot of total liquid seawater mass over time of the ‘live’ ACCESS-CM2 run when compared to legacy model data.

diff --git a/docs/model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started.md b/docs/model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started.md
index e57a57212..5c183aeae 100644
--- a/docs/model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started.md
+++ b/docs/model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started.md
@@ -1,17 +1,17 @@

-# `conda` Environment for Model Evaluation on Gadi
+# Conda Environment for Model Evaluation on Gadi

-If you do not yet have `ssh` access to Gadi, refer to instructions on how to login to Gadi.
+If you do not have `ssh` access to Gadi, refer to the instructions on how to log in to Gadi.

-The following instructions explain how to load the curated `python` environment on NCI, which includes packages and scripts supported by ACCESS-NRI. Once loaded, these can be run directly on Gadi via `ssh`, `PBS` scripts, or in `JupyterLab`.
+The following instructions explain how to load the curated python environment on NCI, which includes packages and scripts supported by ACCESS-NRI. Once loaded, these can be run directly on Gadi via `ssh`, Portable Batch System (PBS) scripts, or in JupyterLab.

-???+ warning "ACCESS-NRI can provide code and support, but not computing resources"
+???+ warning "ACCESS-NRI provides code and support, but not computing resources"

    You do not automatically have access to all `/g/data/` storage on Gadi. You need to join an NCI project to view files on `/g/data/$PROJECT`.
    For model evaluation and diagnostics, you need to join projects `xp65` and `hh5` for code access and a `$PROJECT` with sufficient compute resources.

## What is the `access-med` environment?

-The complete list of dependencies for the `access-med` environment can be found in the environment.yml file of the ACCESS-NRI MED GitHub repository. These include `intake`, `esmvaltool` and `ilamb`:
+The complete list of dependencies for the `access-med` environment can be found in the environment.yml file of the ACCESS-NRI MED GitHub repository. These include, for example, `intake`, `esmvaltool` and `ilamb`:
List of packages that are provided as part of the xp65 access-med environment
@@ -20,7 +20,7 @@ The complete list of dependencies for the `access-med` environment can be found

To avoid running code on Gadi with incompatible packages, a conda environment called `access-med` is provided.
-To change to this curated environment, run the following commands after logging into Gadi and edit your `PBS` script accordingly:
+To change to this curated environment, run the following commands after logging into Gadi and edit your PBS script accordingly:

```
module use /g/data/xp65/public/modules
module load conda/access-med
```

@@ -28,10 +28,15 @@ module load conda/access-med

This will load the latest version of `access-med`, e.g. version `access-med-0.3`.
-To check which `conda` version you are using, run the following command:
+To check which python version you are using, run the following command:

```
which python
```

+
+ module use /g/data/xp65/public/modules

@@ -43,7 +48,7 @@ which python

/g/data/xp65/public/apps/med_conda_scripts/access-med-0.3.d/bin/python

-To test everything is working correctly, import the packages in `python3` as follows:
+To test everything is working correctly, import the packages in python3 as follows:

```python
import numpy as np
@@ -56,7 +61,7 @@ print(intake.__version__)
print(esmvaltool.__version__)
```

-If you want to run your code on Gadi using a Portable Batch System (`PBS`) job, add the `module use` and `module load` commands to your `PBS` script as shown in the `example_pbs.sh` `PBS` script below:
+If you want to run your code on Gadi using a PBS job, add the `module use` and `module load` commands to your PBS script as shown in the `example_pbs.sh` script below:

```
#!/bin/bash
@@ -75,35 +80,33 @@ module load conda/access-med
python3 your_code.py
```

-The content of `your_code.py` could simply comprise the `import` and `which version` lines from our above example.
+The content of `your_code.py` could simply comprise a few lines, such as the package imports and version checks from the example above.
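Since the page leaves the contents of `your_code.py` open, here is a minimal sketch of what such a script could contain (the file name and package choice simply follow the examples on this page; it is not an official ACCESS-NRI script):

```python
# your_code.py -- minimal sketch of a script to run under the access-med environment.
# The interpreter path printed should match the output of `which python` above.
import sys

import numpy as np

print(sys.executable)            # which python interpreter the job picked up
print("numpy:", np.__version__)  # confirms the package imports from the environment
```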
-To submit this `PBS` job, execute the following command:
+To submit your PBS job `example_pbs.sh`, run:

```
qsub example_pbs.sh
```

-In brief: this PBS script will submit a job to Gadi with the job name (`#PBS -N`) *example_pbs* under compute project (`#PBS -P`) `iq82` with a normal queue (`#PBS -q normalbw`), for 1 CPU (`#PBS -l ncpus=1`) with 2 GB RAM (`#PBS -l mem=2GB`), a walltime of 10 minutes (`#PBS -l walltime=00:10:00`) and data storage access to projects `xp65`. Note that for this example to work, you have to be member of the NCI project `xp65` and `iq82`. Adjust the `#PBS -P` option to match your compute project. Upon starting the job, it will change into to the working directory that you submitted the job from (`#PBS -l wd`) and load the access-med conda environment.
-
-This will submit a job to Gadi with the job name (`#PBS -N`) *example_pbs* under compute project (`#PBS -P`) *iq82* with a normalbw normal queue (`#PBS -q`). The number of CPUs requested is 1 CPU (`#PBS -l ncpus=1`) with 2 GB RAM (`#PBS -l mem=2GB`) and a walltime of 10 minutes (`#PBS -l walltime=00:10:00`). The data storage (`#PBS -l storage=gdata/xp65`) is data storage access to project `xp65`.
+The above PBS script will submit a job to Gadi with the job name *example_pbs* (`#PBS -N`) under the `iq82` compute project (`#PBS -P`) in the `normalbw` queue (`#PBS -q`). It will use 1 CPU (`#PBS -l ncpus=1`), 2 GB RAM (`#PBS -l mem=2GB`), a walltime of 10 minutes (`#PBS -l walltime=00:10:00`) and data storage access to project `xp65` (`#PBS -l storage=gdata/xp65`).

Note: to run this example, you need to be a member of both the `xp65` and `iq82` NCI projects.
Adjust the `#PBS -P` option to match your compute project.
-When the job starts, it will change to the working directory from where you submitted the job (`#PBS -l wd`) and load the access-med `conda` environment.
+When the job starts, it will change to the working directory from where you submitted the job (`#PBS -l wd`) and load the `access-med` conda environment.
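Putting the directives described above together, a complete `example_pbs.sh` might look as follows (a sketch assembled from the flags listed in this section, not a verbatim copy of the repository's script; adjust `-P` and `storage` to your own projects):

```shell
#!/bin/bash
#PBS -N example_pbs              # job name
#PBS -P iq82                     # compute project (change to your own)
#PBS -q normalbw                 # queue
#PBS -l ncpus=1                  # 1 CPU
#PBS -l mem=2GB                  # 2 GB RAM
#PBS -l walltime=00:10:00        # 10 minutes of walltime
#PBS -l storage=gdata/xp65       # data storage access to project xp65
#PBS -l wd                       # start in the directory the job was submitted from

module use /g/data/xp65/public/modules
module load conda/access-med

python3 your_code.py
```

Submitting it with `qsub example_pbs.sh` then queues the job as described.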
-
-For more information on running `PBS` jobs on NCI, refer to PBS Jobs.
+
+For more information on running PBS jobs on Gadi, refer to PBS Jobs.

## Running the `access-med` environment on ARE

-NCI also supports an interactive coding environment called Australian Research Environment (ARE). Its use is similar to submitting a `PBS` job via `qsub -I`, but with an added bonus of a dedicated graphical user interface for `Jupyter` notebooks.
+NCI also supports an interactive coding environment called the Australian Research Environment (ARE). Its use is similar to submitting a PBS job via `qsub -I`, but with the added bonus of a dedicated graphical user interface for Jupyter notebooks.

-To use ARE, you must have an NCI account and be a member of a project with computing resources (see section on [getting started](../../getting_started/first_steps)).
+For more information, refer to the Australian Research Environment (ARE) getting started guide.

-Once you login to ARE, click on JupyterLab in the Featured Apps section to launch a `JupyterLab` instance.
+Once you log in to ARE, click on JupyterLab in the Featured Apps section to launch a JupyterLab instance.
Below are some example values that you should change to match your `$PROJECT` and use case:

@@ -117,7 +120,7 @@ Below are some example values that you should change to match your `$PROJECT` an

- **Modules** `conda/are`
- *Launch* (click to submit)

-This will launch a `JupyterLab` session with a Session ID, which will appear in the list of interactive sessions. (You can also find it under My Interactive Sessions at the top-left of the ARE window).
+This will launch a JupyterLab session with a Session ID, which will appear in the list of interactive sessions. (You can also find it under My Interactive Sessions at the top-left of the ARE window.)
The session appears blue while it is loading, yellow or red in case of warnings or errors, and green when it is successfully running:

@@ -125,9 +128,9 @@ The session appears blue while it is loading, yellow or red in case of warnings

Example of a successfully started ARE Session
-You can then Open JupyterLab by clicking on the button at the bottom of the session.
+Launch JupyterLab by clicking the Open JupyterLab button at the bottom of the session.
-This will open a window which contains a directory structure on the left and a `Jupyter` notebook on the right, as shown below.
+This will open a window that contains a directory structure on the left and a Jupyter notebook on the right, as shown below.
If you loaded the modules from `hh5` or `xp65`, you should be able to import python packages such as `numpy`, `xarray` or `intake`, as shown below:

diff --git a/docs/model_evaluation/model_evaluation_getting_started/model_variables/index.md b/docs/model_evaluation/model_evaluation_getting_started/model_variables/index.md
index 5fab894af..75c03ec18 100644
--- a/docs/model_evaluation/model_evaluation_getting_started/model_variables/index.md
+++ b/docs/model_evaluation/model_evaluation_getting_started/model_variables/index.md
@@ -19,7 +19,7 @@ Numerous organisations and scientific groups worldwide have adopted a file forma