diff --git a/docs/model_evaluation/index.md b/docs/model_evaluation/index.md index f60191a09..0e02dac79 100644 --- a/docs/model_evaluation/index.md +++ b/docs/model_evaluation/index.md @@ -68,7 +68,7 @@ If you are new to MED and are wondering [*"What is Model Evaluation and Diagnost
COSIMA cookbook
- +
METplus
diff --git a/docs/model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started.md b/docs/model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started.md index e36d505d3..b6c48b535 100644 --- a/docs/model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started.md +++ b/docs/model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started.md @@ -5,11 +5,11 @@ At this stage of *Getting Started*, we assume that you already have access to NC The instructions below explain how to load our curated `python` environment, with packages and scripts which are supported by ACCESS-NRI. Once these instructions have been followed you will be able to use all packages and scripts when running directly on Gadi via `ssh`, in `PBS` scripts, or in JupyterLab. ???+ warning "ACCESS-NRI provides code and support, but not computing resources" - As mentioned in the [Getting Started pages](../../get_started), you do not automatically have access to all of Gadi's storage at `/g/data/`, but need to be part of a `$PROJECT` to see files at `/g/data/$PROJECT`. For model evaluation and diagnostics, you need to be part of projects `xp65` and `hh5` for code access and a project with compute resources. + As mentioned in the [Getting Started pages](../../../getting_started), you do not automatically have access to all of Gadi's storage at `/g/data/`, but need to be part of a `$PROJECT` to see files at `/g/data/$PROJECT`. For model evaluation and diagnostics, you need to be part of projects `xp65` and `hh5` for code access and a project with compute resources. ## What is part of the `access-med` environment? 
-The complete list of dependencies can be found in the [`environment.yml`](https://github.com/ACCESS-NRI/MED-condaenv/blob/main/scripts/environment.yml) file of our [GitHub repository](https://github.com/ACCESS-NRI/MED-condaenv) and includes `intake`, `esmvaltool`, and `ilamb`: +The complete list of dependencies can be found in
this `environment.yml` file of our GitHub repository and includes `intake`, `esmvaltool`, and `ilamb`:
List of packages that are provided as part of the xp65 access-med environment
@@ -79,7 +79,7 @@ The content of `your_code.py` could be as simple as the import and version print qsub example_pbs.sh ``` -In brief: this PBS script will submit a job to Gadi with the job name (`#PBS -N`) *example_pbs* under compute project (`#PBS -P`) `iq82` with a [normal queue](https://opus.nci.org.au/display/Help/Queue+Limits) (`#PBS -q normalbw`), for 1 CPU (`#PBS -l ncpus=1`) with 2 GB RAM (`#PBS -l mem=2GB`), a walltime of 10 minutes (`#PBS -l walltime=00:10:00`) and data storage access to projects `xp65`. Note that for this example to work, you have to be [member of the NCI project](https://my.nci.org.au/mancini/project-search) `xp65` and `iq82`. Adjust the `#PBS -P` option to match your compute project. Upon starting the job, it will change into to the working directory that you submitted the job from (`#PBS -l wd`) and load the access-med conda environment. +In brief: this PBS script will submit a job to Gadi with the job name (`#PBS -N`) *example_pbs* under compute project (`#PBS -P`) `iq82` in the normal queue (`#PBS -q normalbw`), for 1 CPU (`#PBS -l ncpus=1`) with 2 GB RAM (`#PBS -l mem=2GB`), a walltime of 10 minutes (`#PBS -l walltime=00:10:00`) and data storage access to project `xp65`. Note that for this example to work, you have to be a member of the NCI projects `xp65` and `iq82`. Adjust the `#PBS -P` option to match your compute project. Upon starting the job, it will change into the working directory that you submitted the job from (`#PBS -l wd`) and load the access-med conda environment. 
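For reference, a minimal `example_pbs.sh` matching the options described in the paragraph above could look like the following sketch. The `#PBS` directives come straight from the description; the module path is the one used elsewhere in these docs, and the unversioned `conda/access-med` module name is an assumption (check `/g/data/xp65/public/modules` for the current release):

```
#!/bin/bash
#PBS -N example_pbs
#PBS -P iq82
#PBS -q normalbw
#PBS -l ncpus=1
#PBS -l mem=2GB
#PBS -l walltime=00:10:00
#PBS -l storage=gdata/xp65
#PBS -l wd

# Load the ACCESS-NRI curated conda environment (module version assumed)
module use /g/data/xp65/public/modules
module load conda/access-med

python3 your_code.py
```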
## Running our `access-med` environment on NCI's Interactive ARE (JupyterLab) diff --git a/docs/model_evaluation/model_evaluation_model_catalogs/index.md b/docs/model_evaluation/model_evaluation_model_catalogs/index.md index 38b8db5b9..68226ba25 100644 --- a/docs/model_evaluation/model_evaluation_model_catalogs/index.md +++ b/docs/model_evaluation/model_evaluation_model_catalogs/index.md @@ -16,7 +16,7 @@ The ACCESS-NRI catalog is essentially a table of climate data products that exis ## Showcase: use intake to easily find, load and plot data -In this showcase, we'll demonstrate one of the simplest use-cases of the ACCESS-NRI intake catalog: a user wants to plot a timeseries of a variable from a specific data product. Here, the variable is a scalar ocean variable called "temp_global_ave" and the product is an ACCESS-ESM1-5 run called "HI_CN_05", which is an historical run using same configuration as CMIP6 ACCESS-ESM1-5 historical r1i1p1f1, but with phosphorus limitation disabled within CASA-CNP. +In this showcase, we'll demonstrate one of the simplest use-cases of the ACCESS-NRI intake catalog: a user wants to plot a timeseries of a variable from a specific data product. Here, the variable is a scalar ocean variable called "temp_global_ave" and the product is an [ACCESS-ESM1-5](../../models/configurations/access-esm) run called "HI_CN_05", which is a historical run using the same configuration as CMIP6 ACCESS-ESM1-5 historical r1i1p1f1, but with phosphorus limitation disabled within CASA-CNP. 
First we load the catalog using diff --git a/docs/model_evaluation/model_evaluation_observational_catalogs.md b/docs/model_evaluation/model_evaluation_observational_catalogs.md index b4cb72d9c..b8d88823c 100644 --- a/docs/model_evaluation/model_evaluation_observational_catalogs.md +++ b/docs/model_evaluation/model_evaluation_observational_catalogs.md @@ -15,8 +15,8 @@ You can browse and search the available data collections on the NCI Data Collect Examples of the NCI Data Collections include: -- Data of the Earth Systems Grid Federation hosted at the NCI ESGF Node -- Data of the fifth generation of ECMWF atmospheric reanalyses (ERA5) with more information on the NCI ERA5 Community Page +- Data of the Earth System Grid Federation hosted at the NCI ESGF Node +- Data of the fifth generation of ECMWF atmospheric reanalyses (ERA5) with more information on the NCI ERA5 Community Page - Data of the European Space Agency’s multi-petabyte Sentinel satellite data via the Sentinel Australasia Regional Access (SARA) NCI also provides a user guide for finding, accessing, and citing data here. diff --git a/docs/model_evaluation/model_evaluation_on_gadi/index.md b/docs/model_evaluation/model_evaluation_on_gadi/index.md index e487b1c1b..17f0a33a6 100644 --- a/docs/model_evaluation/model_evaluation_on_gadi/index.md +++ b/docs/model_evaluation/model_evaluation_on_gadi/index.md @@ -30,6 +30,6 @@ At the moment, we are providing support for the following model evaluation fr -The best way to get our help is by raising an issue on the [community forum](https://forum.access-hive.org.au/) with tags `help` and another tag for the specific framework. +The best way to get our help is by raising an issue on the community forum with tags `help` and another tag for the specific framework. 
In the future, we also aim to support a broader range of frameworks and recipes (see [our community resource lists](../../community_resources/community_med/index.md) for this collection). \ No newline at end of file diff --git a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_esmvaltool.md b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_esmvaltool.md index aeb711f9a..79c5a2918 100644 --- a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_esmvaltool.md +++ b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_esmvaltool.md @@ -22,9 +22,9 @@ The Earth System Model Evaluation Tool (ESMValTool) is a community-development that aims at improving diagnosing and understanding of the causes and effects of model biases and inter-model spread. The ESMValTool mainly focuses on evaluating results from the Coupled Model Intercomparison Project (CMIP) ensemble. The goal is to build a common framework for the evaluation of Earth System Models (ESMs) against observations available through the Earth System Grid Federation (ESGF) in standard formats (obs4MIPs) or made available at ESGF nodes. -More information on ESMValTool scope is available in the extensive [ESMValTool documentation][esmvaltool-doc]. +More information on ESMValTool scope is available in the extensive ESMValTool documentation. -ACCESS-NRI provides access to the latest version of ESMValTool via the xp65 access-med conda environment deployed on NCI-Gadi. +ACCESS-NRI provides access to the latest version of ESMValTool via the `xp65` access-med conda environment deployed on NCI-Gadi. Our plan is to routinely run benchmarks and comparisons of the ACCESS models' CMIP submissions. We will also provide support for running recipes on NCI-Gadi. 
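Putting the two pieces together, running an ESMValTool example recipe from the `xp65` environment could look like the following sketch. The `esmvaltool run` invocation appears later in this page; the module path and the unversioned `conda/access-med` module name are assumptions based on the getting-started instructions:

```
# Load the access-med environment from the xp65 project (module name assumed)
module use /g/data/xp65/public/modules
module load conda/access-med

# Run a bundled example recipe, fetching missing data from ESGF if needed
esmvaltool run examples/recipe_python.yml --search_esgf=when_missing
```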
## Running `esmvaltool` on Gadi @@ -61,15 +61,15 @@ esmvaltool run examples/recipe_python.yml --search_esgf=when_missing ## Support -ACCESS and NCI-Gadi users can get help from ACCESS-NRI for running their recipe on Gadi via Github Issue on the [ESMValTool-Workflow][esmvaltool-workflow-repository] github repository or by opening a thread on the [ACCESS-Hive Forum][access-hive]. +ACCESS and NCI-Gadi users can get help from ACCESS-NRI for running their recipe on Gadi via a GitHub issue on the ESMValTool-Workflow GitHub repository or by opening a thread on the ACCESS-Hive Forum. -General support for ESMValTool (non-specific to NCI-Gadi) can be found in [ESMValTool Discussions page][esmvaltool-discussions] where users can open an issue and a member of the User Engagement Team of ESMValTool will reply as soon as possible. This is open for all general and technical questions on the ESMValTool: installation, application, development, or any other question or comment you may have. +General support for ESMValTool (non-specific to NCI-Gadi) can be found on the ESMValTool Discussions page, where users can open an issue and a member of the User Engagement Team of ESMValTool will reply as soon as possible. This is open for all general and technical questions on the ESMValTool: installation, application, development, or any other question or comment you may have. ### Recipes and diagnostics -Contacts for specific diagnostic sets are the respective authors, as listed in the corresponding [recipe and diagnostic documentation][esmvaltool-recipe-list] and in the source code. +Contacts for specific diagnostic sets are the respective authors, as listed in the corresponding recipe and diagnostic documentation and in the source code. 
-The current status of ESMValTool recipes for the xp64 conda environment is available [here][esmvaltool-workflow-repository] +The current status of ESMValTool recipes for the `xp65` conda environment is available here ## License @@ -79,12 +79,11 @@ The ESMValTool is released under the Apache License, version 2.0. Citation of th Besides the above citation, users are kindly asked to register any journal articles (or other scientific documents) that use the software at the ESMValTool webpage (http://www.esmvaltool.org/). Citing the Software Documentation Paper and registering your paper(s) will serve to document the scientific impact of the Software, which is of vital importance for securing future funding. You should consider this an obligation if you have taken advantage of the ESMValTool, which represents the end product of considerable effort by the development team. - ## ESMValTool recipe examples -To find the available recipes, please go see the [ACCESS ESMValTool Worflow recipe status][esmvaltool-workflow-repository] +To find the available recipes, please see the ACCESS ESMValTool Workflow recipe status Below we showcase example recipes from `esmvaltool` that we are providing to run on Gadi: @@ -217,12 +216,11 @@ Below we showcase example recipes from `esmvaltool` that we are providing to run --> - -[esmvaltool-web]: https://www.esmvaltool.org/ -[esmvaltool-doc]: https://docs.esmvaltool.org/en/latest + diff --git a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb.md b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb.md index fa57020cc..dcf56fdb9 100644 --- a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb.md +++ b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb.md @@ -61,261 +61,8 @@ For our showcase, however, we are comparing the ACCESS Earth System Model versio We have performed a large amount of benchmark comparisons that were defined in
the configuration file. We have organised the comparison of variables under different sections, like the Hydrology Cycle. For different variables, like the gross primary productivity `gpp`, we can compare to one or more datasets, like the gross primary productivity measurements of FLUXNET2015. Clicking on a row of the table will expand it to reveal the underlying datasets used. In the below table, the colormap extends from best values in purple to worse data in orange. -

Starting side of ilamb output

+

Starting side of ILAMB output

Clicking on one of these datasets, for example CERESed4.1, will take you to an interactive and quantitative comparison page for Albedo measurements of the Clouds and the Earth’s Radiant Energy System (CERES) project: -### 1.2 ILAMB_ROOT/DATA - -An extensive colletion of DATA is provided in the `ct11` project. You need to have [joined the project on NCI](https://my.nci.org.au/mancini/project-search) to get access to this data. - -To create a symbolic link to this data, use the bash command -``` -ln -s /g/data/ct11/access-nri/replicas/ILAMB/* $ILAMB_ROOT/DATA/ -ln -s /g/data/ct11/access-nri/replicas/IOMB/* $ILAMB_ROOT/DATA/ -``` -Note that the directory `WOA2018` is an overlapping catalogue (you can ignore the warning that a link already exists). -For more information on the data sets, please visit the the `ilamb` [dataset website](https://www.ilamb.org/datasets.html). - -### 1.3 ILMAB_ROOT/MODEL - -Similar to the observational data, we recommend to create symbolic link to model data within the $ILAMB_ROOT/MODEL directory. You can find models by searching our currated `intake` catalog [here](../model_evaluation_model_catalogs/index.md). To add model data (in our example models of the ACCESS-ESM1.5) for the analysis with `ilamb`, you need to do the following: -``` -ln -s /g/data/fs38/publications/CMIP6/CMIP/CSIRO/ACCESS-ESM1-5/historical/r* $ILAMB_ROOT/MODELS -``` - -This will allow you, to simply use the `--model_root $ILAMB_ROOT/MODELS` keyword when using `ilamb`. - -Note that these different models have a lot of subdirectories, which are important to keep in mind when defining the `source` parameter in your `ilamb` `.cfg` file. Note that the `ilamb` files will end with `*.nc*. 
For example, one of the *rsus* files for run `r10i1p1f1` can be found (and used for `.cfg` under -``` -source = /g/data/fs38/publications/CMIP6/CMIP/CSIRO/ACCESS-ESM1-5/historical/r1i1p1f1/Amon/rsus/gn/files/d20191115/rsus_Amon_ACCESS-ESM1-5_historical_r1i1p1f1_gn_185001-201412.nc -``` -or shorter -``` -source = $ILAMB_ROOT/MODELS/r1i1p1f1/Amon/rsus/gn/files/d20191115/rsus_Amon_ACCESS-ESM1-5_historical_r1i1p1f1_gn_185001-201412.nc -``` - -## 2 Setup Files - -At the beginning, we showed you the default call of `ilamb` via -``` -ilamb-run --config config.cfg --model_setup modelroute.txt --regions regions.txt -``` - -Here, we explain how you can setup all these files that are called via `--config`, `--model_setup`, and `--regions`. - -### 2.1 `config` files - -Now that we have the data, we need to setup a `config` file which the `ilamb` package will use to initiate a benchmark study. - -`ilamb` provides default config files [here](https://github.com/rubisco-sfa/ILAMB/tree/master/src/ILAMB/data). - -Below we explain both which variables you can define, but start by showing you the minimum setup from the [tutorial's](https://www.ilamb.org/doc/first_steps.html). `sample.cfg` [file](https://github.com/rubisco-sfa/ILAMB/blob/master/src/ILAMB/data/sample.cfg): - -**Minimum configure file with a direct and a derived variable** - -``` -# This configure file specifies the variables - -[h1: Radiation and Energy Cycle] - -[h2: Surface Upward SW Radiation] -variable = "rsus" - -[CERES] -source = "DATA/rsus/CERES/rsus_0.5x0.5.nc" - -[h2: Albedo] -variable = "albedo" -derived = "rsus/rsds" - -[CERES] -source = "DATA/albedo/CERES/albedo_0.5x0.5.nc" -``` - -In brief: This file allows you to create different header descriptions of the experiments (`h1`: top level for grouping of variables, `h2`: sub-level for each variable), but most importantly the `variable`s that we will look into and their `source`. 
In the eaxmple, `rsus` (*Surface Upward Shortwave Radiation*) and `albedo` are the used variables. The latter is actually derived from two variables by dividing the *Surface Upward Shortwave Radiation* by the *Surface Downward Shortwave Radiation*, `derived = rsus/rsds`. Finally, sources are defined as `source` with a text-font header without `h1` or `h2`. - -**Changing configure file variables** - -This is the list of all the variables you can modify in config file: -``` -source = None -#Full path to the observational dataset - -cmap = "jet" -#The colormap to use in rendering plots (default is 'jet') - -variable = None -#Name of the variable to extract from the source dataset - -alternate_vars = None -#Other accepted variable names when extracting from models - -derived = None -#An algebraic expression which captures how the confrontation variable may be generated - -land = False -#Enable to force the masking of areas with no land (default is False) - -bgcolor = "#EDEDED" -#Background color - -table_unit = None -#The unit to use when displaying output in tables on the HTML page - -plot_unit = None -#The unit to use when displaying output on plots on the HTML page - -space_mean = True -#Disable to compute sums of the variable over space instead of mean values - -relationships = None -#A list of confrontations with whose data we use to study relationships, the syntax is "h2 tag/observational dataset". You will see the relationship part in the output if you specify some relationship. - -ctype = None -#Choose a specific Confrontion class. 
- -regions = None -#Specify the regions of confrontation - -skip_rmse = False -#akip rmse in program - -skip_iav = True -#Ship iav in program - -mass_weighting = False -#if switch to true, using an average data in a period to normalize - -weight = 1 -# if a dataset has no weight specified, it is implicitly 1 - -``` - -### 2.2 `model_setup` file instead of `model_root` - -If you plan to run only a specific subset of models, you can already define them in a `modelroute.txt` file that is then called via the -``` ---model_setup modelroute.txt -``` -instead of using `--model_root` - -It could look like our specific example for running different versions (1, 2, and 3) of the ACCESS-ESM 1.5 suite: - -``` -# Model Name , ABOSLUTE/PATH/TO/MODELS , EXTRA COMMANDS -ACCESS_ESM1-5-r1i1p1f1 , MODELS/r1i1p1f1 , CMIP6 -ACCESS_ESM1-5-r2i1p1f1 , MODELS/r2i1p1f1 , CMIP6 -ACCESS_ESM1-5-r3i1p1f1 , MODELS/r3i1p1f1 , CMIP6 -... (abbreviated) -``` - -### 2.3 Other setup options - -There are many options to adjust the `ilamb` setup. You can find them on this [`ilamb` website](https://www.ilamb.org/doc/ilamb_run.html), including the `--build_dir` option to change the output directory. - -We want to specifically point towards the option to limit the analysis region. 
You can either define a region yourself or use predefined regions the following for Australia: -``` ---regions aust -``` - -## 3 Portable Batch System (PBS) jobs on NCI - -The following default PBS file, let's call it `ilamb_test.sh` can help you to setup your own, while making sure to use the correct project (#PBS -P) to charge your computing cost to: -``` -#!/bin/bash - -#PBS -N default_ilamb -#PBS -P tm70 -#PBS -q normalbw -#PBS -l ncpus=1 -#PBS -l mem=32GB -#PBS -l jobfs=10GB -#PBS -l walltime=00:10:00 -#PBS -l storage=gdata/xp65+gdata/ct11+gdata/fs38 -#PBS -l wd - -module use /g/data/xp65/public/modules -module load conda/access-med-0.1 - -export ILAMB_ROOT=$PWD/ILAMB_ROOT -export CARTOPY_DATA_DIR=/g/data/xp65/public/apps/cartopy-data - -ilamb-run --config cmip.cfg --model_setup $PWD/modelroute.txt --regions global -``` - -
-If you are not familiar with PBS jobs on NCI, you can find NCI's guide here. -
- -In brief: this PBS script (which you can submit via the bash command `qsub ilamb_test.sh`), will submit a job to Gadi with the job name (`#PBS -N`) *default_ilamb* under project (`#PBS -P`) `tm70` with a normal queue (`#PBS -q normalbw`), for 1 CPU (`#PBS -l ncpus=1`) with 32 GB RAM (`#PBS -l mem=32GB`), with an walltime of 10 hours (`#PBS -l walltime=00:10:00`) and access to 10 GB local disk space (`#PBS -l jobfs=10GB`) as well as data storage access to projects `xp65`, `ct11`, and `fs38` (again, note that you have to be [member of both projects on NCI](https://my.nci.org.au/mancini/project-search). Upon starting the job, it will change into to the working directory that you started the job from (`#PBS -l wd`) and load the access-med conda environment. Finally, it will export the $ILAMB_ROOT as well as $ARTOPY_DATA_DIR paths and start an `ilamb-run`. - -In our example, we actually run the `cmip.cfg` file from the `ilamb` [config file github repository](https://github.com/rubisco-sfa/ILAMB/blob/master/src/ILAMB/data/) for files spec - -Note: If your ILAMB_ROOT and CARTOPY_DATA_DIR are not in your directory from where you submitted the job from, then you need to adjust the export commands to their path -``` -export ILAMB_ROOT=/absolute/path/where/ILAMB_ROOT/actually/is -export CARTOPY_DATA_DIR=/absolute/path/where/shapefiles/actually/are -``` - -Once the jobs are finished, you can again inspect the outcome via an http server as described at the top of this tutorial - - -## 5 An example `ilamb` run - -When running, `ilamb` will the example configuration file that we provided above via -``` -ilamb-run --config sample.cfg --model_root $ILAMB_ROOT/MODELS/ --regions global -``` -`ilamb` should print the following messages while computing: -``` -Searching for model results in /Users/ncf/sandbox/ILAMB_sample/MODELS/ - - CLM40cn - -Parsing config file sample.cfg... 
- - SurfaceUpwardSWRadiation/CERES Initialized - Albedo/CERES Initialized - -Running model-confrontation pairs... - - SurfaceUpwardSWRadiation/CERES CLM40cn Completed 37.3 s - Albedo/CERES CLM40cn Completed 44.7 s - -Finishing post-processing which requires collectives... - - SurfaceUpwardSWRadiation/CERES CLM40cn Completed 3.3 s - Albedo/CERES CLM40cn Completed 3.3 s - -Completed in 91.8 s -``` -What happened here? First, the script looks for model results in the directory you specified in the `--model_root` option. It will treat each subdirectory of the specified directory as a separate model result. Here since we only have one such directory, `CLM40cn`, it found that and set it up as a model in the system. Next it parsed the configure file we examined earlier. We see that it found the CERES data source for both variables as we specified it. If the source data was not found or some other problem was encountered, the green `Initialized` will appear as red text which explains what the problem was (most likely `MisplacedData`). If you encounter this error, make sure that `ILAMB_ROOT` is set correctly and that the data really is in the paths you specified in the configure file. - -Next we ran all model-confrontation pairs. In our parlance, a confrontation is a benchmark observational dataset and its accompanying analsys. We have two confrontations specified in our configure file and one model, so we have two entries here. If the analysis completed without error, you will see a green `Completed` text appear along with the runtime. Here we see that `albedo` took a few seconds longer than `rsus`, presumably because we had the additional burden of reading in two datasets and combining them. - -The next stage is the post-processing. This is done as a separate loop to exploit some parallelism. All the work in a model-confrontation pair is purely local to the pair. 
Yet plotting results on the same scale implies that we know the maxmimum and minimum values from all models and thus requires the communcation of this information. Here, as we are plotting only over the globe and not extra regions, the plotting occurs quickly. - -## 6. Fix your setup with ilamb_doctor - -`ilamb_doctor ` is a script you can use to diagnosing some missing model values or what is incorrect or missing from a given analysis. It takes options similar to `ilamb-run` and is used in the following way: -```[ILAMB/test]$ ilamb-doctor --config test.cfg --model_root ${ILAMB_ROOT}/MODELS/CLM - -Searching for model results in /Users/ncf/ILAMB//MODELS/CLM - - CLM40n16r228 - CLM45n16r228 - CLM50n18r229 - -We will now look in each model for the variables in the ILAMB -configure file you specified (test.cfg). The color green is used to reflect -which variables were found in the model. The color red is used to -reflect that a model is missing a required variable. - - Biomass/GlobalCarbon CLM40n16r228 biomass or cVeg - ... (abbreviated) - Precipitation/GPCP2 CLM50n18r229 pr -``` -Here we have run the command on some inputs in our test directory. You will see a list of the confrontations we run and the variables which are required or their synonyms. What is missing in this tutorial is the text coloring which will indicate if a given model has the required variables. - -We have finish the introduction of basic `ilamb` usage. We believe you have some understanding of `ilamb` and cont wait to use it. if you still have any question or you want some developer level support, you can find more detail in their [official tutorial](https://www.ilamb.org/doc/tutorial.html). +

Comparison of different ILAMB outputs

\ No newline at end of file diff --git a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_metplus.md b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_metplus.md index 7027d8e1e..f5cf8e41f 100644 --- a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_metplus.md +++ b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_metplus.md @@ -1,6 +1,6 @@ # `METplus` on Gadi at NCI -[METplus](https://dtcenter.org/community-code/metplus) is the enhanced Model Evaluation Tools (METplus) verification system. +METplus is the enhanced Model Evaluation Tools (METplus) verification system. ???+ warning "Support Level: Supported on Gadi, but not owned by ACCESS-NRI" @@ -9,7 +9,7 @@ ACCESS-NRI does not own the code of METplus, but actively supports the use of METplus on Gadi. ACCESS-NRI provides access to the latest version of METplus via the `access` conda environment deployed on NCI-Gadi. -For detailed information, tutorials and more of [METplus](https://metplus.readthedocs.io/en/latest/index.html), please go to the +For detailed information, tutorials and more on METplus, please go to the
@@ -21,7 +21,7 @@ For detailed information, tutorials and more of [METplus](https://metplus.readth ## What is METplus? -[METplus](https://dtcenter.org/community-code/metplus) is a verification framework that spans a wide range of temporal (warn-on-forecast to climate) and spatial (storm to global) scales. It is intended to be extensible through additional capability developed by the community The core components of the framework include the [Model Evaluation Tools (MET)](https://met.readthedocs.io/en/latest/), the associated database and display systems called METviewer and METexpress, and a suite of Python wrappers to provide low-level automation and examples, also called use-cases. METplus will be a component of NOAA's Unified Forecast System (UFS) cross-cutting infrastructure as well as NCAR's System for Integrated Modeling of the Atmosphere (SIMA). +METplus is a verification framework that spans a wide range of temporal (warn-on-forecast to climate) and spatial (storm to global) scales. It is intended to be extensible through additional capability developed by the community. The core components of the framework include the Model Evaluation Tools (MET), the associated database and display systems called METviewer and METexpress, and a suite of Python wrappers to provide low-level automation and examples, also called use-cases. METplus will be a component of NOAA's Unified Forecast System (UFS) cross-cutting infrastructure as well as NCAR's System for Integrated Modeling of the Atmosphere (SIMA). ## Showcase of METplus 5.0 @@ -31,7 +31,7 @@ module use /g/data/access/ngm/modules module load envs/metplus/5.0 ``` -1. Download the sample data from [https://dtcenter.ucar.edu/dfiles/code/METplus/METplus_Data/v5.0/sample_data-met_tool_wrapper-5.0.tgz](https://dtcenter.ucar.edu/dfiles/code/METplus/METplus_Data/v5.0/sample_data-met_tool_wrapper-5.0.tgz) and untar into a directory on Gadi, for example `~/METplus`. +1. 
Download the sample data from https://dtcenter.ucar.edu/dfiles/code/METplus/METplus_Data/v5.0/sample_data-met_tool_wrapper-5.0.tgz and untar into a directory on Gadi, for example `~/METplus`. 2. Create a configuration file `local.conf` containing the input and output paths, for example `INPUT_BASE=~/METplus`. @@ -41,7 +41,7 @@ INPUT_BASE=/path/to/metplus_inputs OUTPUT_BASE=/path/to/outputs ``` -3. Save the demo configuration (e.g. `ASCII2NC.conf` from [this METPlus example](https://metplus.readthedocs.io/en/latest/generated/met_tool_wrapper/ASCII2NC/ASCII2NC.html#sphx-glr-generated-met-tool-wrapper-ascii2nc-ascii2nc-py)) to a local file +3. Save the demo configuration (e.g. `ASCII2NC.conf` from this METplus example) to a local file 4. Run METplus passing it both local.conf and the demo configuration diff --git a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_pangeo_cosima.md b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_pangeo_cosima.md index d056b304f..95176ad54 100644 --- a/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_pangeo_cosima.md +++ b/docs/model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_pangeo_cosima.md @@ -9,17 +9,17 @@ COSIMA is the Consortium for Ocean-Sea Ice Modelling in Australia, which brings together Australian researchers involved in global ocean and sea ice modelling. The consortium provides a collection of `cosima-recipes` for the evaluation of ocean-sea ice modelling that are curated for you on Gadi. -The COSIMA Cookbook is a framework for analysing output from ocean-sea ice models. The focus is on the [ACCESS-OM2](../../models/configurations/access-om.md) suite of models being developed and run by members of [COSIMA]((http://cosima.org.au/)). But this framework is suited to analysing any MOM5/MOM6 output, as well as output from other models. +The COSIMA Cookbook is a framework for analysing output from ocean-sea ice models. 
The focus is on the [ACCESS-OM2](../../models/configurations/access-om.md) suite of models being developed and run by members of COSIMA. But this framework is suited to analysing any MOM5/MOM6 output, as well as output from other models. ## Getting Started -The easiest way to use the COSIMA Cookbook is through the [Australian Research Environment (ARE)](https://are.nci.org.au) access of the [National Computational Infrastructure](https://nci.org.au). Here, we assume that you already [got started](../../getting_started/index.md), that is, you have an NCI account and can log onto Gadi via secure shell (ssh). +The easiest way to use the COSIMA Cookbook is through the Australian Research Environment (ARE) of the National Computational Infrastructure (NCI). Here, we assume that you already [got started](../../../getting_started), that is, you have an NCI account and can log onto Gadi via secure shell (ssh). -To use the COSIMA Cookbook that is preinstalled in the `conda/analysis3` of NCI proejct `hh5`, you need to [join NCI project `hh5`](https://my.nci.org.au/mancini/project/hh5). +To use the COSIMA Cookbook that is preinstalled in the `conda/analysis3` environment of NCI project `hh5`, you need to join NCI project `hh5`. -1. Log onto Gadi via secure shell (ssh) and clone the cosima-recipes repository to your local file space. +1. Log onto Gadi via secure shell (ssh) and clone the cosima-recipes repository to your local file space. 2. Check out the recipes that you want to run, and make sure that you have access to the specific projects and their storage (e.g. project `ik11` to get access to `/g/data/ik11`). -3. Start an [ARE JupyterLab session on NCI](https://are.nci.org.au): +3. 
Start an ARE JupyterLab session on NCI: **Storage**: gdata/hh5 (add the specific storage that you need for the recipes you want to run) **Module directories**: /g/data/hh5/public/modules **Modules**: conda/analysis3 @@ -28,10 +28,10 @@ To use the COSIMA Cookbook that is preinstalled in the `conda/analysis3` of NCI ## More information about the Cookbook -For more information, we refer to the [Cookbook github repository](https://github.com/COSIMA/cosima-cookbook) as well as a list of recipes: +For more information, we refer to the Cookbook GitHub repository as well as a list of recipes: -- [Tutorials](https://github.com/COSIMA/cosima-recipes/tree/main/Tutorials), -- [Notebooks](https://github.com/COSIMA/cosima-recipes/tree/main/ACCESS-OM2-GMD-Paper-Figs) to reproduce figures of the [ACCESS-OM2 announcement paper](https://gmd.copernicus.org/articles/13/401/2020/), -- [Documented Example](https://github.com/COSIMA/cosima-recipes/tree/main/DocumentedExamples), and -- [Contributed Examples](https://github.com/COSIMA/cosima-recipes/tree/main/ContributedExamples) +- Tutorials, +- Notebooks to reproduce figures of the ACCESS-OM2 announcement paper, +- Documented Examples, and +- Contributed Examples
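The numbered getting-started steps above can be condensed into a short sketch. The repository URL is taken from the COSIMA GitHub organisation used in the links above; the ARE settings are the ones listed in step 3 and are repeated here as comments for easy copying:

```
# Step 1: on Gadi, clone the COSIMA recipes to your local file space
git clone https://github.com/COSIMA/cosima-recipes.git

# Step 3: start an ARE JupyterLab session (https://are.nci.org.au) with:
#   Storage:            gdata/hh5 (plus any storage the recipe needs, e.g. gdata/ik11)
#   Module directories: /g/data/hh5/public/modules
#   Modules:            conda/analysis3
```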