diff --git a/development_site/models/model_components/coupler/index.html b/development_site/models/model_components/coupler/index.html index 8e1165d39..bd55a2cd1 100644 --- a/development_site/models/model_components/coupler/index.html +++ b/development_site/models/model_components/coupler/index.html @@ -1097,7 +1097,7 @@
  • - NUOPC interoperability layer {{ recommended }} + NUOPC Interoperability Layer {{ recommended }}
  • @@ -2353,7 +2353,7 @@
  • - NUOPC interoperability layer {{ recommended }} + NUOPC Interoperability Layer {{ recommended }}
  • @@ -2394,17 +2394,18 @@

Coupler {{ supported }}

    -

    A coupler is a software used to perform simulations with different model components at the same time. The coupler enables the different model components to exchange information during the simulation.

    +

    A coupler is a software package that allows synchronised exchanges of coupling information between numerical codes representing different components of the climate system.

    OASIS3-MCT {{ supported }}

    -

    OASIS3-MCT consists of the OASIS coupler interfaced with the Model Coupling Toolkit (MCT) from the Argonne National Laboratory. OASIS3-MCT is the coupler used for:

    +

OASIS3-MCT is the version of the Ocean Atmosphere Sea Ice Soil (OASIS) coupler interfaced with the Model Coupling Toolkit (MCT) from the Argonne National Laboratory. OASIS3-MCT is the coupler used in the following configurations:

    - -

    The NUOPC interoperability layer is distributed via the Earth System Modelling Framework (ESMF). It is a coupler developed by the National Unified Operational Prediction Capability (NUOPC), a consortium of Navy (USA), NOAA and Air Force (USA) modelers.

    + +

The NUOPC (National Unified Operational Prediction Capability) Interoperability Layer defines conventions and a set of generic components for building coupled models using the Earth System Modeling Framework (ESMF).

    +

ACCESS-OM3, a configuration currently under development, uses NUOPC to couple its MOM6 and CICE6 model components, as these components have no OASIS coupling interfaces.


diff --git a/development_site/search/search_index.json b/development_site/search/search_index.json index 0ed7d5ea1..f01fe6864 100644 --- a/development_site/search/search_index.json +++ b/development_site/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Index","text":""},{"location":"#welcome-to-access-hive","title":"Welcome to ACCESS-Hive","text":"ACCESS-Hive is a portal to all documentation relevant to the Australian Community Climate and Earth System Simulator, ACCESS, and the wider ACCESS community. ACCESS-Hive is developed for and by the ACCESS community following an open-source development model."},{"location":"#navigating-access-hive","title":"Navigating ACCESS-Hive","text":"Models Run a Model Model Evaluation Community Resources Community Forum"},{"location":"#about","title":"About","text":"The documentation on Hive is a work in progress!

    The ACCESS-Hive is a community resource that is a work in progress. We\u2019d love to receive your contribution. Please see the contributing guidelines below for how to make contributions to the Hive page content. You can also open an issue highlighting any content you\u2019d like us to provide but aren\u2019t able to contribute yourself.

    "},{"location":"#support","title":"Support","text":"

    There is a system of tags to identify who supports the linked documentation or software, and the level of support you can expect:

    • Supported by ACCESS-NRI {{ supported }}

    • Recommended by ACCESS-NRI {{ recommended }}

    • Community contributed {{ community }}

See the support page for details about the support levels: what is supported, by whom, and how to access help.

    "},{"location":"#contribute-to-access-hive-1","title":"Contribute to ACCESS-Hive 1","text":"Contribute Join the ACCESS-Hive team and have your contributions onboard!"},{"location":"#acknowledgement-of-country","title":"Acknowledgement of Country","text":"

    We at ACCESS-NRI acknowledge the Traditional Owners of the land on which our research infrastructure and community operate across Australia and pay our respects to Elders past and present. We recognise the thousands of years of accumulated knowledge and deep connection they have with all the Earth systems we simulate.2

    1. Image by pch.vector on Freepik\u00a0\u21a9

    2. Photo by Ren\u00e9 Riegal on Unsplash \u21a9

    "},{"location":"call_contribute/","title":"Call contribute","text":"The documentation on Hive is work in progress!

    The ACCESS-Hive is a community resource that is a work in progress. We\u2019d love to receive your contribution. Please see the contributing guidelines below for how to make contributions to the Hive page content. You can also open an issue highlighting any content you\u2019d like us to provide but aren\u2019t able to contribute yourself.

    "},{"location":"about/License/","title":"License","text":"

    The ACCESS-Hive is made available under the Creative Commons Attribution license. The following is a human-readable summary of (and not a substitute for) the full legal text of the CC BY 4.0 license.

    You are free:

    • to Share---copy and redistribute the material in any medium or format
    • to Adapt---remix, transform, and build upon the material

    for any purpose, even commercially.

    The licensor cannot revoke these freedoms as long as you follow the license terms.

    Under the following terms:

    • Attribution---You must give appropriate credit (mentioning that your work is derived from work that is Copyright \u00a9 ACCESS-NRI and, where practical, linking to https://www.access-nri.org.au/), provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

    • No additional restrictions---You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. With the understanding that:

    Notices:

    • You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
    • No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.
    "},{"location":"about/code_of_conduct/","title":"Code of Conduct","text":"

ACCESS-Hive is an open, community-supported effort. For it to be successful, it must be a welcoming and inclusive space so that everyone in the community feels able to contribute.

To ensure this is the case, users and contributors to ACCESS-Hive and the ACCESS-Hive Forum are required to abide by the ACCESS-NRI code of conduct.

    "},{"location":"about/contact/","title":"Contact","text":"

ACCESS-Hive is an initiative of The Australian Earth-System Simulator (ACCESS-NRI).

ACCESS-Hive is an open, community-supported effort. The underpinning infrastructure is provided by ACCESS-NRI, but much of the content is provided by the ACCESS Community.

If there are problems or queries about the content of ACCESS-Hive, check if there is already a relevant open issue on the ACCESS-Hive GitHub repository, and open one if not.

    Check the support page for information on what is supported and by whom.

Join the ACCESS-Hive forum to find previous related discussions about the Hive or the resources listed here, or start your own and make contact with your community.

Otherwise, contact ACCESS-NRI directly. Full contact details for ACCESS-NRI are available on the ACCESS-NRI website contact page.

    "},{"location":"about/contact/#other-places-where-you-can-find-the-access-nri-team","title":"Other places where you can find the ACCESS-NRI team:","text":"

ACCESS-Hive Forum

ACCESS-NRI GitHub

@ACCESS_NRI twitter

access_nri LinkedIn

access-nri.slack.com

    "},{"location":"about/policies/","title":"Policies {{ supported }}","text":"
• Procedures and Practices: Contains documents outlining how ACCESS-NRI will function. These documents describe what users can expect and justify decisions against criteria based on the values of the organisation.
    "},{"location":"about/support/","title":"Support","text":""},{"location":"about/support/#support-levels","title":"Support levels","text":"

    The site uses a system of tags to identify who supports the linked documentation or software, and the level of support you can expect:

    Supported by ACCESS-NRI {{ supported }}

    Documentation that is actively maintained and supported by ACCESS-NRI. This is documentation that was either created by ACCESS-NRI, or it is existing documentation for which ACCESS-NRI has taken over responsibility.

    Recommended by ACCESS-NRI {{ recommended }}

    Documentation for third-party software that ACCESS-NRI recommends and facilitates the use of at NCI as a service to the community. This means ACCESS-NRI supports the infrastructure required to run the software, but not necessarily the software itself.

    Community contributed {{ community }}

    Documentation that is of use to the community, but is not explicitly endorsed or supported by ACCESS-NRI.

    "},{"location":"about/support/#how-to-get-help","title":"How to get help","text":"

Each entry on ACCESS-Hive links to another website. There should be information on how to get help on the linked site. If there are no obvious channels for help, or the help is not adequate, consider asking for assistance from fellow members of your community on the ACCESS-Hive forum.

    In the case of ACCESS-NRI supported documentation and software, marked {{ supported }}, if there is no information on how to get help, or your query is not appropriate for the support channels provided, please either ask on the ACCESS-Hive forum or contact ACCESS-NRI directly.

    "},{"location":"community_resources/","title":"Community Resources","text":"

In this area of the Hive, we collect content that is not curated by us but may be helpful for the community. You can contribute to this part of the Hive too!

Currently, we provide pointers to the following categories: Working Groups, Glossaries, Variables, Model Evaluation Links, Training and Events.

    "},{"location":"community_resources/community_data_processing/","title":"Community Processing Data Processing Tools","text":""},{"location":"community_resources/community_data_processing/#tools","title":"Tools","text":""},{"location":"community_resources/community_data_processing/#kerchunk-community","title":"Kerchunk {{ community }}","text":"

    Documentation | Sources

    Kerchunk is a library that provides a unified way to represent a variety of chunked, compressed data formats (e.g. NetCDF/HDF5, GRIB2, TIFF, \u2026), allowing efficient access to the data from traditional file systems or cloud object storage. It also provides a flexible way to create virtual datasets from multiple files.
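As a rough sketch of the single-file workflow (the input filename below is hypothetical), a NetCDF/HDF5 file can be indexed once with kerchunk and then opened lazily through xarray's zarr engine:

import fsspec
import xarray as xr
from kerchunk.hdf import SingleHdf5ToZarr

src = "ocean_daily.nc"  # hypothetical NetCDF/HDF5 file
with fsspec.open(src) as f:
    refs = SingleHdf5ToZarr(f, src).translate()  # build the byte-range reference index

# open the references as a virtual zarr store, without copying the data
ds = xr.open_dataset(
    "reference://", engine="zarr",
    backend_kwargs={"consolidated": False, "storage_options": {"fo": refs}},
)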

    "},{"location":"community_resources/community_data_processing/#cmor3-community","title":"CMOR3 {{ community }}","text":"

    Climate Model Output Rewriter Version 3

    Documentation | Sources

    CMOR is used to produce CF-compliant netCDF files. The structure of the files created by CMOR and the metadata they contain fulfill the requirements of many of the climate community\u2019s standard model experiments (which are referred to here as \u201cMIPs\u201d and include, for example, AMIP, PMIP, APE, and IPCC scenario runs).
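A minimal sketch of the CMOR3 Python workflow (the metadata file, table and values below are hypothetical; a real run needs valid dataset metadata for the target MIP):

import cmor
import numpy as np

cmor.setup(inpath="Tables", netcdf_file_action=cmor.CMOR_REPLACE)
cmor.dataset_json("dataset_input.json")  # hypothetical experiment metadata file
cmor.load_table("CMIP6_Amon.json")

# toy single-cell, single-timestep grid
time = cmor.axis(table_entry="time", units="days since 2000-01-01",
                 coord_vals=np.array([15.5]), cell_bounds=np.array([0.0, 31.0]))
lat = cmor.axis(table_entry="latitude", units="degrees_north",
                coord_vals=np.array([0.0]), cell_bounds=np.array([-1.0, 1.0]))
lon = cmor.axis(table_entry="longitude", units="degrees_east",
                coord_vals=np.array([0.0]), cell_bounds=np.array([-1.0, 1.0]))

pr = cmor.variable(table_entry="pr", units="kg m-2 s-1", axis_ids=[time, lat, lon])
cmor.write(pr, np.full((1, 1, 1), 3.0e-5))  # writes a CF-compliant netCDF file
cmor.close()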

    "},{"location":"community_resources/community_data_processing/#xmip-community","title":"xMIP {{ community }}","text":"

    Documentation | Tutorial on NCI | Sources

    This package facilitates the cleaning, organization and interactive analysis of Model Intercomparison Projects (MIPs) within the Pangeo software stack.
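For instance (the filename below is hypothetical), a raw CMIP6 dataset can be passed through xMIP's combined_preprocessing function to harmonise dimension names, coordinates and units before analysis:

import xarray as xr
from xmip.preprocessing import combined_preprocessing

ds = xr.open_dataset("tos_Omon_SomeModel_historical_r1i1p1f1.nc")  # hypothetical file
ds = combined_preprocessing(ds)  # renames x/y/lev etc. to consistent conventions

The same function is commonly passed as the preprocess= argument when loading many models at once, for example through intake-esm.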

    "},{"location":"community_resources/community_data_processing/#app4-the-access-post-processor-community","title":"APP4 (The ACCESS Post Processor) {{ community }}","text":"

    Documentation | Sources

The APP4 is a CMORisation tool designed to convert ACCESS model output to ESGF-compliant formats, primarily for publication to CMIP6. The code was originally built for CMIP5 and was further developed for CMIP6-era activities. It uses CMOR3 and files created with the CMIP6 data request to generate CF-compliant files according to the CMIP6 data standards.

    "},{"location":"community_resources/community_data_processing/#access-archiver-community","title":"ACCESS-Archiver {{ community }}","text":"

    Documentation | Sources

The ACCESS Archiver is designed to archive model output from ACCESS simulations. Its focus is to copy ACCESS model output from its initial location to a secondary location (typically from /scratch to /g/data), while converting UM files to netCDF, compressing MOM/CICE files, and culling restart files to 10-yearly. Conversion and compression save 50-80% of storage space.

    "},{"location":"community_resources/community_data_processing/#synda-recommended","title":"Synda {{ recommended }}","text":"

Synda is a command-line tool to search and download files from the Earth System Grid Federation (ESGF) archive.

    "},{"location":"community_resources/community_data_processing/#fluxnetlsm-community","title":"FluxnetLSM {{ community }}","text":"

    Citation 1 | Sources

    R package for post-processing FLUXNET datasets for use in land surface modelling. Performs quality control and data conversion of FLUXNET data and collated site metadata. Supports FLUXNET2015, La Thuile, OzFlux and ICOS data releases.

    "},{"location":"community_resources/community_data_processing/#metpy-community","title":"Metpy {{ community }}","text":"


    Documentation | Sources

    MetPy is a collection of tools in Python for reading, visualizing, and performing calculations with weather data. MetPy supports Python >= 3.8 and is freely available under a permissive open source license.

    Format types are: GINI Water Vapor Imagery, NEXRAD Level 3 File, and NEXRAD Level 2 File.
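As a small, self-contained illustration of MetPy's unit-aware calculations (independent of any particular file format):

import metpy.calc as mpcalc
from metpy.units import units

# dewpoint from temperature and relative humidity, with units attached
dewpoint = mpcalc.dewpoint_from_relative_humidity(25 * units.degC, 65 * units.percent)
print(dewpoint)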

    "},{"location":"community_resources/community_data_processing/#xskillscore-community","title":"xskillscore {{ community }}","text":"

    Documentation | Sources

    xskillscore is a Python library for computing a wide variety of skill metrics. Its typical application is to verify deterministic and probabilistic forecasts relative to observations.
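A minimal sketch with synthetic data (in real use, the forecast and observations would come from model output and an observational product):

import numpy as np
import xarray as xr
import xskillscore as xs

obs = xr.DataArray(np.random.rand(120), dims="time")  # toy observations
fct = obs + 0.1 * np.random.randn(120)                # toy forecast with noise

print(xs.rmse(obs, fct, dim="time").item())       # root-mean-square error
print(xs.pearson_r(obs, fct, dim="time").item())  # correlation skill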

    "},{"location":"community_resources/community_data_processing/#analysis-blogposts-and-tutorials-community","title":"Analysis blogposts and tutorials {{ community }}","text":"

    Accessing NetCDF and GRIB file collections as cloud-native virtual datasets using Kerchunk, Peter March, Sep 2022

    1. A. M. Ukkola, N. Haughton, M. G. De Kauwe, G. Abramowitz, and A. J. Pitman. Fluxnetlsm r package (v1.0): a community tool for processing fluxnet data for use in land surface modelling. Geoscientific Model Development, 10(9):3379\u20133390, 2017. URL: https://gmd.copernicus.org/articles/10/3379/2017/, doi:10.5194/gmd-10-3379-2017.\u00a0\u21a9

    "},{"location":"community_resources/community_med_recipes/","title":"Community Model Evaluation and Diagnostics (MED) Recipe Gallery","text":"

We are trying to ingest more and more model evaluation and diagnostics recipes into our curated recipe gallery on this website {{ supported }}. While this is a continuous effort, this page lists model evaluation and diagnostics recipes that are not (yet) ingested but may be interesting for the community {{ community }}:

MED Recipe Components Description ESMValTool {{ recommended }} (Earth System Model Evaluation Tool) Documentation | Tutorial | Source Code COSIMA Cookbook / Recipes {{ recommended }} (Consortium for Ocean-Sea Ice Modelling in Australia) Documentation | Tutorial | Source Code | Recipes iLAMB {{ recommended }} (International Land Model Benchmarking) Documentation | Tutorial | Source Code iOMB {{ recommended }} (International Ocean Model Benchmarking) Documentation | Tutorial | Source Code METPlus {{ recommended }} (Model Evaluation Tool Plus) Tutorial | Paper PMP {{ recommended }} (PCMDI Metrics Package) Documentation | Source Code climpred {{ community }} Documentation | Tutorial | Source Code | Paper FREVA {{ community }} (Free Evaluation System Framework) Documentation | Source Code TECA {{ community }} (Toolkit for Extremes Climate Analysis) Documentation | Tutorial | Source Code MONET {{ community }} (Model and ObservatioN Evaluation Toolkit) Documentation | Tutorial | Source Code | Paper LIVVkit {{ community }} (land ice verification & validation toolkit) Documentation | Tutorial | Source Code CSET {{ community }} (Convective Scale Evaluation Tool) Documentation | Tutorial | Source Code MetPy {{ community }} Tutorial | Source Code | Recipes MetPy is a collection of tools in Python for reading, visualizing, and performing calculations with weather data. MetPy supports Python >= 3.8 and is freely available under a permissive open source license. Afterburner {{ community }} Documentation | Source"},{"location":"community_resources/community_model_catalogs/","title":"Community Model Data Catalogs","text":"

We are trying to ingest more and more model data catalogs into our curated catalog on this website. While this is a continuous effort, this page lists additional model data catalogs that are not (yet) ingested but are recommended by us ({{ recommended }}) or may be interesting for the community ({{ community }}):

Model Catalog Comp. Description NCI datasets {{ recommended }} NCI has an extensive catalog of datasets of interest to the weather and climate community. These datasets are directly available on the NCI supercomputer and the [Australian Research Environment](https://opus.nci.org.au/display/Help/ARE+User+Guide) CLEX NCI Data Collection Intake Catalogue {{ recommended }} This is an Intake catalogue maintained by the ARC Centre of Excellence for Climate Extremes [(CLEX)](https://climateextremes.org.au/). Only datasets from the NCI Catalog are referenced. The catalogue is available in intake's default catalogue list in the CLEX Conda environment. Two notebooks are provided in the docs folder showing how to access the ERA5 and CMIP6 datasets. Australia Climate Data Guide Catalogue {{ recommended }} *A one-stop catalogue to discover Climate Data in Australia* The ACDG portal is a metadata portal listing climate research resources available in Australia from multiple data repositories. This is a community-based project managed by the ACDG Single Access working group. This is a group of Australian climate community self-nominated representatives. Anyone is welcome to join the group or to contribute independently to the metadata portal the group is developing. Australian Ocean Data Network {{ recommended }} The Australian Ocean Data Network (AODN) is an interoperable online network of marine and climate data resources. IMOS and the 6 Australian Commonwealth agencies ([see AODN Partners](https://imos.org.au/facilities/aodn/aodn-data-management/aodn-partners)) form the core of the AODN. Increasingly, though, universities and State government offices are offering up data resources to the AODN, and delivery of data to the AODN is being written into significant research programs e.g. [National Environmental Science Program Marine Biodiversity Hub](http://www.nespmarine.edu.au/) and the [Great Australian Bight research program](http://www.misa.net.au/GAB). Intake-Ilamb Catalog {{ supported }} The Intake-Ilamb catalog provides a YAML-style intake catalog of the reference data used for ESM model benchmarking in the International Land Model Benchmarking [(ILAMB)](https://www.ilamb.org/) effort. FLUXNET {{ community }} FLUXNET is an international \u201cnetwork of networks,\u201d tying together regional networks of earth system scientists. FLUXNET scientists use the eddy covariance technique to measure the cycling of carbon, water, and energy between the biosphere and atmosphere. Scientists use these data to better understand ecosystem functioning, and to detect trends in climate, greenhouse gases, and air pollution. CEDA Archive {{ community }} The CEDA Archive forms part of NERC's Environmental Data Service (EDS) and is responsible for looking after data from atmospheric and earth observation research. They host over 18 Petabytes of data from climate models, satellites, aircraft, met observations, and other sources. OZFlux {{ community }} OzFlux is an ecosystem research network set up to provide Australian, New Zealand and global ecosystem modelling communities with consistent observations of energy, carbon and water exchange between the atmosphere and key Australian and New Zealand ecosystems.
Australian Community Reference Climate Data Collection {{ recommended }} This collection is a collaborative effort between the Australian Climate Service (ACS), ARC Centre of Excellence for Climate Extremes (CLEX) and the wider Australian climate research community to re-establish and maintain a reference dataset collection at NCI. An [intake-esm catalogue](https://github.com/aus-ref-clim-data-nci/acs-replica-intake) is also available to facilitate data access."},{"location":"community_resources/community_observational_catalogs/","title":"Community Observational Data Catalogs","text":"

We are trying to ingest more and more observational data catalogs into our curated catalog on this website. While this is a continuous effort, this page lists additional observational data catalogs that are not (yet) ingested but are recommended by us ({{ recommended }}) or may be interesting for the community ({{ community }}):

Data Catalog Comp. Description Copernicus Climate Change Service (C3S) Data Store (CDS) {{ recommended }} The Copernicus Climate Change Service (C3S) combines observations of the climate system with the latest science to develop authoritative, quality-assured information about the past, current and future states of the climate in Europe and worldwide. C3S data is provided via its Climate Data Store (CDS). You can search its available datasets via this interface. You can use the CDS API as well as command line tools to download data. To download ERA5 from CDS, you can use for example this era5cli command line tool. Catalogue Search at CEDA (Centre for Environmental Data Analysis) {{ recommended }} The CEDA (Centre for Environmental Data Analysis) Archive hosts atmospheric and earth observation data. It provides an interactive Catalogue Search and Tools for downloading data. It holds environmental data related to atmospheric and earth observation fields. Our remit covers the following areas (see linked examples to some of our most popular datasets): - Climate - e.g. HadUK Grid, CMIP, CRU - Composition - e.g. CCI - Observations - e.g. MIDAS Open - Numerical weather prediction - e.g. Met Office NWP - Airborne - e.g. FAAM - Satellite data and imagery - e.g. Sentinel"},{"location":"community_resources/community_working_groups/","title":"Community Working Groups","text":"

The ACCESS Community and ACCESS-NRI have established Community Working Groups to assess and prioritise the needs of the modelling community, as well as to encourage collaboration within it. These working groups are open to the community and welcome new members.

    The working group activities are coordinated through the ACCESS Hive Community Forum:

Atmospheric Modelling Land Surface Modelling COSIMA (Ocean and Sea-Ice) Forecasting and Prediction Earth System Modelling Cryosphere

To join a working group, follow the instructions on the ACCESS-NRI website, and to participate in the activities of the working group, visit the ACCESS-Hive forum.

    "},{"location":"community_resources/events/","title":"Workshops and Conferences","text":"

    {{ events_content }}

    "},{"location":"community_resources/events/add_event/","title":"Workshops and Conferences: Add Event","text":"

We encourage members of the community to list any workshops, tutorials or conferences that might be of interest to the community.

    "},{"location":"community_resources/events/add_event/#how-to-add-your-event","title":"How to add your event","text":""},{"location":"community_resources/events/add_event/#add-an-issue","title":"Add an issue","text":"

The easiest way for you to add your event is to open an issue with the template provided. This provides a form which guides you through supplying the required information.

    "},{"location":"community_resources/events/add_event/#create-a-pull-request-to-add-your-event","title":"Create a pull-request to add your event","text":"

This process requires some knowledge of git, GitHub and Markdown. If you do not feel comfortable doing this, it is sufficient to just add an issue as above. The issue will be assigned to someone else to finish.

If you do feel confident adding your event to the list, then add a Markdown text file, identified by the .md extension, to the correct subdirectory in the events folder of the ACCESS-Hive repository. The subdirectories are named by year; put your new file in the year in which the event will take place. Avoid spaces in your filename: use an underscore _ where you would normally have a space, e.g. regional_downscaling_cordex.md

    The file must contain a header with the metadata as in the example below:

---
title: Regional climate downscaling for Australia within the CORDEX framework
start_date: 27/11/2022
end_date: 27/11/2022
location: Adelaide, SA
link: https://www.amos2022.org.au/
description: This workshop is relevant for those performing regional climate simulations or downscaling with empirical/statistical downscaling approaches including machine learning, as well as those using regional climate projection data in their work. The focus will be on CORDEX related data and modelling. The workshop will have some presentations with extended discussion.
---

Make sure to follow all the steps described in the contribution guidelines to submit this addition for approval and publication.

    "},{"location":"community_resources/events/events/2022/CORDEXAmos2022/","title":"Regional climate downscaling for Australia within the CORDEX framework","text":"

    This workshop is relevant for those performing regional climate simulations or downscaling with empirical/statistical downscaling approaches including machine learning, as well as those using regional climate projection data in their work. The focus will be on CORDEX related data and modelling. The workshop will have some presentations with extended discussion. Some topics to be covered include: - Accessing the existing CORDEX-CMIP5 data. How to access and use the data - Explain the CORDEX-CMIP6 protocol - What does it say? How can you contribute? - Who is planning to contribute (or is already working on contributions) to the Australasia domain?

    "},{"location":"community_resources/events/events/2022/GC5Workshop/","title":"GC5 Assessment Workshop","text":"

The UM Partnership Team and GC Programme Team are running a GC5 assessment workshop to evaluate the latest configuration of the Global Coupled model.

    The workshop will be a hybrid event with an option to attend online or in-person at Met Office Collaboration Building, Exeter. We will discuss the assessment of the latest GC5 configuration in a range of model simulations in a seamless context and sessions will broadly consist of: - Summary of GC5 physics changes - General model assessment - Summary from Priority Evaluation Groups (PEGs) - Summary from Collaboration Groups (CoGs) - Upcoming changes in GC science and tools - Discussions

Please fill in the registration form to confirm attendance by 21st October.

    For any further questions please contact Luke Roberts, Prince Xavier or Charline Marzin at the Met Office.

    "},{"location":"community_resources/events/events/2022/GroundTruthingClimateChange/","title":"Ground truthing future climate change","text":"

Scientific ocean drilling provides the robust baseline data on global climate evolution over extended geologic time periods that are critical for improving climate model performance. By targeting how the climate system operates across a wide array of past climate states, scientific ocean drilling has obtained, and continues to obtain, the data necessary to calibrate and improve numerical models used to project future climate impacts and inform mitigation strategies.

    Join us in this session where we aim to connect climate and ocean modellers to our rich (50+ years of drilling) database and unanswered questions in scientific ocean drilling. By addressing key questions about Earth\u2019s past, present, and future through interdisciplinary research, we are aiming to spark new collaborations and proposals that will lead to a more profound understanding of Earth as one integrated, interconnected system.

    "},{"location":"community_resources/events/events/2022/SWOTAmos2022/","title":"The Surface Water and Ocean Topography (SWOT) satellite: A primer","text":"

The Surface Water and Ocean Topography (SWOT) satellite, which will launch in November 2022, is a ground-breaking wide-swath altimetry mission that will observe fine details of the ocean dynamics at a resolution 10 times finer than current satellites. SWOT is jointly developed by NASA and CNES with contributions from researchers around the world, including Australia. The Australian government, the Integrated Marine Observing System, and the Australian marine science community are investing in SWOT through calibration/validation and synergistic in situ measurements of fine-scale ocean dynamics in the Australian region. This workshop will present a primer on the principles of the satellite and instrument, how it works, and what its possibilities and limitations are compared to existing altimetry products. This will be complemented with a brief summary of ocean research related to and enabled by SWOT, including internal waves and tides, sub-mesoscale dynamics, the geoid, and mean dynamic topography. The goal of the workshop is to provide oceanographers, hydrologists and other users of altimetry data with the information they need to prepare for the arrival of SWOT data in late 2023.

    "},{"location":"community_resources/glossaries/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    {{ community }} CMIP(6):

    {{ community }} ERA(5):

    {{ community }} UM:

    "},{"location":"community_resources/glossaries/glossary_access_nri/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    "},{"location":"community_resources/glossaries/glossary_cmip/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    "},{"location":"community_resources/glossaries/glossary_cssr/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    "},{"location":"community_resources/glossaries/glossary_era/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    "},{"location":"community_resources/glossaries/glossary_ipcc/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    "},{"location":"community_resources/glossaries/glossary_um/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    "},{"location":"community_resources/training/","title":"Training and Policies","text":"

    This space is intended to promote training material relevant to ACCESS and its community. The training material can be directly relevant to ACCESS and its model components, such as:

    • using coupled models and model components
    • using configurations
    • using model evaluation tools and workflows

    It is also intended for training material around more peripheral topics that are essential for the community, such as:

    • HPC
    • version control
    • essential software packages

ACCESS-NRI encourages members of the community to contact us to share their suggestions.

    Finally, you will also find ACCESS-NRI's policies in this space.

    "},{"location":"community_resources/training/ACCESS_training/","title":"ACCESS Training Material","text":"

This page is intended to provide access to training material directly related to ACCESS models and model components. This material can cover topics such as, but not limited to:

    • how to use a specific model
    • how to modify a model configuration
    • how to test a model modification or validate a model run
    "},{"location":"community_resources/training/ACCESS_training/#jules-tutorials-recommended","title":"JULES tutorials {{ recommended }}","text":"

    The JULES tutorials explain how to use FCM, Rose and Cylc both for using the model and for development work. They can be useful to ACCESS users as practical demonstrations of the Rose and Cylc infrastructure.

    "},{"location":"community_resources/training/additional_training/","title":"Additional training material","text":"

This material covers the basics of Git and GitHub. It also includes ACCESS-NRI's recommendations for setting up GitHub.

    The National Computational Infrastructure (NCI) provides training resources and in-person training courses throughout the year to help develop the skills of the NCI user community.

    A full calendar of upcoming training opportunities can be found on their Opus page.

    Users can find important information and resources about using NCI systems and services in the NCI User Guides.

    "},{"location":"contribute/","title":"How to Contribute","text":"

ACCESS-Hive is a community-supported site; as such, contributions to the ACCESS-Hive site are encouraged from any member of the community. Members of the ACCESS community are also welcome to become reviewers. Please refer to the following contribution guidelines to learn how you can help the ACCESS community build a documentation database useful to everyone.

Although we encourage everyone to get involved and contribute to the ACCESS-Hive in order to adequately represent the needs of the entire ACCESS community, we recognise not everyone will have the time to do so. If you do not have a lot of time, please consider sharing your ideas via issues on the ACCESS-Hive GitHub repository so someone might be able to add them to the ACCESS-Hive site for you.1

    Abstract

    The aim of this how-to is to enable you to:

• add or modify a link to documentation in an existing page
• contribute complex modifications, e.g. add pages, modify the navigation, modify an existing page in depth, etc.
• deal with relevant documentation that is not currently on a website
    "},{"location":"contribute/#become-a-member-of-the-access-hive-organisation","title":"Become a member of the ACCESS-Hive organisation","text":"

The ACCESS-Hive organisation is open to any member of the ACCESS community. Furthermore, organisation members have write access to the ACCESS-Hive repository, which simplifies the process of contributing. Members can work from branches that contain their modifications instead of creating and maintaining their own forks.

As such, we encourage you to become a member of the organisation by replying to this issue and asking to be invited to join the organisation.

    "},{"location":"contribute/#process-to-contribute","title":"Process to contribute","text":"

This documentation is based on the Material for MkDocs theme. Please see the documentation for the theme or for MkDocs for a full explanation of all the capabilities.

The documentation is written in Markdown format. Please see this cheat sheet for a quick reference to the base syntax. Please note that Material for MkDocs extends that syntax.

Additionally, ACCESS-Hive is a portal for documentation hosted elsewhere. The documentation you want to add needs to be available from an existing website. We realise people might have standalone files or other information to share; please see our Standalone documentation page for ways to easily upload your documentation to a site.

    There are two main ways to contribute to the site:

• you can modify an existing page directly from GitHub without any knowledge of Git. This is a simple way suitable for light modifications.
    • you can work on your local computer and use Git to manage your modifications. This is recommended for more involved modifications. It is the easiest way to modify the categories and menu structure used to navigate the site.
    1. Image by pch.vector on Freepik\u00a0\u21a9

    "},{"location":"contribute/change-the-navigation/","title":"Change the navigation","text":""},{"location":"contribute/change-the-navigation/#structure-of-the-repository","title":"Structure of the repository","text":"

    The important elements of the repository to know about before modifying the navigation are:

    • docs/ folder: this folder contains all the documentation pages. There is an index.md file for the About page, one folder per tab on the site, an assets/ folder to store images used in the documentation and some customisation folders such as css/ or font/.
• mkdocs.yml: a YAML-formatted file, hence the .yml extension. The site navigation is defined in this file, as well as options for the styling of the site.
    YAML

    YAML is a popular choice for configuration files, as it is a simple way of encoding data structures in a text file. See this short tutorial.

    "},{"location":"contribute/change-the-navigation/#a-simple-example","title":"A simple example","text":"

    The easiest way to explain how the navigation is defined is to look at an example. Let's say mkdocs.yml contains:

nav:
- Welcome: index.md
- ACCESS-NRI: ACCESS-NRI/ACCESS-NRI.md
- Community:
  - Generate Bathymetry: community/bathymetry.md
- How to contribute:
  - How to contribute: help/how_to_contribute.md
  - Setup: help/contribution_setup.md
  - Modify the documentation: help/modify_documentation.md
  - Change the navigation: help/change_navigation.md

    The top level category names define the tabs in the header bar. So here we have the tabs: \"Welcome\", \"ACCESS-NRI\", \"Community\" and \"How to contribute\". It is also the name of the top section under each tab.

    The second level of categories indicate the name of each page under that section. So the \"ACCESS-NRI\" tab has the text directly under the section \"ACCESS-NRI\". The \"Community\" tab has a section called \"Community\" that contains one page: \"Generate Bathymetry\". Finally, the \"How to contribute\" tab has 1 section \"How to contribute\" with 4 pages.

The filenames indicate the path, relative to the docs/ folder, to the file containing the text for each page. It is recommended to use the title of each file (i.e. the heading level 1), or an abbreviation of it, as the name of the page and the filename.

    "},{"location":"contribute/change-the-navigation/#add-sections-to-a-tab","title":"Add sections to a tab","text":"

    It is possible to define several sections per tab by using more levels of indentation. For example, to add a \"My example\" section to the \"How to contribute\" tab:

nav:
- Welcome: index.md
- ACCESS-NRI: ACCESS-NRI/ACCESS-NRI.md
- Community:
  - Generate Bathymetry: community/bathymetry.md
- How to contribute:
  - How to contribute: help/how_to_contribute.md
  - Setup: help/contribution_setup.md
  - Modify the documentation: help/modify_documentation.md
  - Change the navigation: help/change_navigation.md
  - My example:
    - Beautiful example: help/beautiful_example.md

    will create this navigation:

    "},{"location":"contribute/edit-locally/","title":"Edit locally on your computer","text":"

If you want to submit a substantial contribution to ACCESS-Hive, it might be easier to do so from your own computer. In particular, it is a lot easier to proceed locally if you need to modify several files or want to modify the navigation of the site.

You can avoid creating your own fork of the repository by first becoming a member of the ACCESS-Hive organisation. To become a member, please reply to this issue and ask to be invited to join the organisation.

    To work locally, you will need git and a text editor installed on your computer.

    "},{"location":"contribute/edit-locally/#open-an-issue","title":"Open an issue","text":"

For all additions or modifications to the ACCESS-Hive site, it is recommended to start by opening an Issue in the ACCESS-Hive GitHub repository. Consider assigning the Issue to yourself in the right-hand side panel if you intend to work on the issue and you are a member of the ACCESS-Hive organisation.

    "},{"location":"contribute/edit-locally/#obtaining-the-source-files","title":"Obtaining the source files","text":"

    Everything is stored within the ACCESS-Hive repository on GitHub and you simply need to clone this repository to your local machine:

git clone git@github.com:ACCESS-Hive/access-hive.github.io.git
    "},{"location":"contribute/edit-locally/#edit-to-access-hive","title":"Edit to ACCESS-Hive","text":"

Once you have installed everything you need, follow the usual series of steps for contributing to Open Source developments:

    • open an Issue
    • clone the repository locally
    • start a branch to work on, linked to the Issue
    • commit your modifications to that branch and push to GitHub
• open a pull request between the main branch and your branch, and follow the instructions from the Pull request template.
    • ask for reviews by tagging the ACCESS-Hive/reviewers team and reply to requests for changes

    If you don't know how to do these steps, please refer to our Git and GitHub training.

    What page to edit

    If you have problems finding the page you need to edit, the easiest way is to head to the ACCESS-Hive site. If you click on the pen icon at the top right of each page title, you will open a GitHub page showing you the path to the file you want to edit.

Headers and table of contents

The level 1 headers are reserved for the title of the page and are excluded from the page's table of contents. Only use level 2 headers and higher to organise pages.

    "},{"location":"contribute/edit-locally/#add-a-new-event","title":"Add a new event","text":"

    The process to add a new event is a bit different from other updates on the site. Since you need to create a new file to contain the information about the event you are adding, it is recommended to work locally. You need to create a new Markdown file (identified by the .md extension) as described on this page. To record and submit your modification to the site, make sure you follow all the steps as explained in the Open Source process in the previous section.

    "},{"location":"contribute/edit-locally/#preview-of-the-documentation","title":"Preview of the documentation","text":""},{"location":"contribute/edit-locally/#preview-from-a-pull-request","title":"Preview from a Pull Request","text":"

    When a pull request is created or updated, GitHub will automatically build a preview of the documentation that includes the proposed changes.

    In the pull request, you will see the link to the preview appear in this fashion:

    Build delay

It can take a while for the preview to build, even after the CI check is marked as finished. Please wait for the comment with the link to appear and allow for some time after that for the preview to be properly deployed.

    If you open the preview and it looks completely broken or if it hasn't updated from additional modifications in the pull request, it probably means the site hasn't finished building yet. If you wait a couple of minutes and refresh the page, it should fix itself.

    "},{"location":"contribute/edit-locally/#local-preview-if-editing-on-your-own-computer","title":"Local preview (if editing on your own computer)","text":"

    MkDocs includes a live preview server, so you can preview your changes as you write your documentation. The server will automatically rebuild the site upon saving.

    "},{"location":"contribute/edit-locally/#software-installation","title":"Software installation","text":"

To build the site locally, you need to install Material for MkDocs and other plugins. You can find a full list in the requirements.txt file in the root of the ACCESS-Hive repository. Please use pip for the installation as some of the packages are not updated or not available on conda:

pip install -r requirements.txt
    "},{"location":"contribute/edit-locally/#start-the-server","title":"Start the server","text":"

    To start the server, open a terminal and navigate to your ACCESS-Hive local repository. Now type:

mkdocs serve

Your documentation will be built and served at http://127.0.0.1:8000. Open this URL in your browser to see a preview of the documentation. The URL is given in the terminal when running the mkdocs serve command. Make sure you keep the command running to see live updates when you save your modifications.

    "},{"location":"contribute/edit-on-github/","title":"Edit directly on GitHub","text":"

This way of editing the site allows people with no knowledge of Git to contribute to ACCESS-Hive, but it is only suitable for light modifications of existing pages.

• Go to the page you want to modify on the ACCESS-Hive documentation site. At the right of the title, you will see a pen icon. When you click on this icon, your browser will open the file in GitHub, allowing you to edit the file.
    • Enter your modification in the main pane. All the files are written in Markdown.
Headers and table of contents

The level 1 headers are reserved for the title of the page and are excluded from the page's table of contents. Only use level 2 headers and higher to organise pages.

    • Then add a commit message in the Commit changes box.
    • Commit and open a pull request
    Pull request is required

The main branch of the repository is protected and nobody can write to it directly. You will need to choose either to create a new branch (for ACCESS-Hive organisation members only) or to create a fork on your personal GitHub account (for non-members of the ACCESS-Hive organisation); in either case, you then open a pull request.

    When creating the pull request, make sure to follow the instructions given to you in the pull request template. The description can be edited at any time. You can fill in the check list after creating the pull request. The pull request will automatically build a preview of the documentation with your proposed changes.

    • Ask for a review by tagging the @ACCESS-Hive/reviewers team in a comment.

    • Reply to the review. You will be notified by email of any subsequent comment, request or action from the reviewer on this pull request. Please make sure you take any action required by the reviewer or your modification will not be accepted into the ACCESS-Hive site.

    "},{"location":"contribute/edit-on-github/#further-edits","title":"Further edits","text":"

    During the review process, you might be requested to edit your proposed changes. For this, you will need to navigate to the branch created by GitHub.

    • At the top of the Pull request window on GitHub, you should see a link to your branch, circled in red on the image:
    • Once you click on this link, navigate to and open the file you need to modify, then click on the pen icon in the toolbar on the right, circled in red on the image:
    • Then commit your changes once again to the same branch. This will update the pull request and the preview of the site.

    • You need to let the reviewer know once you are confident you have responded to all their concerns, so they can review again. For this, locate the \"Reviewers\" pane in the right-hand side menu list on GitHub and click the icon circled in red in the image:

    "},{"location":"contribute/reviewers/","title":"Reviewing for ACCESS-Hive","text":"

Any member of the ACCESS-Hive GitHub Community (to join) can become a reviewer. Please ask one of the maintainers to invite you to join the reviewers team.

    "},{"location":"contribute/reviewers/#reviewer-guidelines","title":"Reviewer Guidelines","text":"

    Firstly, thank you so much for helping to review links submitted to the ACCESS-Hive, we\u2019re delighted to have your help. This document is designed to outline our editorial guidelines and help you understand our requirements for accepting a pull request (PR).

    "},{"location":"contribute/reviewers/#guiding-principles","title":"Guiding principles","text":"

If the submitting authors have followed the contribution guidelines, the review should be rapid. An important requirement is that ACCESS-Hive is a portal to documentation; it does not host the documentation itself.

    For those PRs that don\u2019t quite meet the requirements, please try to give clear feedback on what needs fixing. Our goal is to maintain a high quality platform for exchanging links to relevant documentation and you, as a reviewer, have a key role to play.

    A review involves checking submissions against a checklist of essential features and details described at the top of each PR. This should be objective, not subjective; it should be based on the materials in the submission as perceived without distortion by personal feelings, prejudices, or interpretations.

Some continuous tests, such as hyperlink reference checks and preview deployments, are automatically triggered by submitting a PR.

    Reviewers should:

    1. Ensure that the tests are passing without errors.
    2. Do a visual check using the preview.
3. Do a GitHub pull request review. See GitHub's extensive documentation.
4. Once you have approved the PR, tag the editors team @ACCESS-Hive/editors in the discussion.

    We encourage reviewers to provide feedback from within the PR discussion.

You can include in your review links to any new issues that you, the reviewer, believe to be impeding the acceptance of the pull request.

    "},{"location":"contribute/standalone-documentation/","title":"Standalone documentation","text":"

You may have very valuable resources to share that are not currently available through any website; this is what we call "standalone documentation" in the current context.

    To contribute these resources to ACCESS-Hive, you will first need to make them available on the internet. Below are some ideas on how to do that:

    1. Check if your organisation has a documentation or information site suitable for your resource.
    2. Check if ACCESS-NRI has a documentation site suitable for your resource. Note in this case, you will be asked about ownership and license associated with your resource.
    3. Publish your documentation via Zenodo. This will clarify the licensing and reuse conditions.

    If none of the previous options seem suitable to you, please consult the forum.

    "},{"location":"model_evaluation/","title":"Model Evaluation and Diagnostics (MED)","text":"

    Model evaluation is about measuring how fit for purpose a particular model is.

    If you are new to model evaluation and diagnostics, we recommend you read our Getting Started with MED page.

    Here, we provide catalogs and pointers to observational data as well as model data that can be used for evaluation. We provide tools to process such data into a comparable format and a gallery of recipes to evaluate the formatted data.

    Observational Data Catalog Model Data Catalog Data Format Processing Evaluation Recipe Gallery

    Our vision: PLACEHOLDER FOR OUTCOME OF STAFF RETREAT

    "},{"location":"model_evaluation/model_evaluation_data_processing/","title":"Data Processing Tools","text":"

    On this page, we will provide you with a list of curated data processing tools.

    While we are still ramping up this service, please take a look at the gallery of community tools on Community Resources -> Community Data Processing Tools {{ community }}.

    "},{"location":"model_evaluation/model_evaluation_live_diagnostics/","title":"Live Diagnostics on Gadi","text":"

    Here, we will describe the tools we provide for \"live-diagnostics\" of the ACCESS configurations. These tools allow you to check model output/progress/failures at specified time steps while your model is running.

    "},{"location":"model_evaluation/model_evaluation_observational_catalogs/","title":"Observational Data Catalog","text":"

    As of June 2023, we provide the following observational data catalogs on Gadi@NCI through the project kj13:

    /g/data/kj13/datasets\n\u251c\u2500\u2500 cmip6\n\u2502   \u2514\u2500\u2500 *.nc\n\u251c\u2500\u2500 esmvaltool\n\u2502   \u2514\u2500\u2500 obsdata-v2/\n\u2502       \u251c\u2500\u2500 Tier1\n\u2502       \u2502   \u251c\u2500\u2500 AIRS\n\u2502       \u2502   \u2502   \u2514\u2500\u2500 *.nc\n\u2502       \u2502   \u2514\u2500\u2500  ...\n\u2502       \u251c\u2500\u2500 Tier2\n\u2502       \u2502   \u251c\u2500\u2500 BerkeleyEarth\n\u2502       \u2502   \u2502   \u2514\u2500\u2500 *.nc\n\u2502       \u2502   \u2514\u2500\u2500  ...\n\u2502       \u2514\u2500\u2500 Tier3\n\u2502           \u251c\u2500\u2500 ERA5\n\u2502           \u2502   \u2514\u2500\u2500 OBS6_ERA5_reanaly_v1_Amon_pr_197901-202012.nc\n\u2502           \u2514\u2500\u2500  ...\n\u251c\u2500\u2500 ilamb\n\u2502   \u2514\u2500\u2500 DATA\n\u2502       \u251c\u2500\u2500 albedo\n\u2502       \u2502   \u251c\u2500\u2500 CERESed4.1\n\u2502       \u2502   \u2502   \u2514\u2500\u2500 albedo.nc\n\u2502       \u2502   \u2514\u2500\u2500 ...\n\u2502       \u2514\u2500\u2500 ...\n\u251c\u2500\u2500 iomb\n\u2502   \u2514\u2500\u2500 DATA\n\u2502       \u251c\u2500\u2500 silicate\n\u2502       \u2502   \u251c\u2500\u2500 GLODAP\n\u2502       \u2502   \u2502   \u2514\u2500\u2500 *albedo*.nc\n\u2502       \u2502   \u2514\u2500\u2500 ...\n\u2502       \u2514\u2500\u2500 ...\n

    If you want to use this data but do not have access to Gadi@NCI yet, please follow our instructions on how to Get Started with Model Evaluation.
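    For example, once you are a member of kj13, you can browse the catalog directly on Gadi. A minimal sketch (the ncdump tool is an assumption; use whichever netCDF inspector you have loaded):

    # List the Tier3 ERA5 files\nls /g/data/kj13/datasets/esmvaltool/obsdata-v2/Tier3/ERA5\n# Inspect the header of one file (assuming ncdump is available)\nncdump -h /g/data/kj13/datasets/esmvaltool/obsdata-v2/Tier3/ERA5/OBS6_ERA5_reanaly_v1_Amon_pr_197901-202012.nc\n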

    Dataset name Downloaded /g/data/kj13/datasets/ AIRS 2020-12-04 Tier1/AIRS AIRS-2-0 2020-12-04 Tier1/AIRS-2-0 AIRS-2-1 2021-07-14 Tier1/AIRS-2-1 ATSR 2020-12-04 Tier1/ATSR CALIOP 2020-12-04 Tier1/CALIOP CERES-EBAF 2020-12-04 Tier1/CERES-EBAF CFSR 2020-12-04 Tier1/CFSR CloudSat 2020-12-04 Tier1/CloudSat ESACCI-GHG 2020-12-04 Tier1/ESACCI-GHG GPCP-SG 2020-12-04 Tier1/GPCP-SG ISCCP 2021-04-08 Tier1/ISCCP JRA-55 2022-11-15 Tier1/JRA-55 MODIS 2020-12-04 Tier1/MODIS SSMI 2020-12-04 Tier1/SSMI SSMI-MERIS 2021-10-19 Tier1/SSMI-MERIS TRMM-L3 2020-12-04 Tier1/TRMM-L3 ghgcci 2020-12-04 Tier1/ESACCI-GHG BerkeleyEarth 2020-12-04 Tier2/BerkeleyEarth CALIPSO-GOCCP 2023-01-28 Tier2/CALIPSO-GOCCP CERES-EBAF 2022-08-12 Tier2/CERES-EBAF CRU 2021-07-14 Tier2/CRU CT2019 2020-12-04 Tier2/CT2019 CowtanWay 2020-12-04 Tier2/CowtanWay Duveiller2018 2020-12-04 Tier2/Duveiller2018 E-OBS 2020-12-04 Tier2/E-OBS ESACCI-AEROSOL 2023-01-28 Tier2/ESACCI-AEROSOL ESACCI-CLOUD 2023-01-28 Tier2/ESACCI-CLOUD ESACCI-FIRE 2023-01-28 Tier2/ESACCI-FIRE ESACCI-LANDCOVER 2023-01-28 Tier2/ESACCI-LANDCOVER ESACCI-LST 2022-01-26 Tier2/ESACCI-LST ESACCI-OC 2022-01-26 Tier2/ESACCI-OC ESACCI-OZONE 2023-01-28 Tier2/ESACCI-OZONE ESACCI-SEA-SURFACE-SALINITY 2022-01-26 Tier2/ESACCI-SEA-SURFACE-SALINITY ESACCI-SOILMOISTURE 2023-01-28 Tier2/ESACCI-SOILMOISTURE ESACCI-SST 2023-01-28 Tier2/ESACCI-SST ESRL 2021-04-08 Tier2/ESRL Eppley-VGPM-MODIS 2020-12-05 Tier2/Eppley-VGPM-MODIS GCP2018 2021-11-08 Tier2/GCP2018 GCP2020 2021-11-08 Tier2/GCP2020 GHCN 2023-01-28 Tier2/GHCN GHCN-CAMS 2020-12-05 Tier2/GHCN-CAMS GISTEMP 2020-12-05 Tier2/GISTEMP GLODAP 2022-04-29 Tier2/GLODAP GPCC 2021-04-08 Tier2/GPCC HALOE 2023-01-28 Tier2/HALOE HadCRUT3 2023-01-28 Tier2/HadCRUT3 HadCRUT4 2023-01-28 Tier2/HadCRUT4 HadCRUT4-clim 2020-12-04 Tier2/HadCRUT4-clim HadCRUT5 2022-04-29 Tier2/HadCRUT5 HadISST 2023-01-28 Tier2/HadISST ISCCP-FH 2023-01-28 Tier2/ISCCP-FH JRA-25 2022-11-24 Tier2/JRA-25 Kadow2020 2022-08-12 Tier2/Kadow2020 Landschuetzer2016 2020-12-05 Tier2/Landschuetzer2016 Landschuetzer2020 2022-11-15 Tier2/Landschuetzer2020 MOBO-DIC_MPIM 2022-11-15 Tier2/MOBO-DIC_MPIM NCEP 2023-01-28 Tier2/NCEP NCEP-DOE-R2 2023-01-28 Tier2/NCEP-DOE-R2 NCEP-NCAR-R1 2023-01-28 Tier2/NCEP-NCAR-R1 NOAA-CIRES-20CR 2023-01-28 Tier2/NOAA-CIRES-20CR NOAAGlobalTemp 2022-08-12 Tier2/NOAAGlobalTemp OSI-450-nh 2020-12-05 Tier2/OSI-450-nh OSI-450-sh 2020-12-05 Tier2/OSI-450-sh OceanSODA-ETHZ 2022-11-17 Tier2/OceanSODA-ETHZ PATMOS-x 2023-01-28 Tier2/PATMOS-x PERSIANN-CDR 2020-12-05 Tier2/PERSIANN-CDR PHC 2020-12-05 Tier2/PHC PIOMAS 2020-12-05 Tier2/PIOMAS REGEN 2020-12-05 Tier2/REGEN Scripps-CO2-KUM 2022-04-29 Tier2/Scripps-CO2-KUM TCOM-CH4 2023-01-28 Tier2/TCOM-CH4 TCOM-N2O 2023-01-28 Tier2/TCOM-N2O WFDE5 2021-07-14 Tier2/WFDE5 WOA 2021-10-19 Tier2/WOA AURA-TES 2023-06-14 Tier3/AURA-TES CALIPSO-ICECLOUD 2023-06-14 Tier3/CALIPSO-ICECLOUD CDS-XCH4 2023-06-14 Tier3/DS-XCH4 CDS-XCO2 2023-06-14 Tier3/CDS-XCO2 ERA-Interim 2022-11-24 Tier3/ERA-Interim ERA-Interim-Land 2021-09-14 Tier3/ERA-Interim-Land ERA5 2021-02-12 Tier3/ERA5 ERA5-Land 2023-06-14 Tier3/ERA5-Land FLUXCOM 2022-01-26 Tier3/FLUXCOM HWSD 2023-06-14 Tier3/HWSD LandFlux-EVAL 2023-06-14 Tier3/LandFlux-EVAL MAC-LWP 2023-06-14 Tier3/MAC-LWP NIWA-BS 2023-06-16 Tier3/NIWA-BS"},{"location":"model_evaluation/model_evaluation_observational_catalogs/#todo-tier2","title":"ToDo Tier2","text":"Dataset name Downloaded /g/data/kj13/datasets/ ESDC 
Tier2/"},{"location":"model_evaluation/model_evaluation_observational_catalogs/#todo-tier3","title":"ToDo Tier3","text":"Dataset name Downloaded /g/data/kj13/datasets/ APHRO-MA Tier3/ CDS-SATELLITE-ALBEDO Tier3/ CDS-SATELLITE-LAI-FAPAR Tier3/ CDS-SATELLITE-SOIL-MOISTURE Tier3/ CDS-UERRA Tier3/ CERES-SYN1deg Tier3/ CLARA-AVHRR Tier3/ CLOUDSAT-L2 Tier3/ ESACCI-WATERVAPOUR Tier3/ GRACE Tier3/ JMA-TRANSCOM Tier3/ LAI3g Tier3/ MERRA2 Tier3/ MLS-AURA Tier3/ MTE Tier3/ NDP Tier3/ NIWA-BS Tier3/ NSIDC-0116-nh Tier3/ NSIDC-0116-sh Tier3/ UWisc Tier3/"},{"location":"model_evaluation/model_evaluation_recipe_gallery/","title":"Model Evaluation and Diagnostics (MED) Recipe Gallery","text":"

    While we are still building this gallery, please have a look at the Community MED Recipes listed at Community Resources -> Community Model Evaluation Recipes {{ community }}

    Here, we plan to provide you with an embedded link to our actively maintained Model Evaluation and Diagnostics (MED) Recipe Gallery, hosted at medportal.herokuapp.com. For now, we provide a placeholder image with a link and pointers to useful Model Evaluation and Diagnostics (MED) resources.

    "},{"location":"model_evaluation/model_evaluation_recipe_gallery/#link-to-our-med-recipe-gallery-supported","title":"Link to our MED Recipe Gallery {{ supported }}","text":""},{"location":"model_evaluation/model_evaluation_getting_started/","title":"Getting Started with Model Evaluation at NCI","text":"

    Welcome to Model Evaluation and Diagnostics!

    Here, we provide you with the important information to get you access to the large datasets that we curate on NCI's storage, and show you how you can use them to figure out how fit for purpose specific models are, in particular when you compare them to observational data:

    1) Getting access to NCI and relevant NCI projects 2) Setting up environments on Gadi@NCI to load and evaluate observational and model data

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/","title":"Getting Started: Computing Access (Gadi@NCI)","text":"

    Here, we provide you with the important information to get you access to the large datasets that we curate on NCI's storage:

    1) Get an NCI Account 2) Join relevant NCI projects 3) Logging in to Gadi@NCI 4) Computing on Gadi

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#1-nci-account","title":"1) NCI Account","text":"

    To be able to work with our data, you need an NCI account.

    If you don't have one yet, sign up here.

    Note: You will need an institutional email address with an organisation that allows access to NCI (e.g., CSIRO, a university, etc.).

    Once you have signed up, you will be allocated a username. We will refer to this username (e.g. kf1234) as $USER.

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#2-join-relevant-nci-projects","title":"2) Join relevant NCI projects","text":"

    There is a plethora of NCI projects that may or may not be relevant for you.

    We recommend you have a chat with your supervisor to identify the relevant projects, but in any case we suggest joining xp65 for MED code as well as kj13 for MED data.

    To get this conversation started, we list some possibly relevant projects below:

    Project Description with link, * indicates compute resource ACCESS-NRI projects tm70 ACCESS-NRI Working Project * iq82 ACCESS-NRI MED Compute * kj13 ACCESS-NRI MED Data Dev ct11 ACCESS-NRI Replicated Datasets xp65 ACCESS-NRI Analysis Environments ACCESS projects access ACCESS software sharing p66 ACCESS - AOGCM / support development of the ACCESS modelling system * p73 ACCESS Model Output Archive (AOGCM) Data projects hh5 Climate-LIEF Data Storage ub7 Seasonal Prediction ACCESS-S1 Hindcast ux62 Seasonal Prediction ACCESS-S2 Hindcast cb20 ESGF CMIP3 Replication Data al33 ESGF CMIP5 Replication Data rr3 ESGF CMIP5 Australian Data Publication oi10 ESGF CMIP6 Replication Data fs38 ESGF CMIP6 Australian Data Publication rt52 ERA5 Replicated Data: Single and pressure-levels data uc16 ERA5 Replicated Datasets on Potential Temperature & Vorticity Levels zz93 ERA5-Land Replicated Data zv2 Australian Gridded Climate Data (AGCD) Collection qv56 Reference Datasets for Climate Model Analysis/Forcing cj50 COSIMA Model Output Collection Other projects ik11 COSIMA shared working space v45 Ocean Extremes * ga6 Modelling the formation of sedimentary basins and continental margins * m18 Evolution and dynamics of the Australian lithosphere * q97 Earth dynamics and resources over the last billion years * qu79 Collaborative REAnalysis Technical Environment Intercomparison Project (CREATE-IP)

    To join a project or find more projects, please use this NCI website.

    The first project that you join will become your default login project, e.g. xp65. We will refer to it as $PROJECT and we show you how to change it below.

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#3-logging-in-to-gadinci","title":"3) Logging in to Gadi@NCI","text":"

    If you have never logged onto Gadi before, we recommend taking a look at NCI's Welcome to Gadi website. It provides all the important commands and information for logging properly onto Gadi, like the following: \"To run jobs on Gadi, you need to first log in to the system. Users on Mac/Linux can use the built-in terminal. For Windows users, we recommend using MobaXterm as the local terminal. Logging in to Gadi happens through a Gadi login node.\"

    When you login, via the command

    ssh -Y $USER@gadi.nci.org.au\n
    you will enter your $HOME directory with your default $PROJECT and your default SHELL. Both are saved at $HOME/.config/gadi-login.conf and you can print them via
    cat $HOME/.config/gadi-login.conf\n
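    If you want to change your default project, you can edit this file. A minimal sketch, assuming the file holds a line like PROJECT xp65 (check the output of the cat command above to confirm the format on your account; kj13 is just an example target project):

    # Switch the default login project to kj13 (hypothetical target project)\nsed -i 's/^PROJECT .*/PROJECT kj13/' $HOME/.config/gadi-login.conf\n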

    The -Y option is needed to run graphical tools by enabling the forwarding of trusted X protocol messages between the X server on your local system and X programs on Gadi. You need to enable the X Window System on your local system before running ssh. This can be done by running an X server like XQuartz (Mac), MobaXterm (MS Windows), or startx or similar (Linux).

    Again, for more useful information we recommend checking out NCI's Welcome to Gadi website.

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#4-computing-on-gadi","title":"4) Computing on Gadi","text":""},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#gadi-resources","title":"Gadi Resources","text":"

    Coupled climate models like ACCESS-CM involve, among other things, calculation of complex mathematical equations that explain the physics of the atmosphere and oceans. Performed at hundreds of millions of points around the Earth, these calculations require vast computing power to complete them in a reasonable amount of time, thus relying on the power of high-performance computing (HPC) like Gadi. The Gadi supercomputer can handle more than 10 million billion (10 quadrillion) calculations per second and is connected to 100,000 Terabytes of high-performance research data storage.

    An overview of Gadi resources such as compute, storage and PBS jobs is given below.

    Useful NCI commands to check your available compute resources are:

    Command Purpose logout or Ctrl+D To exit a session hostname Displays login node details module list Modules currently loaded module avail Available modules nci_account -P [proj] Compute allocation for [proj] nqstat -P [proj] Jobs running/queued in [proj] lquota Storage allocation and usage for all your projects"},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#compute-hours","title":"Compute Hours","text":"

    Compute allocations are granted to projects instead of directly to users and, hence, you need to be a member of a project in order to use its compute allocation. To run jobs on Gadi, you need to have sufficient allocated compute hours available, where the job cost depends on the resources reserved for the job and the amount of walltime it uses.
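    As a toy illustration of how the cost scales (the charge rate of 2 service units (SUs) per CPU-hour is an assumption roughly matching Gadi's normal queue; check NCI's queue documentation and nci_account for actual rates and your remaining allocation):

    # Hypothetical job: 4 CPUs for 3 hours at 2 SUs per CPU-hour\nncpus=4; hours=3; rate=2\necho \"$((ncpus * hours * rate)) SUs\"  # -> 24 SUs\n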

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#storage","title":"Storage","text":"

    Each user has a project-independent $HOME directory, which has a storage limit of 10 GiB. All data on /home is backed up.

    Through project membership, the user gets access to the storage space within that project's folders on the /scratch and /g/data filesystems.

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#pbs-jobs","title":"PBS Jobs","text":"

    To run compute tasks such as an ACCESS-CM suite on Gadi, users need to submit them as jobs to queues. Within a job submission, you can specify the queue, duration and computational resources needed for your job. When a job submission is accepted, it is assigned a jobID (shown in the return message) that can then be used to monitor the job\u2019s status.

    On job completion, the contents of the job's standard output/error streams get copied to files in the working directory named <jobname>.o<jobid> and <jobname>.e<jobid>, respectively. Users should check these two log files before proceeding with post-processing of any output from the corresponding job.
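    A minimal sketch of the submit-monitor-inspect cycle (the script name my_job.sh and the jobID are hypothetical, and we assume the job was named via #PBS -N my_job):

    qsub my_job.sh          # returns a jobID, e.g. 12345678.gadi-pbs\nqstat -u $USER          # monitor your queued/running jobs\n# once the job has completed:\ncat my_job.o12345678    # standard output log\ncat my_job.e12345678    # standard error log\n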

    "},{"location":"model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started/","title":"Getting Started with Model Evaluation at Gadi@NCI","text":"

    At this stage of Getting Started, we assume that you already have access to Gadi@NCI. If this is not the case, please go to our instructions on how to get access to Gadi@NCI.

    Here we describe where you can find, load, and evaluate observational and model data on Gadi.

    Note: You do not automatically have access to all of Gadi's storage at /g/data/, but need to be part of a $PROJECT to see files at /g/data/$PROJECT. Furthermore, if you use Gadi's job submission system PBS (Portable Batch System), you need to request the relevant storage via a #PBS -l storage directive, e.g. #PBS -l storage=gdata/xp65+gdata/kj13 (if you want the job to have access to xp65 and kj13 in this example).
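    For example, a minimal sketch of the corresponding PBS header (the project code and walltime are placeholders; the storage line is the example from the note above):

    #!/bin/bash\n#PBS -P xp65\n#PBS -l walltime=01:00:00\n#PBS -l storage=gdata/xp65+gdata/kj13\n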

    "},{"location":"model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started/#21-access-med-our-currated-conda-environment-for-you-on-gadi","title":"2.1) access-med: Our currated conda environment for you on Gadi","text":"

    To avoid running multiple (different) versions of code on Gadi, we provide you with a conda environment called access-med that we curate for you (version 0.1 is from June 2023).

    In order to change to this environment, please execute the following commands after logging onto Gadi (and as part of your PBS scripts):

    $ module use /g/data/xp65/public/modules\n$ module load conda/access-med-0.1\n

    If you are planning to run your code through JupyterLab on NCI's ARE, you need to use /g/data/xp65/public/modules as Module directories and conda/are as Modules when launching a JupyterLab session.

    You are now able to use the tools of our curated environment, including python3, intake, jupyter, esmvaltool, and ilamb. The complete list of dependencies can be found in our dedicated GitHub repository.
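    A quick, minimal way to confirm the environment is active (the exact paths and version strings will differ on your system):

    which esmvaltool             # should resolve under /g/data/xp65/...\npython3 -c \"import intake\"   # exits silently if the import worked\n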

    "},{"location":"model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started/#22-observational-data","title":"2.2) Observational Data","text":"

    We provide an extensive collection of observational data on Gadi@NCI within the /g/data/kj13/datasets directory.

    Please take a look at our Observational Data Catalog for an overview.

    "},{"location":"model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started/#23-model-data","title":"2.3) Model Data","text":"

    There are many models and a lot of data stored on Gadi, as you can imagine from the plethora of projects in Section 1.2. Downloading this data is hardly practical, so we suggest working on Gadi instead.

    To avoid endless searches within Gadi's storage, we have written a useful 'library' tool, called access-nri-catalog, that allows you to search the Model Catalogs easily and is already loaded as part of our access-med conda environment. To find out how you can search for Model Data on Gadi, take a look at our Model Catalog.

    "},{"location":"model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started/#25-model-evaluation","title":"2.5) Model Evaluation","text":"

    Now that you have both Observational Data and Model Data at your fingertips on Gadi@NCI, there are many ways to evaluate this data.

    As part of our ACCESS-NRI conda environment, we provide several Model Evaluation Tools, like ilamb or esmvaltool.

    Check out Model Evaluation at Gadi to find out how you can use them on Gadi.

    "},{"location":"model_evaluation/model_evaluation_model_catalogs/","title":"ACCESS-NRI intake Model Catalog","text":"

    ACCESS-NRI hosts a number of computed model outputs for you on National Computational Infrastructure (NCI) storage.

    We have set up an ACCESS-NRI intake Catalog package that allows you to easily search and load the model data on this storage. The premise of this ACCESS-NRI intake Catalog is to provide a (\"meta\") catalog of intake-esm (\"sub\") catalogs, which each correspond to different \"experiments\".
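    A minimal sketch of a first look at this meta catalog from a Gadi login node, assuming you have loaded the access-med environment described in our Getting Started guide:

    module use /g/data/xp65/public/modules\nmodule load conda/access-med-0.1\n# print the names of the available experiment (\"sub\") catalogs\npython3 -c \"import intake; print(list(intake.cat.access_nri))\"\n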

    Search for a model in the ACCESS-NRI intake Catalog Add your model data to the ACCESS-NRI intake Catalog"},{"location":"model_evaluation/model_evaluation_model_catalogs/model_evaluation_add_models/","title":"Add your model data to the ACCESS-NRI intake Catalog","text":"

    You've just run a new experiment and now want to create an intake-esm catalog for it?

    Look at this Tutorial {{ supported }} to learn how to add your own models.

    "},{"location":"model_evaluation/model_evaluation_model_catalogs/model_evaluation_search_models/","title":"Search for a model in the ACCESS-NRI intake Catalog","text":"

    To put the huge amount of data from different experiments on NCI storage at your fingertips, we provide a (\"meta\") catalog for you to query via Python as part of the intake package, with our curated catalog plugin intake.cat.access_nri {{ supported }}.

    To use this catalog, you need access to NCI's Gadi. Check out our Get Started with ACCESS at NCI {{ supported }} guide on how to get access.

    Once logged in to Gadi, you will need to add the access-nri-catalog to your conda environments and start an ARE JupyterLab Session. Check out our ACCESS-NRI Intake Catalog guide {{ supported }} for the specific setup (note that you can only read in data from specific experiments if they are loaded through the Storage keyword).

    Once your JupyterLab session has started, you can access the intake catalog to load the data. Take a look at this Tutorial {{ supported }}.

    # Import packages for searching/loading/plotting\nimport intake\nfrom distributed import Client\nimport matplotlib.pyplot as plt\n\n# The search process is a 2-step one,\n# comparable with searching for a book in a library:\n# 1) You look for the right book/catalog sections\n# 2) You look for the right book/catalog in these sections\n\n# Load the ACCESS-NRI list of catalogs for available experiment data\n# Similar to an overview of library sections\naccess_nri_catalog_sections = intake.cat.access_nri\n\n# Perform a search for names, models, variables etc.\nexample_section_search = access_nri_catalog_sections.search(name=\"cmip6_oi10\")\n\n# Once you are sufficiently happy with your search, you can load the \"section\"\ncatalog_sections = access_nri_catalog_sections.search(name=\"025deg_jra55_iaf_omip2_cycle1\").to_source()\n# and start looking for the right catalogs of interest\ncatalogs_of_interest = catalog_sections.search(filename=\"ocean_scalar.*\")\n\n# Call the client that allows us to load the data efficiently\nclient = Client(threads_per_worker=1)\nclient.dashboard_link\n\n# Actually load the data\nexperiment_data = catalogs_of_interest.to_dataset_dict(progressbar=False)\n\n# Et voil\u00e0, you have loaded the data and can start plotting\nexperiment_data[\"ocean_scalar_snapshot.1day\"][\"temp_global_ave\"].plot(label=\"daily\")\nexperiment_data[\"ocean_scalar.1mon\"][\"temp_global_ave\"].plot(label=\"monthly\")\n_ = plt.legend()\n
    "},{"location":"model_evaluation/model_evaluation_on_gadi/","title":"Model Evaluation on Gadi","text":"

    To kick-start your model evaluation efforts, we provide the following tools as part of our access-med conda environment (and tutorials for how to use them on Gadi@NCI): - ilamb, a tool for International Land (and Ocean) Model Benchmarking. - esmvaltool, an Earth System Model Evaluation Tool.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_esmvaltool/","title":"Tutorial for using esmvaltool on Gadi@NCI","text":""},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/","title":"Tutorial for using ilamb on Gadi@NCI","text":"

    This tutorial explains how you can set up and run International Land Model Benchmarking (ILAMB) and International Ocean Model Benchmarking (IOMB) tests on NCI infrastructure. Both projects are maintained as Python code under the package name ilamb.

    The Tutorial contains:

    1. Background
    2. Installation guide
    3. Setup details
    4. Run ilamb
    5. Run ilamb on NCI
    6. Fix your setup with ilamb-doctor
    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#1-background-international-land-model-benchmarking-ilamb-and-international-ocean-model-benchmarking-iomb","title":"1. Background: International Land Model Benchmarking (ILAMB) and International Ocean Model Benchmarking (IOMB)","text":"

    As earth system models (ESMs) become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration project designed to improve the performance of land models and, in parallel, improve the design of new measurement campaigns to reduce uncertainties associated with key land surface processes.

    If you have used (and installed) ilamb on NCI and know the basic principle of ilamb, you can start from Section 5) Guide for using ilamb on NCI.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#2-installing-ilamb","title":"2. Installing ilamb","text":"

    For NCI users, ACCESS-NRI is providing a conda environment called ilamb_dev through the xp65 project, with ilamb installed. You can load and activate it via:

    >>> module use /g/data/xp65/public/modules\n>>> module load conda/ilamb_dev\n>>> conda activate ilamb_dev\n

    We will soon add ilamb to the ACCESS-NRI MED conda environment, access-med, under project xp65.

    If you want to install ilamb yourself, please follow the official installation instructions at https://www.ilamb.org/doc/install.html.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#3-configuring-ilamb","title":"3. Configuring ilamb","text":"

    Before you can run ilamb, you need to configure a few things:

    3.1. Organise the ILAMB_ROOT path 3.2. Set up a config file 3.3. Set up modelroute and regions files (optional, if you want to run only a subset of models and/or specific regions of the world) 3.4. Download shapefiles locally (optional online, necessary offline, e.g. on NCI compute nodes)

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#31-organise-the-ilamb_root-path","title":"3.1 Organise the ILAMB_ROOT path","text":"

    ilamb requires files to be organised in a specific directory structure of DATA and MODELS.

    If you do not have your own files yet, you can download and use the example files provided as part of ilamb's First Steps Tutorial.

    The following tree represents the organization of the contents of this extracted sample data (note: we renamed the main directory):

    $ILAMB_ROOT (renamed from \"ILAMB_sample\")\n\u251c\u2500\u2500 sample.cfg (see [Section 3.2](#32-set-up-a-config-file))\n\u251c\u2500\u2500 modelroute.txt (optional, see [Section 3.3](#33-set-up-modelroute-and-regions-files))\n\u251c\u2500\u2500 regions.txt (optional, see [Section 3.3](#33-set-up-modelroute-and-regions-files))\n\u251c\u2500\u2500 DATA\n\u2502   \u251c\u2500\u2500 albedo\n\u2502   \u2502   \u2514\u2500\u2500 CERES\n\u2502   \u2502       \u2514\u2500\u2500 albedo_0.5x0.5.nc\n\u2502   \u2514\u2500\u2500 rsus\n\u2502       \u2514\u2500\u2500 CERES\n\u2502           \u2514\u2500\u2500 rsus_0.5x0.5.nc\n\u2514\u2500\u2500 MODELS\n    \u2514\u2500\u2500 CLM40cn\n        \u251c\u2500\u2500 rsds\n        \u2502   \u2514\u2500\u2500 rsds_Amon_CLM40cn_historical_r1i1p1_185001-201012.nc\n        \u2514\u2500\u2500 rsus\n            \u2514\u2500\u2500 rsus_Amon_CLM40cn_historical_r1i1p1_185001-201012.nc\n

    There are two main branches in this directory. The first is the DATA directory\u2013this is where we keep the observational datasets, each in a subdirectory bearing the name of the variable. While not strictly necessary to follow this form, it is a convenient convention. The second branch is the MODELS directory, in which we see a single model result from CLM.
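    If you are following along with the sample data, the renaming might look like this minimal sketch (the archive name ILAMB_sample.tgz is an assumption; use whatever file the First Steps Tutorial link gives you):

    # extract the sample archive (hypothetical file name) and rename it\ntar -xzf ILAMB_sample.tgz\nmv ILAMB_sample ILAMB_ROOT\nexport ILAMB_ROOT=$PWD/ILAMB_ROOT\n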

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#311-add-files-to-data","title":"3.1.1 Add files to DATA","text":"

    There is a lot of DATA available to add. Take a look at https://www.ilamb.org/ILAMB-Data/ and https://www.ilamb.org/IOMB-Data/ for extensive lists for ILAMB-Data (land modelling) and IOMB-Data (ocean modelling).

    ilamb has a command-line tool to add new DATA in a substructure. To fetch all available DATA from the website, simply go to your $ILAMB_ROOT and type

    >>> ilamb-fetch\n

    The command will compare the above website with your current DATA and make suggestions for downloads:

    Comparing remote location:\n\n      https://www.ilamb.org/ILAMB-Data/\n\nTo local location:\n\n      ./\n\nI found the following files which are missing, out of date, or corrupt:\n\n      .//DATA/twsa/GRACE/twsa_0.5x0.5.nc\n      .//DATA/rlus/CERES/rlus_0.5x0.5.nc\n      ... (we have abbreviated the extensive list here)\n\nDownload replacements? [y/n]\n

    You can use ilamb-fetch -h to see how you can adjust the remote and local locations. If you want to download IOMB-Data, you can for example use

    ilamb-fetch --remote_root https://www.ilamb.org/IOMB-Data/\n

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#312-add-files-to-model","title":"3.1.2 Add files to MODEL","text":"

    If you want to add your own MODEL to ilamb, you can do so by following this description.

    For NCI users, our ilamb_dev conda environment already provides all observational datasets from the official ilamb website, as well as the ACCESS-ESM1_5 model results, for users at $ILAMB_ROOT. Stay tuned for more observational and model data, or tell us which ones we should definitely add.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#32-set-up-a-config-file","title":"3.2 Set up a config file","text":"

    Now that we have the data, we need to set up a config file which the ilamb package will use to initiate a benchmark study.

    ilamb provides default config files here.

    Below we explain which variables you can define, but start by showing you the minimal setup from the tutorial's sample.cfg file:

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#minimum-configure-file-with-a-direct-and-a-derived-variable","title":"Minimum configure file with a direct and a derived variable","text":"
    # This configure file specifies the variables\n\n[h1: Radiation and Energy Cycle]\n\n[h2: Surface Upward SW Radiation]\nvariable = \"rsus\"\n\n[CERES]\nsource   = \"DATA/rsus/CERES/rsus_0.5x0.5.nc\"\n\n[h2: Albedo]\nvariable = \"albedo\"\nderived  = \"rsus/rsds\"\n\n[CERES]\nsource   = \"DATA/albedo/CERES/albedo_0.5x0.5.nc\"\n

    In brief: this file allows you to create different header descriptions of the experiments (h1: top level for grouping of variables, h2: sub-level for each variable), but most importantly the variables that we will look into and their sources. In the example, rsus (Surface Upward Shortwave Radiation) and albedo are the variables used. The latter is actually derived from two variables by dividing the Surface Upward Shortwave Radiation by the Surface Downward Shortwave Radiation, derived = rsus/rsds. Finally, sources are defined as source with a text-font header without h1 or h2.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#changing-configure-file-variables","title":"Changing configure file variables","text":"

    This is the list of all the variables you can modify in the config file:

    source              = None\n#Full path to the observational dataset\n\ncmap                = \"jet\"\n#The colormap to use in rendering plots (default is 'jet')\n\nvariable            = None\n#Name of the variable to extract from the source dataset\n\nalternate_vars      = None\n#Other accepted variable names when extracting from models\n\nderived             = None\n#An algebraic expression which captures how the confrontation variable may be generated\n\nland                = False\n#Enable to force the masking of areas with no land (default is False)\n\nbgcolor             = \"#EDEDED\"\n#Background color\n\ntable_unit          = None\n#The unit to use when displaying output in tables on the HTML page\n\nplot_unit           = None\n#The unit to use when displaying output on plots on the HTML page\n\nspace_mean          = True\n#Disable to compute sums of the variable over space instead of mean values\n\nrelationships       = None\n#A list of confrontations with whose data we use to study relationships, the syntax is \"h2 tag/observational dataset\". You will see the relationship part in the output if you specify some relationship.\n\nctype               = None\n#Choose a specific Confrontation class\n\nregions             = None\n#Specify the regions of confrontation\n\nskip_rmse           = False\n#Enable to skip the RMSE calculation\n\nskip_iav            = True\n#Disable to include the interannual variability (IAV) calculation\n\nmass_weighting      = False\n#If set to True, normalise using the period mean of the data\n\nweight              = 1    \n# if a dataset has no weight specified, it is implicitly 1\n

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#33-set-up-modelroute-and-regions-files","title":"3.3 Set up modelroute and regions files","text":"

    If you plan to run only a specific subset of models, you can already define them in a modelroute.txt file. It could look like our specific example for running different realisations (r1i1p1f1, r2i1p1f1, and r3i1p1f1) of the ACCESS-ESM1.5 suite:

    # Model Name                    , PATH/TO/MODELS  , EXTRA COMMANDS\nACCESS_ESM1-5-r1i1p1f1          , MODELS/r1i1p1f1 , CMIP6\nACCESS_ESM1-5-r2i1p1f1          , MODELS/r2i1p1f1 , CMIP6\nACCESS_ESM1-5-r3i1p1f1          , MODELS/r3i1p1f1 , CMIP6\n... (abbreviated)\n
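    Similarly, a regions.txt file lets you restrict the analysis to named regions defined by latitude/longitude bounds. A hypothetical sketch (the exact column format is documented in the ilamb tutorial on custom regions, so please check it there):

    #label, name     , lat_min, lat_max, lon_min, lon_max\naust  , Australia, -45.0  , -10.0  , 110.0  , 155.0\n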
    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#34-downloadlink-shapefiles-files-locally","title":"3.4 Download/link shapefiles files locally","text":"

    You can download the shapefiles that are needed to run ilamb and cartopy offline here:

    • For Land: https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/physical/ne_110m_land.zip
    • For Ocean: https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/physical/ne_110m_ocean.zip

    Finally, you need to define that path as CARTOPY_DATA_DIR via

    export CARTOPY_DATA_DIR=/absolute/path/to/shapefiles/directory\n

    Note: For NCI, we already provide shapefiles in a directory as part of project xp65. After joining the project, you can thus easily use

    export CARTOPY_DATA_DIR=/g/data/xp65/public/apps/cartopy-data\n

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#4-run-ilamb","title":"4. Run ilamb","text":""},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#41-ilamb-run","title":"4.1 ilamb-run","text":"

    Now that we have the configuration file set up, you can run the study using the ilamb-run script. Executing the following command in the $ILAMB_ROOT directory will run ilamb as specified in your sample.cfg for all models under the model_root and all regions (global) of the world:

    ilamb-run --config sample.cfg --model_root $ILAMB_ROOT/MODELS/ --regions global\n
    If you are on some institutional resource, you may need to launch the above command using a submission script, or request an interactive node. As the script runs, it will yield output which resembles the following:
    Searching for model results in /Users/ncf/sandbox/ILAMB_sample/MODELS/\n\n                                          CLM40cn\n\nParsing config file sample.cfg...\n\n                   SurfaceUpwardSWRadiation/CERES Initialized\n                                     Albedo/CERES Initialized\n\nRunning model-confrontation pairs...\n\n                   SurfaceUpwardSWRadiation/CERES CLM40cn              Completed  37.3 s\n                                     Albedo/CERES CLM40cn              Completed  44.7 s\n\nFinishing post-processing which requires collectives...\n\n                   SurfaceUpwardSWRadiation/CERES CLM40cn              Completed   3.3 s\n                                     Albedo/CERES CLM40cn              Completed   3.3 s\n\nCompleted in  91.8 s\n
    What happened here? First, the script looks for model results in the directory you specified in the --model_root option. It will treat each subdirectory of the specified directory as a separate model result. Here since we only have one such directory, CLM40cn, it found that and set it up as a model in the system. Next it parsed the configure file we examined earlier. We see that it found the CERES data source for both variables as we specified it. If the source data was not found or some other problem was encountered, the green Initialized will appear as red text which explains what the problem was (most likely MisplacedData). If you encounter this error, make sure that ILAMB_ROOT is set correctly and that the data really is in the paths you specified in the configure file.

    Next we ran all model-confrontation pairs. In our parlance, a confrontation is a benchmark observational dataset and its accompanying analysis. We have two confrontations specified in our configure file and one model, so we have two entries here. If the analysis completed without error, you will see a green Completed text appear along with the runtime. Here we see that albedo took a few seconds longer than rsus, presumably because we had the additional burden of reading in two datasets and combining them.

    The next stage is the post-processing. This is done as a separate loop to exploit some parallelism. All the work in a model-confrontation pair is purely local to the pair. Yet plotting results on the same scale implies that we know the maximum and minimum values from all models, and thus requires the communication of this information. Here, as we are plotting only over the globe and not extra regions, the plotting occurs quickly.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#42-run-specific-models-and-regions","title":"4.2 Run specific models and regions","text":"

    As mentioned in Section 3.3, you can adjust the models and regions that ilamb will run on. You can find more information in the ilamb tutorial. Calling ilamb-run with both specifications would look like:

    ilamb-run --config cmip.cfg --model_setup modelroute.txt --regions regions.txt\n
    where you call a specific config file (see Section 3.2) as well as specific model routes and regions with files (see Section 3.3).

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#43-viewing-the-benchmarking-output-in-your-browser","title":"4.3 Viewing the benchmarking output in your browser","text":"

    The whole process generates a directory of results within ILAMB_ROOT which by default is called _build. To view the results locally on your computer, navigate into this directory and start a local http server:

    python -m http.server\n
    You should see a message similar to this (or use http://0.0.0.0:8000/):
    Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...\n
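    If port 8000 is already taken on your machine, you can pass another port number, which is a standard feature of Python's http.server:

    python -m http.server 8888\n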
    Open this link in your browser and you will see a webpage with a summary table in the center. As we have so few variables and a single model at this point, the table will be very simple:

    As we add more variables and models, this summary table helps you understand relative differences in scores among models. For now, clicking on a row of the table will expand it to reveal the underlying datasets used. Clicking on CERES will take you to another page which presents detailed scores and plots.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#5-run-ilamb-on-nci","title":"5. Run ilamb on NCI","text":"

    If you followed the guides above, you should be familiar with how to set up ilamb.

    To run ilamb on NCI, you can either start an interactive setup Section 5.1 or use a non-interactive Portable Batch System (PBS) job Section 5.2.

    In both cases, you need to again define the variable $ILAMB_ROOT.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#51-ilamb_root-and-datamodel-on-nci","title":"5.1 ILAMB_ROOT and DATA/MODEL on NCI","text":"

    In our default setup, we will place ILAMB_ROOT and the shapefiles for cartopy directly in the home directory. First, you have to create the ILAMB_ROOT directory

    mkdir $PWD/ILAMB_ROOT\n
    You can then simply export their paths after login as:
    export ILAMB_ROOT=$PWD/ILAMB_ROOT\nexport CARTOPY_DATA_DIR=/g/data/xp65/public/apps/cartopy-data\n

    You can of course change the path of the directory, but will need to take this into account for the PBS job by adding a command to change into the $ILAMB_ROOT directory (see PBS setup comments).

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#ilamb_rootdata-on-nci","title":"ILAMB_ROOT/DATA on NCI","text":"

    An extensive collection of DATA is provided in the kj13 project. You need to have joined the project on NCI to get access to this data.

    To create a symbolic link to this data, use the bash command

    ln -s /g/data/kj13/datasets/ilamb/DATA $ILAMB_ROOT/DATA\n

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#ilmab_rootmodel-on-nci","title":"ILMAB_ROOT/MODEL on NCI","text":"

    In the future, we will provide a symbolic link to a MODEL catalog for you as well.

    For now, you will need to create the directory $ILAMB_ROOT/MODELS and populate it with symbolic links to specific models yourself.

    In our example, we will use ACCESS-ESM1.5, which is provided on NCI as part of project fs38. You need to have joined the project on NCI to get access to this data.

    To link the ACCESS-ESM1.5 suite files into your $ILAMB_ROOT/MODELS, simply execute the following bash commands

    mkdir $ILAMB_ROOT/MODELS\nln -s /g/data/fs38/publications/CMIP6/CMIP/CSIRO/ACCESS-ESM1-5/historical/r* $ILAMB_ROOT/MODELS\n

    By the end of Sections 5.1.1 and 5.1.2, your $ILAMB_ROOT directory should look similar to

    $ILAMB_ROOT\n\u251c\u2500\u2500 ...\n\u251c\u2500\u2500 DATA -> /g/data/kj13/datasets/ilamb/DATA\n\u2514\u2500\u2500 MODELS\n    \u251c\u2500\u2500 r10i1p1f1 -> /g/data/fs38/publications/CMIP6/CMIP/CSIRO/ACCESS-ESM1-5/historical/r10i1p1f1\n    \u251c\u2500\u2500 ... (abbreviated)\n    \u2514\u2500\u2500 r9i1p1f1 -> /g/data/fs38/publications/CMIP6/CMIP/CSIRO/ACCESS-ESM1-5/historical/r9i1p1f1\n

    These different models have a lot of subdirectories, which are important to keep in mind when defining the source parameter in your ilamb .cfg file. Note that the ilamb files will end with .nc. For example, one of the rsus files for run r1i1p1f1 can be found (and used for source in your .cfg) under

    source = /g/data/fs38/publications/CMIP6/CMIP/CSIRO/ACCESS-ESM1-5/historical/r1i1p1f1/Amon/rsus/gn/files/d20191115/rsus_Amon_ACCESS-ESM1-5_historical_r1i1p1f1_gn_185001-201412.nc\n
    or shorter
    source = $ILAMB_ROOT/MODELS/r1i1p1f1/Amon/rsus/gn/files/d20191115/rsus_Amon_ACCESS-ESM1-5_historical_r1i1p1f1_gn_185001-201412.nc\n

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#52-portable-batch-system-pbs-jobs-on-nci","title":"5.2 Portable Batch System (PBS) jobs on NCI","text":"

    The following default PBS file, let's call it ilamb_test.sh, can help you set up your own, while making sure to use the correct project (#PBS -P) to charge your computing cost to:

    #!/bin/bash\n\n#PBS -N default_ilamb\n#PBS -P tm70\n#PBS -q normalbw\n#PBS -l ncpus=1\n#PBS -l mem=32GB           \n#PBS -l jobfs=10GB        \n#PBS -l walltime=00:10:00  \n#PBS -l storage=gdata/xp65+gdata/kj13+gdata/fs38\n#PBS -l wd\n\nmodule use /g/data/xp65/public/modules\nmodule load conda/access-med-0.1\n\nexport ILAMB_ROOT=$PWD/ILAMB_ROOT\nexport CARTOPY_DATA_DIR=/g/data/xp65/public/apps/cartopy-data\n\nilamb-run --config cmip.cfg --model_setup $PWD/modelroute.txt --regions global\n

    If you are not familiar with PBS jobs on NCI, you can find the guide here. In brief: this PBS script (which you can submit via the bash command qsub ilamb_test.sh) will submit a job to Gadi with the job name (#PBS -N) default_ilamb under project (#PBS -P) tm70 in the normalbw queue (#PBS -q normalbw), for 1 CPU (#PBS -l ncpus=1) with 32 GB RAM (#PBS -l mem=32GB), with a walltime of 10 minutes (#PBS -l walltime=00:10:00) and access to 10 GB local disk space (#PBS -l jobfs=10GB), as well as data storage access to projects xp65, kj13, and fs38 (again, note that you have to be a member of these projects on NCI). Upon starting, the job will change into the working directory that you submitted it from (#PBS -l wd) and load the access-med conda environment. Finally, it will export the $ILAMB_ROOT and $CARTOPY_DATA_DIR paths and start an ilamb-run.

    In our example, we actually run the cmip.cfg file from the ilamb config file GitHub repository.

    Note: If your ILAMB_ROOT and CARTOPY_DATA_DIR are not in the directory from where you submitted the job, then you need to adjust the export commands to their paths:

    export ILAMB_ROOT=/absolute/path/where/ILAMB_ROOT/actually/is\nexport CARTOPY_DATA_DIR=/absolute/path/where/shapefiles/actually/are\n

    Once the jobs are finished, you can again inspect the outcome as described in Section 4.3.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#6-fix-your-setup-with-ilamb_doctor","title":"6. Fix your setup with ilamb_doctor","text":"

    ilamb-doctor is a script you can use to diagnose missing model values, or what is incorrect or missing from a given analysis. It takes options similar to ilamb-run and is used in the following way:

    [ILAMB/test]$ ilamb-doctor --config test.cfg --model_root ${ILAMB_ROOT}/MODELS/CLM\n\nSearching for model results in /Users/ncf/ILAMB//MODELS/CLM\n\n                               CLM40n16r228\n                               CLM45n16r228\n                               CLM50n18r229\n\nWe will now look in each model for the variables in the ILAMB configure file you specified (test.cfg). The color green is used to reflect which variables were found in the model. The color red is used to reflect that a model is missing a required variable.\n\n                       Biomass/GlobalCarbon CLM40n16r228 biomass or cVeg\n                                ... (abbreviated)\n                        Precipitation/GPCP2 CLM50n18r229 pr\n

    Here we have run the command on some inputs in our test directory. You will see a list of the confrontations we run and the variables which are required, or their synonyms. What is missing in this tutorial is the text coloring, which indicates whether a given model has the required variables.

    This concludes the introduction to basic ilamb usage. We believe you now have some understanding of ilamb and can't wait to use it. If you still have any questions or want some developer-level support, you can find more detail in the official ilamb tutorial.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_pangeo_cosima/","title":"Tutorial for using pangeo & COSIMA on Gadi@NCI","text":"

    https://pangeo.io

    "},{"location":"models/","title":"ACCESS Models","text":"

    ACCESS is a family of related computer models or components that represent different parts of the Earth system. ACCESS links various model components through software called couplers to form different Model Configurations.

    "},{"location":"models/#access-model-components","title":"ACCESS Model Components","text":"Atmosphere Land Ocean Sea Ice Aerosols Atmospheric Chemistry Biogeochemistry Land Biogeochemistry Ocean Coupler"},{"location":"models/#access-model-configurations","title":"ACCESS Model Configurations","text":"ACCESS-CM ACCESS Coupled Model (CM) produces physical climate simulations by deploying the atmosphere, ocean, and sea-ice components. ACCESS-CM features improved fluid dynamics and a microphysical aerosol scheme. ACCESS-ESM ACCESS Earth System Model (ESM) simulates the carbon and other bio-chemical cycles within the climate system, by deploying the atmosphere, ocean, and sea-ice components. ACCESS-ESM is one of the two ACCESS global coupled model versions. ACCESS-OM ACCESS Ocean Model (OM) deploys the ocean and sea-ice components to provide the Australian climate community with ocean weather and climate data, including seasonal forecasting, climate variability, downscaling of climate in the marine environment around Australia, and ocean biogeochemistry."},{"location":"models/configurations/","title":"ACCESS Configurations","text":"

    The Configurations section is still in development.

    What to expect in the next few months?

    The ACCESS-NRI will release model configurations and experiments to the community, including a reference model output for each experiment.

    The model configurations and experiments will be documented in this section as they are released by the ACCESS-NRI. The information will cover topics such as where to find a configuration or experiment, how to use a configuration to run your own experiment, where to find the data produced by a released experiment.

    "},{"location":"models/configurations/access-am/","title":"ACCESS-AM {{ supported }}","text":"

    The ACCESS-AM model is a coupled model of the atmosphere and the land. The atmospheric model component is the UM, which by default comes coupled to the JULES land model; that is why the first configurations and experiments released for ACCESS-AM will be UM-JULES configurations. However, ACCESS-NRI is working to ensure subsequent releases of ACCESS-AM use the CABLE land model instead.

    "},{"location":"models/configurations/access-am/#getting-started-information","title":"Getting started information","text":"

    On this page, you will find information on how to gain access to the UM model and start using the model. You will also find links to various configurations and experiments you can use as a basis to design your experiment.

    "},{"location":"models/configurations/access-am/#configurations","title":"Configurations","text":""},{"location":"models/configurations/access-am/#experiments","title":"Experiments","text":"

    Some experiments already run with the UM are listed on:

    • CLEX CMS wiki
    "},{"location":"models/configurations/access-cm/","title":"ACCESS-CM {{ supported }}","text":"ACCESS-CM2 (ACCESS Coupled Model 2) is a global fully-coupled climate model that includes the atmosphere, ocean and sea-ice components, and produces physical climate simulations. ACCESS-CM2 is one of the two models run by the Australian climate community for the Coupled Model Intercomparison Project, CMIP."},{"location":"models/configurations/access-cm/#access-cm2-configurations","title":"ACCESS-CM2 configurations","text":"
    • Atmosphere model (UM10.6): N96 resolution (1.875\u00b0 x 1.25\u00b0, 85 levels). Physical model only \u2013 no carbon cycle.

    • Land surface model (CABLE2.5)

    • Ocean model (MOM5): Tripolar grid, 1\u00b0 resolution, 50 levels.

    • Sea ice model (CICE5.1)

      COMPONENT MODEL VERSION Atmosphere UM 10.6 Land Surface CABLE 2.5 (integrated in UM) Ocean MOM 5 Sea Ice CICE 5.1 Coupler OASIS-MCT 3

    ACCESS-NRI will release an ACCESS-CM model configuration. The first release of ACCESS-CM will be derived from the CSIRO ACCESS-CM2 configuration and will include atmosphere, land, ocean and sea ice components.

    "},{"location":"models/configurations/access-cm/#access-cm2-recommended","title":"ACCESS-CM2 {{ recommended }}","text":"

    Citation 1 | Tutorial

    ACCESS-CM2 1 is one of Australia\u2019s contributions to the World Climate Research Programme\u2019s Coupled Model Intercomparison Project Phase 6 (CMIP6). The component models are: UM10.6 GA7.1 for the atmosphere, CABLE2.5 for the land surface, MOM5 for the ocean, and CICE5.1.2 for the sea ice. Compared to previous model versions ACCESS-CM2 shows better global hydrological balance, more realistic ocean water properties (in terms of spatial distribution) and meridional overturning circulation in the Southern Ocean but a poorer simulation of the Antarctic sea ice and a larger energy imbalance at the top of atmosphere. This energy imbalance reflects a noticeable warming trend of the global ocean over the spin-up period.

    1. Daohua Bi, Martin Dix, Simon Marsland, Siobhan O'Farrell, Arnold Sullivan, Roger Bodman, Rachel Law, Ian Harman, Jhan Srbinovsky, Harun A Rashid, Peter Dobrohotoff, Chloe Mackallah, Hailin Yan, Anthony Hirst, Abhishek Savita, Fabio Boeira Dias, Matthew Woodhouse, Russell Fiedler, and Aidan Heerdegen. Configuration and spin-up of ACCESS-CM2, the new generation Australian Community Climate and Earth System Simulator coupled model. Journal of Southern Hemisphere Earth Systems Science, 70(1):225\u2013251, 2020.\u00a0\u21a9\u21a9

    "},{"location":"models/configurations/access-esm/","title":"ACCESS-ESM {{ supported }}","text":"

    ACCESS-ESM stands for ACCESS Earth System Model. An Earth system model is a fully-coupled model that includes carbon cycle components.

    ACCESS-NRI will release an ACCESS-ESM model configuration. The first release of ACCESS-ESM will be derived from the CSIRO ACCESS-ESM1.5 configuration and will include atmosphere, land and land biogeochemistry, ocean and ocean biogeochemistry, and sea ice components.

    "},{"location":"models/configurations/access-esm/#access-esm15-recommended","title":"ACCESS-ESM1.5 {{ recommended }}","text":"

    Citation 1

    ACCESS Training Workshop (AMOS 2021)

    Webinar: Getting Started with ACCESS-CM2 and ACCESS-ESM1.5

    ACCESS-ESM1.5 1 is a fully-coupled climate model with land and ocean carbon cycle components. ACCESS-ESM1.5 has mainly been developed to enable Australia to participate in the Coupled Model Intercomparison Project Phase 6 (CMIP6) with an ESM version. An assessment of the climate response to CO2 forcing indicates that ACCESS-ESM1.5 has an equilibrium climate sensitivity of 3.87\u00b0C.

    1. Tilo Ziehn, Matthew A Chamberlain, Rachel M Law, Andrew Lenton, Roger W Bodman, Martin Dix, Lauren Stevens, Ying-Ping Wang, and Jhan Srbinovsky. The Australian Earth System Model: ACCESS-ESM1.5. Journal of Southern Hemisphere Earth Systems Science, 70(1):193\u2013214, 2020.\u00a0\u21a9\u21a9

    "},{"location":"models/configurations/access-om/","title":"ACCESS-OM {{ supported }}","text":"

    The ACCESS Ocean Model, ACCESS-OM, is a global coupled ocean and sea ice configuration. It couples the ocean and sea ice components via a coupler. The atmospheric fields that drive the model are provided by a data product, usually derived from reanalysis.

    ACCESS-NRI will release supported ACCESS-OM configurations. The first release will be derived from the COSIMA ACCESS-OM2 suite and will include ocean and sea ice components.

    "},{"location":"models/configurations/access-om/#access-om2-recommended","title":"ACCESS-OM2 {{ recommended }}","text":"

    Citation 1 | Documentation

    ACCESS-OM2 1 is a suite of coupled ocean-sea ice models developed by the Consortium for Ocean-Sea Ice Modelling in Australia (COSIMA). All models use the MOM5 ocean model coupled to the CICE5 sea ice model via an OASIS3-MCT coupler.

    The models in the ACCESS-OM2 suite differ by their grid spatial resolution:

    • ACCESS-OM2 at 1\u00b0 with 50 vertical levels
    • ACCESS-OM2-025 at 0.25\u00b0 with 50 vertical levels
    • ACCESS-OM2-01 at 0.1\u00b0 with 75 vertical levels
    1. A. E. Kiss, A. McC. Hogg, N. Hannah, F. Boeira Dias, G. B. Brassington, M. A. Chamberlain, C. Chapman, P. Dobrohotoff, C. M. Domingues, E. R. Duran, M. H. England, R. Fiedler, S. M. Griffies, A. Heerdegen, P. Heil, R. M. Holmes, A. Klocker, S. J. Marsland, A. K. Morrison, J. Munroe, M. Nikurashin, P. R. Oke, G. S. Pilo, O. Richet, A. Savita, P. Spence, K. D. Stewart, M. L. Ward, F. Wu, and X. Zhang. ACCESS-OM2 v1.0: a global ocean\u2013sea ice model at three resolutions. Geoscientific Model Development, 13(2):401\u2013442, 2020. URL: https://gmd.copernicus.org/articles/13/401/2020/, doi:10.5194/gmd-13-401-2020.\u00a0\u21a9\u21a9

    "},{"location":"models/configurations/access-s/","title":"ACCESS-S {{ community }}","text":"

    ACCESS-S is the Bureau of Meteorology's climate modelling system used for seasonal forecasting.

    This coupled model uses a different set of model components than the other ACCESS models:

    • UM for the atmosphere
    • JULES for the land
    • NEMO for the ocean
    • CICE for the sea-ice
    • OASIS3-MCT for the coupler
    "},{"location":"models/model_components/","title":"Model Components","text":"

    ACCESS components represent different chemical, physical or biological parts of the Earth System.

    Atmosphere Land Ocean Sea Ice Aerosols Atmospheric Chemistry Biogeochemistry Land Biogeochemistry Ocean Coupler

    Most model components have originated from collaborations with international research groups. These include:

    • UK Met Office \u2192 Unified Model (UM) atmospheric component.
    • NOAA Geophysical Fluid Dynamics Laboratory (GFDL) \u2192 Modular Ocean Model (MOM) component.
    • Los Alamos National Laboratory (LANL) \u2192 Sea ice model (CICE) component.
    • Centre Europ\u00e9en de Recherche et de Formation Avanc\u00e9e en Calcul Scientifique (CERFACS) \u2192 Ocean Atmosphere Sea Ice Soil coupler interfaced with the Model Coupling Toolkit (OASIS3-MCT).
    • United Kingdom Chemistry and Aerosols (UKCA) \u2192 UK community atmospheric Chemistry-Aerosol (UKCA) model.
    • CSIRO, COSIMA and CLEX \u2192 Community Atmosphere Biosphere Land Exchange (CABLE), World Ocean Model of Biogeochemistry And Trophic-dynamics (WOMBAT) and land biogeochemistry Carnegie-Ames-Stanford Approach (CASA) models. These models have been developed in Australia.
    "},{"location":"models/model_components/aerosols_atmospheric_chemistry/","title":"Aerosol and Atmospheric Chemistry Components","text":""},{"location":"models/model_components/aerosols_atmospheric_chemistry/#ukca-supported","title":"UKCA {{ supported }}","text":"

    The UK Chemistry-Aerosol model (UKCA) is a community atmospheric chemistry-aerosol global model developed in the United Kingdom. It is suitable for a range of topics in climate and environmental change research.

    "},{"location":"models/model_components/aerosols_atmospheric_chemistry/#how-is-ukca-used","title":"How is UKCA used?","text":"

    The UKCA chemistry model is enabled in ACCESS-CM2-Chem.

    "},{"location":"models/model_components/aerosols_atmospheric_chemistry/#glomap-supported","title":"GLOMAP {{ supported }}","text":"

    UKCA contains an aerosol scheme, the GLObal Model of Aerosol Processes (GLOMAP), that can be used independently. The multi-component, multi-modal GLOMAP model simulates aerosol number, size and the concentrations of individual components such as sulphate, sea salt and different types of carbon.

    "},{"location":"models/model_components/aerosols_atmospheric_chemistry/#how-is-glomap-used","title":"How is GLOMAP used?","text":"

    GLOMAP is used in ACCESS-CM2 and ACCESS-CM2-Chem.

    "},{"location":"models/model_components/atmosphere/","title":"Atmospheric Model Component","text":""},{"location":"models/model_components/atmosphere/#the-unified-model-um-supported","title":"The Unified Model (UM) {{ supported }}","text":"

    The Unified Model (UM) is a numerical model of the atmosphere used for both weather and climate applications, developed by the Met Office in the United Kingdom (UK). It includes solutions of the equations of atmospheric fluid dynamics with advanced parameterizations of subgrid-scale physical processes like convection, cloud formation and atmospheric radiation.

    The Unified Model gets its name because a single model is used across a wide range of both timescales (nowcasting to centennial) and spatial scales (sub km convective scale to global climate modelling).

    The UM is used by several international operational meteorology and research organizations, which contribute towards its development through the UM partnership.

    "},{"location":"models/model_components/atmosphere/#how-is-the-um-used","title":"How is the UM used?","text":"

    The UM Model component represents the atmosphere in many of the ACCESS Models used at regional and global scales.

    The ACCESS-CM2 climate model and ACCESS-ESM1.5 earth system model use versions of the UM as their atmospheric components.

    The Australian Bureau of Meteorology's operational global forecasting system, which has a 12 km spatial resolution, uses the Unified Model as part of ACCESS for:

    • Forecasting of extreme events and emergencies such as heatwaves, bushfires, cyclones, floods, coral bleaching, sea-level rise, coastal inundation and more.
    • Daily and seasonal weather forecasts
    "},{"location":"models/model_components/atmosphere/#useful-links","title":"Useful links","text":"

    STASH register: Metadata reference for the output variables.

    "},{"location":"models/model_components/bgc_land/","title":"Biogeochemistry Land","text":""},{"location":"models/model_components/bgc_land/#casa-cnp-supported","title":"CASA-CNP {{ supported }}","text":"

    CASA (Carnegie-Ames-Stanford Approach)-CNP (Carbon-Nitrogen-Phosphorus) is the land biogeochemistry model developed in CABLE.

    CASA-CNP models the dynamics of carbon pools and nitrogen and phosphorous limitations. It is directly coupled with the CABLE land surface model.

    "},{"location":"models/model_components/bgc_land/#how-is-casa-cnp-used","title":"How is CASA-CNP used?","text":"

    CASA-CNP is switched on to enable the carbon cycle in the ACCESS-ESM1.5 model.

    "},{"location":"models/model_components/bgc_ocean/","title":"Biogeochemistry Ocean","text":""},{"location":"models/model_components/bgc_ocean/#wombat-supported","title":"WOMBAT {{ supported }}","text":"

    WOMBAT (World Ocean Model of Biogeochemistry And Trophic-dynamics) is the ocean carbon model developed in Australia. It calculates the ocean carbon fluxes by simulating the evolution of phosphate, oxygen, dissolved inorganic carbon, alkalinity and iron with one class of phytoplankton and zooplankton.

    WOMBAT is a Nutrient, Phytoplankton, Zooplankton and Detritus (NPZD) model, with one zooplankton and one phytoplankton class.

    "},{"location":"models/model_components/bgc_ocean/#how-is-wombat-used","title":"How is WOMBAT used?","text":"

    WOMBAT is coupled to the MOM5 ocean model in the ACCESS-ESM1.5 and ACCESS-OM2 models.

    "},{"location":"models/model_components/coupler/","title":"Coupler {{ supported }}","text":"

    A coupler is a software package that enables different model components to run together and exchange information during a simulation.

    "},{"location":"models/model_components/coupler/#oasis3-mct-supported","title":"OASIS3-MCT {{ supported }}","text":"

    OASIS3-MCT consists of the OASIS coupler interfaced with the Model Coupling Toolkit (MCT) from the Argonne National Laboratory. OASIS3-MCT is the coupler used for:

    • ACCESS-ESM1.5
    • ACCESS-CM2
    • ACCESS-OM2
    • ACCESS-S1
    "},{"location":"models/model_components/coupler/#nuopc-interoperability-layer-recommended","title":"NUOPC interoperability layer {{ recommended }}","text":"

    The NUOPC Interoperability Layer is distributed via the Earth System Modeling Framework (ESMF). It is a coupler developed by the National Unified Operational Prediction Capability (NUOPC), a consortium of US Navy, NOAA and US Air Force modelers.

    "},{"location":"models/model_components/land/","title":"Land Model Components","text":""},{"location":"models/model_components/land/#cable-supported","title":"CABLE {{ supported }}","text":"

    Community Atmosphere Biosphere Land Exchange (CABLE) is a land surface model, used to calculate the fluxes of momentum, energy, water and carbon between the land surface and the atmosphere. It also models the main biogeochemical cycles of the land ecosystem when used in conjunction with the CASA-CNP module.

    "},{"location":"models/model_components/land/#how-is-cable-used","title":"How is CABLE used?","text":"

    CABLE can be run as a standalone model, for a single location, a region or globally. Coupled to the Met Office Unified Model (UM), CABLE provides the land surface component of the ACCESS Earth System Model (ACCESS-ESM) and ACCESS Coupled Model (ACCESS-CM).

    CABLE is an open source model developed by a community of Australian climate science researchers. Registration is required to access the CABLE code repository.

    "},{"location":"models/model_components/land/#jules-supported","title":"JULES {{ supported }}","text":"

    The Joint UK Land Environment System (JULES) is a community land surface model that can be used both as a standalone model and as the land surface component in the UM model. By modelling different land surface processes (surface energy balance, hydrological cycle, carbon cycle, dynamic vegetation, etc.) and their interaction with each other, JULES provides a framework to assess the impact of modifying a particular process on the ecosystem as a whole, e.g., the impact of climate change on hydrology.

    "},{"location":"models/model_components/ocean/","title":"Ocean Model Component","text":""},{"location":"models/model_components/ocean/#modular-ocean-model-mom-supported","title":"Modular Ocean Model (MOM) {{ supported }}","text":"

    The Modular Ocean Model (MOM) is one of the ocean components of the ACCESS climate model system. Used to simulate ocean currents at both regional and global scales, MOM is an invaluable tool for studying the global ocean climate system, as well as for regional and coastal applications.

    "},{"location":"models/model_components/ocean/#mom5-supported","title":"MOM5 {{ supported }}","text":"

    Source Code

    MOM5 is used in the supported ACCESS climate configurations.

    "},{"location":"models/model_components/ocean/#mom6-recommended","title":"MOM6 {{ recommended }}","text":"

    Source Code | Tutorials

    The most recent version, MOM6, is an open source development by a consortium of scientists across several government agencies and academic institutions with critical contributions provided by researchers worldwide.

    "},{"location":"models/model_components/sea-ice/","title":"Sea-Ice Model Component","text":""},{"location":"models/model_components/sea-ice/#cice-supported","title":"CICE {{ supported }}","text":"

    CICE is a numerical model for simulating the growth, melting and movement of polar sea ice. This software package was developed by researchers at the Los Alamos National Laboratory and is currently managed by the CICE Consortium, an international group of institutions formed to maintain and develop CICE in the public domain.

    CICE5 is the current version used in ACCESS model configurations.

    CICE6 is currently under development.

    "},{"location":"models/run-a-model/","title":"Running a Model","text":"

    Here we provide the information needed to run the different ACCESS models.

    If Model, Model Component or Model Configuration are not familiar terms for you, please check out our Model overview.

    If you have not run a model before, our Getting Started Guide will give you the basics to access the Model infrastructure on the high-performance-computing facility Gadi@NCI.

    Detailed guides for the different Model configurations can then be found on the following pages:
    • Run ACCESS-ESM for the ACCESS Earth System Model configurations
    • Run ACCESS-CM for the ACCESS Coupled Model configurations
    • Run ACCESS-AM for the ACCESS Atmosphere Model configurations
    • Run ACCESS-OM for the ACCESS Ocean Model configurations

    "},{"location":"models/run-a-model/run-access-cm/","title":"Running ACCESS-CM2 Model","text":"

    This section includes a step-by-step instruction set on how to run the ACCESS-CM2 suite.

    It is also designed as a future point of reference, where you can come back and find the section containing the information you need.

    "},{"location":"models/run-a-model/run-access-cm/#getting-started","title":"Getting Started","text":"
    • An institutional email address with an organisation that allows access to NCI (e.g., CSIRO, a university, etc.).
    • Access to NCI compute/storage.
    • A Linux/Mac/Unix computer with an internet connection and a command line terminal (e.g., macOS with XQuartz and command line tools installed, or PuTTY, Cygwin, MobaXterm or a similar X-Windows compatible program on a PC).
    "},{"location":"models/run-a-model/run-access-cm/#requirements-for-running-access-cm-suites","title":"Requirements for running ACCESS-CM suites","text":"

    Here, we assume that you already have access to Gadi, the supercomputer hosted by the National Computational Infrastructure (NCI). If needed, you can go back to our guide on how to get access to Gadi.

    "},{"location":"models/run-a-model/run-access-cm/#basic-setup","title":"Basic Setup","text":"

    To run an ACCESS-CM suite, new users will need to:

    • Join the ACCESS group. You can also find instructions on how to join a particular project through the NCI self-service portal.
    • Connect to accessdev to complete your setup once you have your NCI credentials and are a member of the ACCESS group. Note: At present, both accessdev and ARE run the models on Gadi. However, ARE only supports shorter-running suites (i.e., runs of less than 48 hours). Work is currently in progress to fully transition the cylc workflows from the accessdev virtual machine to ARE.
    • Additional steps relating to the communication between accessdev and Gadi may also be necessary.
    "},{"location":"models/run-a-model/run-access-cm/#uk-met-office-environment-on-nci","title":"UK Met Office Environment on NCI","text":"

    As components within the ACCESS-CM suites use the UK Met Office model code, the UK Met Office Environment is installed on NCI. This comprises the model software and tools as well as the cylc workflow system, rose suites, the Met Office MOSRS repository and our local replica repository. In order to check out and run ACCESS-CM2 suites on Gadi using Rose/Cylc, you need to have access to a number of repositories at the Met Office as well as the local replica and local software on NCI, which will require fulfilling these prerequisites.

    "},{"location":"models/run-a-model/run-access-cm/#met-office-science-repository-service-mosrs","title":"Met Office Science Repository Service (MOSRS)","text":"

    Met Office Science Repository Service (MOSRS) is a Trac server run by the UK Met Office for sharing code and configurations for the climate models it runs with partners. It contains the source code and configurations for the UM and JULES amongst other things.

    To apply for a MOSRS account, you should contact your local institutional sponsor.

    "},{"location":"models/run-a-model/run-access-cm/#preparing-to-run-an-access-cm-suite","title":"Preparing to run an ACCESS-CM suite","text":"

    At this stage, you should be able to connect to accessdev and Gadi.

    accessdev is a frontend system where you prepare ACCESS jobs and then submit them to Gadi (the supercomputer at NCI where ACCESS is run).

    "},{"location":"models/run-a-model/run-access-cm/#logging-in-to-gadi-and-accessdev","title":"Logging in to Gadi and accessdev","text":"

    To run an ACCESS-CM2 suite (i.e., job), you need to first login to Gadi with your username through a login node.

    ssh -Y username@gadi.nci.org.au

    Similarly, to login to accessdev:

    ssh -Y $USER@accessdev.nci.org.au

    Aliases and shortcuts can be created to simplify these commands by configuring SSH.
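    For example, a minimal sketch of a ~/.ssh/config entry on your local machine (the alias name gadi is arbitrary; substitute your own NCI username) that makes ssh gadi equivalent to ssh -Y <username>@gadi.nci.org.au:

    Host gadi
        HostName gadi.nci.org.au
        User <username>
        ForwardX11 yes
        ForwardX11Trusted yes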

    "},{"location":"models/run-a-model/run-access-cm/#copy-edit-and-run-an-access-cm2-suite","title":"Copy, Edit, and Run an ACCESS-CM2 suite","text":"ACCESS-CM2 is a set of submodels (eg. UM, MOM, CICE, CABLE, OASIS) with a range of model parameters, input data, and computer related information, that need to be packaged together as a suite in order to run. Each ACCESS-CM2 suite has an ID, in the format u-<suite-name>, with <suite-name> being a unique identifier (e.g. u-br565 is the CMIP6 release preindustrial experiment suite). Typically, an existing suite is copied and then edited as needed for a particular run."},{"location":"models/run-a-model/run-access-cm/#copy-access-cm2-suites-with-rosie","title":"Copy ACCESS-CM2 suites with Rosie","text":"Rosie is an SVN repository wrapper with a set of options to work with ACCESS-CM2 suites. To copy an existing suite, on accessdev:
    1. Run mosrs-auth to authenticate using your MOSRS credentials: mosrs-auth Please enter the MOSRS password for <MOSRS-username>: Successfully authenticated with MOSRS as <MOSRS-username>
    2. Run rosie checkout <suite-ID> to create a local copy of the <suite-ID> from the UKMO repository (used mostly for testing and examining existing suites): rosie checkout <suite-ID> [INFO] create: /home/565/<$USER>/roses [INFO] <suite-ID>: local copy created at /home/565/<$USER>/roses/<suite-ID> Alternatively, run rosie copy <suite-ID> to create a new full copy (local and remote in the UKMO repository) rather than just a local copy. When a new suite is created in this way, a new unique name is generated within the repository, and populated with some descriptive information about the suite along with all the initial configuration details: rosie copy <suite-ID> Copy \"<suite-ID>/trunk@<trunk-ID>\" to \"u-?????\"? [y or n (default)] y [INFO] <new-suite-ID>: created at https://code.metoffice.gov.uk/svn/roses-u/<suite-n/a/m/e/> [INFO] <new-suite-ID>: copied items from <suite-ID>/trunk@<trunk-ID> [INFO] <suite-ID>: local copy created at /home/565/<$USER>/roses/<new-suite-ID>
    For additional rosie options, run rosie help. The suites are created in the user's accessdev home directory, under ~/roses/<suite-ID>. The suite directory usually contains 2 subdirectories and 3 files:
    • app \u2192 directory containing the configuration files for the various tasks within the suite.
    • meta \u2192 directory containing the GUI metadata.
    • rose-suite.conf \u2192 the main suite configuration file.
    • rose-suite.info \u2192 suite information file.
    • suite.rc \u2192 the Cylc control script file (Jinja2 language).
    • ls ~/roses/<suite-ID> app meta rose-suite.conf rose-suite.info suite.rc
    "},{"location":"models/run-a-model/run-access-cm/#edit-an-access-cm2-suite-configuration-with-rose-gui","title":"Edit an ACCESS-CM2 suite configuration with Rose GUI","text":"Rose is a configuration editor which can be used to view, edit, or run an ACCESS-CM2 suite. To edit a suite configuration, on accessdev:
    1. Run rose edit & (the & is optional and keeps the terminal prompt active while the GUI runs as a separate process) from inside the relevant suite directory (e.g. ~/roses/<suite-ID>) to open the Rose GUI and inspect the suite information. cd ~/roses/<suite-ID> rose edit & [<N>] <PID>
    2. There are many settings that can be changed in a Rose GUI. However, there are a few that we definitely want to check and edit before we run a suite:
      • NCI Project To make sure we run the suite under the NCI project we belong to, we can navigate to suite conf \u2192 Machine and Runtime Options, edit the Compute project field, and click the Save button. (Check how to connect to a project if you have not joined one yet). If, for example, we belong to the tm70 Project (ACCESS-NRI), we will insert tm70 in the Compute project field:
      • Total Run length / Cycling frequency ACCESS-CM2 suites are often run in multiple steps, each of them constituting a cycle, with the job scheduler resubmitting the suite every chosen Cycling frequency, until the Total Run length is met. To modify these parameters, we can navigate to suite conf \u2192 Run Initialisation and Cycling, edit the respective fields, and click the Save button. The values are in the ISO 8601 Duration format. If, for example, we want to run the suite for a total of 50 Years, and resubmit every year, we will change Total Run length to P50Y and Cycling frequency to P1Y. Note that the current maximum Cycling frequency is 2 years:
      • Wallclock time The Wallclock time is the time requested by the PBS job to run a single cycle. If this time is not enough for the suite to end its cycle, our job will be terminated before the suite can complete the run. If we change the Cycling frequency, we might need to change the Wallclock time accordingly. The time needed for the suite to run a full cycle depends on several factors, but a good estimation can be 4 hours per simulated year. To modify the Wallclock time, we can navigate to suite conf \u2192 Run Initialisation and Cycling, edit the respective field, and click the Save button. The value is in the ISO 8601 Duration format (see the quick reference after this list).
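    As a quick reference for the ISO 8601 Duration format used in these fields (standard ISO 8601 semantics, not specific to these suites): P1Y is 1 year, P50Y is 50 years, P1Y6M is 1 year and 6 months, and PT4H is 4 hours. Note that the T separates date units from time units, so P4M means 4 months while PT4M means 4 minutes.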
    "},{"location":"models/run-a-model/run-access-cm/#run-an-access-cm2-suite","title":"Run an ACCESS-CM2 suite","text":"After completing all the modifications to the suite, we are ready to run it. ACCESS-CM2 suites run on Gadi through a PBS job submission. When the suite gets run, its configuration files are copied on Gadi under /scratch/$PROJECT/$USER/cylc-run/<suite-ID>, and a symbolic link to this folder is also created in the $USER's home directory under ~/cylc-run/<suite-ID>. An ACCESS-CM2 suite is constituted by several tasks (such as checking out code repositories, compiling and building the different model components, running the model, etc.). The workflow of these tasks is controlled by Cylc. Cylc (pronounced \u2018silk\u2019), is a workflow manager that automatically executes tasks according to the model main cycle script suite.rc. Cylc deals with how the job will be run and manages the time steps of each submodel, as well as monitoring all the tasks and reporting any error that might occur. To run an ACCESS-CM2 suite, on accessdev:
    1. Run rose suite-run, from inside the suite directory, to run the initial tasks.
    2. After these few small tasks get executed, the Cylc GUI will open up and you will be able to see and control all the different tasks in the suite as they are run.
    3. cd ~/roses/<suite-ID> rose suite-run [INFO] export CYLC_VERSION=7.8.3 [INFO] export ROSE_ORIG_HOST=accessdev.nci.org.au [INFO] export ROSE_SITE= [INFO] export ROSE_VERSION=2019.01.2 [INFO] create: /home/565/<$USER>/cylc-run/<suite-ID> [INFO] create: log.<timestamp> [INFO] symlink: log.<timestamp> <= log [INFO] create: log/suite [INFO] create: log/rose-conf [INFO] symlink: rose-conf/<timestamp>-run.conf <= log/rose-suite-run.conf [INFO] symlink: rose-conf/<timestamp>-run.version <= log/rose-suite-run.version [INFO] install: rose-suite.info \u2003\u2003\u2003\u2003source: /home/565/<$USER>/roses/<suite-ID>/rose-suite.info [INFO] create: app [INFO] install: app \u2003\u2003\u2003\u2003source: /home/565/<$USER>/roses/<suite-ID>/app [INFO] create: meta [INFO] install: meta \u2003\u2003\u2003\u2003source: /home/565/<$USER>/roses/<suite-ID>/meta [INFO] install: suite.rc [INFO] REGISTERED <suite-ID> -> /home/565/<$USER>/cylc-run/<suite-ID> [INFO] create: share [INFO] install: share [INFO] create: work [INFO] chdir: log/ [INFO] \u2003\u2003\u2003\u2003\u2003\u2003\u2009\u2009._. [INFO] \u2003\u2003\u2003\u2003\u2003\u2003\u2009\u2009| |\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003The Cylc Suite Engine [7.8.3] [INFO] ._____._. ._| |_____.\u2003\u2003\u2003\u2003\u2003\u2009Copyright (C) 2008-2019 NIWA [INFO] | .___| | | | | .___|\u2003& British Crown (Met Office) & Contributors. [INFO] | !___| !_! | | !___. _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [INFO] !_____!___. |_!_____! This program comes with ABSOLUTELY NO WARRANTY; [INFO] \u2003\u2003\u2003\u2009.___! | \u2003\u2003\u2003\u2003\u2003see `cylc warranty`. \u2009It is free software, you [INFO] \u2003\u2003\u2003\u2009!_____! \u2003\u2003\u2003\u2003\u2003\u2009are welcome to redistribute it under certain [INFO] [INFO] *** listening on https://accessdev.nci.org.au:<port>/ *** [INFO] [INFO] To view suite server program contact information: [INFO] $ cylc get-suite-contact <suite-ID> [INFO] [INFO] Other ways to see if the suite is still running: [INFO] $ cylc scan -n '<suite-ID>' accessdev.nci.org.au [INFO] $ cylc ping -v --host=accessdev.nci.org.au <suite-ID> [INFO] $ ps -opid,args <PID> # on accessdev.nci.org.au TO DO --> Add Rose GUI image
    You are done! If you don't get any errors, you will be able to check the suite output files after the run is complete. Note that, at this stage, it is possible to close the Cylc GUI. If you want to open it again, just run rose suite-gcontrol from inside the suite directory."},{"location":"models/run-a-model/run-access-cm/#check-for-errors","title":"Check for errors","text":"It is quite common, especially during the first few runs, to experience errors and job failures. An ACCESS-CM2 suite consists of several tasks, any of which could fail. When a task fails, the suite is halted and you will see a red icon next to the respective task name in the Cylc GUI. To investigate the cause of a failure, we need to look at the logs (job.err and job.out) from the suite run. There are two main ways to do so:
    • Using the Cylc GUI Right-click on the task that failed and click on View Job Logs (Viewer) \u2192 job.err or job.out. To access the specific task you might have to click on the arrow next to the task to expand the drop-down menu with all the sub-tasks.
    • In the ~/cylc-run/<suite-ID> directory The suite log directories are stored inside ~/cylc-run/<suite-ID> as log.<TIMESTAMP>, with the latest set of logs also symlinked in the ~/cylc-run/<suite-ID>/log directory. The logs for the main job are inside the ~/cylc-run/<suite-ID>/log/job directory. Logs are separated into simulation cycles by their starting dates, and then differentiated by task. They are then further separated into "attempts" (consecutive failed/successful tasks), with NN being a symlink to the most recent attempt. In our example, the failure occurred for the 09500101 simulation cycle (starting date on 1st January 950) in the coupled task. Therefore, the directory in which to find the job.err and job.out files is ~/cylc-run/<suite-ID>/log/job/09500101/coupled/NN. cd ~/cylc-run/<suite-ID> ls app cylc-suite.db log log.20230530T051952Z meta rose-suite.info share suite.rc suite.rc.processed work cd log ls db job rose.conf rose-suite-run.conf rose-suite-run.locs rose-suite-run.log rose-suite-run.version suite suiterc cd job ls 09500101 cd 09500101 ls coupled fcm_make2_um fcm_make_um install_warm make2_mom make_mom fcm_make2_drivers fcm_make_drivers install_ancil make2_cice make_cice cd coupled ls 01 02 03 NN cd NN ls job job-activity.log job.err job.out job.status (A condensed version of this navigation is sketched after this list.)
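    Condensing the walk-through above into two commands (using the same example cycle and task; substitute the paths for your own suite):

    cd ~/cylc-run/<suite-ID>/log/job/09500101/coupled/NN    # NN points to the most recent attempt
    less job.err    # then check job.out in the same way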
    "},{"location":"models/run-a-model/run-access-cm/#stop-restart-and-reload-suites","title":"Stop, restart and reload suites","text":"Sometimes, you may want to control the running state of a suite. If your Cylc GUI has been closed and you are unsure whether your suite is still running, you can scan for active suites and reopen the GUI if desired. To scan for active suites run cylc scan. To reopen the Cylc GUI there are 2 main ways:
    • run rose suite-gcontrol from inside the suite directory
    • OR
    • run gcylc <suite-ID>
    cylc scan <suite-ID> <$USER>@accessdev.nci.org.au:<port> cd ~/roses/<suite-ID> rose suite-gcontrol"},{"location":"models/run-a-model/run-access-cm/#stop-a-suite","title":"STOP a suite","text":"Run rose suite-stop -y, from inside the suite directory, to shut down a suite in a safe manner."},{"location":"models/run-a-model/run-access-cm/#restart-a-suite","title":"RESTART a suite","text":"There are two main ways to restart a suite:
    • 'SOFT' restart Run rose suite-run --restart, from inside the suite directory, to re-install the suite and reopen Cylc in the same state as when it was stopped (you may need to manually trigger failed tasks from the Cylc GUI). cylc cd ~/roses/<suite-ID> rose suite-run --restart [INFO] export CYLC_VERSION=7.8.3 [INFO] export ROSE_ORIG_HOST=accessdev.nci.org.au [INFO] export ROSE_SITE= [INFO] export ROSE_VERSION=2019.01.2 [INFO] delete: log/rose-suite-run.conf [INFO] symlink: rose-conf/<timestamp>-restart.conf <= log/rose-suite-run.conf [INFO] delete: log/rose-suite-run.version [INFO] symlink: rose-conf/<timestamp>-restart.version <= log/rose-suite-run.version [INFO] chdir: log/ [INFO] \u2003\u2003\u2003\u2003\u2003\u2003\u2009\u2009._. [INFO] \u2003\u2003\u2003\u2003\u2003\u2003\u2009\u2009| |\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003The Cylc Suite Engine [7.8.3] [INFO] ._____._. ._| |_____.\u2003\u2003\u2003\u2003\u2003\u2009Copyright (C) 2008-2019 NIWA [INFO] | .___| | | | | .___|\u2003& British Crown (Met Office) & Contributors. [INFO] | !___| !_! | | !___. _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [INFO] !_____!___. |_!_____! This program comes with ABSOLUTELY NO WARRANTY; [INFO] \u2003\u2003\u2003\u2009.___! | \u2003\u2003\u2003\u2003\u2003see `cylc warranty`. \u2009It is free software, you [INFO] \u2003\u2003\u2003\u2009!_____! \u2003\u2003\u2003\u2003\u2003\u2009are welcome to redistribute it under certain [INFO] [INFO] *** listening on https://accessdev.nci.org.au:<port>/ *** [INFO] [INFO] To view suite server program contact information: [INFO] $ cylc get-suite-contact <suite-ID> [INFO] [INFO] Other ways to see if the suite is still running: [INFO] $ cylc scan -n '<suite-ID>' accessdev.nci.org.au [INFO] $ cylc ping -v --host=accessdev.nci.org.au <suite-ID> [INFO] $ ps -opid,args <PID> # on accessdev.nci.org.au TO DO --> Add Rose GUI image
    • 'HARD' restart Run rose suite-run --new, from inside the suite directory, if you want to overwrite any previous runs of the suite and begin completely afresh (WARNING!! This will overwrite all existing model output and logs).
    "},{"location":"models/run-a-model/run-access-cm/#reload-a-suite","title":"RELOAD a suite","text":"In some cases the suite needs to be updated without necessarily having to stop it (e.g. after fixing a typo in a file). Updating an active suite is called a 'reload', where the suite is 're-installed' and Cylc is updated with the changes (this is similar to a 'soft' restart, but with the new changes installed, so you may need to manually trigger failed tasks from the Cylc GUI). To reload a suite run rose suite-run --reload from inside the suite directory."},{"location":"models/run-a-model/run-access-cm/#suite-output-files","title":"Suite output files","text":"All output files (as well as work files) are available on Gadi under /scratch/$PROJECT/$USER/cylc-run/<suite-ID> (also symlinked in ~/cylc-run/<suite-ID>). While the suite is running, files move between the share and the work directories. At the end of each cycle, model output data and restart files are moved to /scratch/$PROJECT/$USER/archive/<suite-name>. This directory contains 2 subdirectories:
    • history This is the directory where the model output data is found, separated for each model component:
      • atm \u2192 atmosphere (UM)
      • cpl \u2192 coupler (OASIS3-MCT)
      • ocn \u2192 ocean (MOM)
      • ice \u2192 ice (CICE)
      For the atmospheric output data, each file is usually a UM fieldsfile or netCDF file, formatted as <suite-name>a.p<output-stream-identifier><year><month-string>. In the case of the u-br565 suite we will have: cd /scratch/<$PROJECT>/<USER>/archive ls br565 <suite-name> <other-suite-name> cd br565 ls history restart ls history/atm br565a.pd0950apr.nc br565a.pd0950aug.nc br565a.pd0950dec.nc br565a.pd0950feb.nc br565a.pd0950jan.nc br565a.pd0950jul.nc br565a.pd0950jun.nc br565a.pd0950mar.nc br565a.pd0950may.nc br565a.pd0950nov.nc br565a.pd0950oct.nc br565a.pd0950sep.nc br565a.pd0951apr.nc br565a.pd0951aug.nc br565a.pd0951dec.nc br565a.pm0950apr.nc br565a.pm0950aug.nc br565a.pm0950dec.nc br565a.pm0950feb.nc br565a.pm0950jan.nc br565a.pm0950jul.nc br565a.pm0950jun.nc br565a.pm0950mar.nc br565a.pm0950may.nc br565a.pm0950nov.nc br565a.pm0950oct.nc br565a.pm0950sep.nc br565a.pm0951apr.nc br565a.pm0951aug.nc br565a.pm0951dec.nc
    • restart This is the directory where the restart dumps are found, subdivided for each model component (see history folder above). For the atmospheric restart files, each of them is a UM fieldsfile, formatted as <suite-name>a.da<year><month><day>_00. In the directory there are also some files formatted as <suite-name>a.xhist-<year><month><day> containing metadata information. In the case of the u-br565 suite we will have: ls /scratch/<$PROJECT>/<USER>/archive/br565/restart/atm br565a.da09500201_00 br565a.da09510101_00 br565.xhist-09500131 br565.xhist-09501231
    References
    • https://confluence.csiro.au/display/ACCESS/Using+CM2+suites+in+Rose+and+Cylc
    • https://confluence.csiro.au/display/ACCESS/Understanding+CM2+output
    • https://nespclimate.com.au/wp-content/uploads/2020/10/Instruction-document-Getting_started_with_ACCESS.pdf
    • https://code.metoffice.gov.uk/doc/um/latest/um-training/rose-gui.html
    "},{"location":"models/run-a-model/run-access-esm/","title":"Run ACCESS-ESM","text":""},{"location":"models/run-a-model/run-access-esm/#requirements","title":"Requirements","text":"Before running ACCESS-ESM, you need to make sure to possess the right tools and to have an account with specific institutions."},{"location":"models/run-a-model/run-access-esm/#general-requirements","title":"General requirements","text":"For the general requirements needed to run all ACCESS models, please refer to the Getting Started (TO DO check link) page."},{"location":"models/run-a-model/run-access-esm/#model-specific-requirements","title":"Model-specific requirements","text":"
    • Payu To get payu on Gadi, run:
      module use /g/data/hh5/public/modules\n            module load conda/analysis3\n        
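    To confirm that payu is available after loading the module, a quick sanity check (assuming the conda/analysis3 environment puts a payu executable on your PATH):

    which payu    # should print a path inside the loaded conda environment
    payu --help    # lists the available payu subcommands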
    "},{"location":"models/run-a-model/run-access-esm/#get-access-esm-configuration","title":"Get ACCESS-ESM configuration","text":"A suitable ACCESS-ESM pre-industrial configuration is avaible on the coecms GitHub. In order to get it, on Gadi, create a directory where to keep the model configuration, and clone the GitHub repo in it by running:
    git clone https://github.com/coecms/esm-pre-industrial
    mkdir -p ~/access-esm cd ~/access-esm git clone https://github.com/coecms/esm-pre-industrial Cloning into 'esm-pre-industrial'... remote: Enumerating objects: 767, done. remote: Counting objects: 100% (295/295), done. remote: Compressing objects: 100% (138/138), done. remote: Total 767 (delta 173), reused 274 (delta 157), pack-reused 472 Receiving objects: 100% (767/767), 461.57 KiB | 5.24 MiB/s, done. Resolving deltas: 100% (450/450), done. Note: Some modules might interfere with the git commands (for example matlab/R2018a). If you are running into issues during the cloning of the repository, it might be a good idea to run
    module purge
    first, before trying again."},{"location":"models/run-a-model/run-access-esm/#edit-access-esm-configuration","title":"Edit ACCESS-ESM configuration","text":"In order to modify an ACCESS-ESM configuration, it is worth understanding a bit more how its job scheduler payu works."},{"location":"models/run-a-model/run-access-esm/#payu","title":"Payu","text":"Payu is a workflow management tool for running numerical models in supercomputing environments. The general layout of a payu-supported model run consists of two main directories:
    • The laboratory is the directory where all parts of the model are kept. For ACCESS-ESM, it is typically /scratch/$PROJECT/$USER/access-esm.
    • The control directory, where the model configuration is kept and from where the model is run (in our case is the cloned directory ~/access-esm/esm-pre-industrial).
    This distinction of directories keeps the small configuration files separate from the larger binary outputs and inputs. In this way, we can place the configuration files in the $HOME directory (the only filesystem on Gadi that is actively backed up), without overloading it with too much data. Moreover, this separation allows running multiple self-resubmitting experiments simultaneously that might share common executables and input data. To proceed with the setup of the laboratory directory, from the control directory run:
    payu init
    This will create the laboratory directory, along with other subdirectories (depending on the configuration). The main subdirectories we are interested in are:
    • work \u2192 temporary directory where the model is actually run. It gets cleaned after each run.
    • archive \u2192 directory where the output is placed after each run.
    • cd ~/access-esm/esm-pre-industrial payu init laboratory path: /scratch/$PROJECT/$USER/access-esm binary path: /scratch/$PROJECT/$USER/access-esm/bin input path: /scratch/$PROJECT/$USER/access-esm/input work path: /scratch/$PROJECT/$USER/access-esm/work archive path: /scratch/$PROJECT/$USER/access-esm/archive
    "},{"location":"models/run-a-model/run-access-esm/#edit-the-master-configuration-file","title":"Edit the Master Configuration file","text":"The config.yaml file, located in the control directory, is the Master Configuration file. This file controls the general model configuration and if we open it in a text editor, we can see different parts:
    • PBS resources
      jobname: pre-industrial\n            queue: normal\n            walltime: 20:00:00\n        
      These are settings for the PBS scheduler. Edit lines in this section to change any of the PBS resources. For example, to run ACCESS-ESM under the tm70 project (TO DO add Getting started, join a NCI Project link), add the following line to this section:
      project: tm70
    • Link to the laboratory directory
      # note: if laboratory is relative path, it is relative to /scratch/$PROJECT/$USER\n            laboratory: access-esm\n        
      This will set the laboratory directory. Relative paths are relative to /scratch/$PROJECT/$USER. Absolute paths can be specified as well.
    • Model
      model: access
      The main model. This tells payu which driver to use (access stands for ACCESS-ESM).
    • Submodels
      submodels:\n            \u00a0\u00a0- name: atmosphere\n            \u00a0\u00a0\u00a0\u00a0model: um\n            \u00a0\u00a0\u00a0\u00a0ncpus: 192\n            \u00a0\u00a0\u00a0\u00a0exe: /g/data/access/payu/access-esm/bin/coe/um7.3x\n            \u00a0\u00a0\u00a0\u00a0input:\n            \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- /g/data/access/payu/access-esm/input/pre-industrial/atmosphere\n            \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- /g/data/access/payu/access-esm/input/pre-industrial/start_dump\n            \u00a0\u00a0- name: ocean\n            \u00a0\u00a0\u00a0\u00a0model: mom\n            \u00a0\u00a0\u00a0\u00a0ncpus: 180\n            \u00a0\u00a0\u00a0\u00a0exe: /g/data/access/payu/access-esm/bin/coe/mom5xx\n            \u00a0\u00a0\u00a0\u00a0input:\n            \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- /g/data/access/payu/access-esm/input/pre-industrial/ocean/common\n            \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- /g/data/access/payu/access-esm/input/pre-industrial/ocean/pre-industrial\n            \u00a0\u00a0- name: ice\n            \u00a0\u00a0\u00a0\u00a0model: cice\n            \u00a0\u00a0\u00a0\u00a0ncpus: 12\n            \u00a0\u00a0\u00a0\u00a0exe: /g/data/access/payu/access-esm/bin/coe/cicexx\n            \u00a0\u00a0\u00a0\u00a0input:\n            \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- /g/data/access/payu/access-esm/input/pre-industrial/ice\n            \u00a0\u00a0- name: coupler\n            \u00a0\u00a0\u00a0\u00a0model: oasis\n            \u00a0\u00a0\u00a0\u00a0ncpus: 0\n            \u00a0\u00a0\u00a0\u00a0input:\n            \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- /g/data/access/payu/access-esm/input/pre-industrial/coupler\n        
      ACCESS-ESM is a coupled model, which means it has multiple submodels (i.e. model components). This section specifies the submodels and contains configuration options (for example the directories of input files) that are required to ensure the model can execute correctly. Each submodel also has additional configuration options that are read in when the submodel is running. These specific configuration options are found in the subdirectory of the control directory having the name of the submodel (e.g. in our case the configuration for the atmosphere submodel, i.e. the UM, will be in the directory ~/access-esm/esm-pre-industrial/atmosphere).
    • collate
      collate:\n            \u00a0\u00a0exe: /g/data/access/payu/access-esm/bin/mppnccombine\n            \u00a0\u00a0restart: true\n            \u00a0\u00a0mem: 4GB\n        
      The collate process joins a number of smaller files, which contain different parts of the model grid, together into target output files. The restart files are typically tiled in the same way and will also be joined together if the restart option is set to true.
    • restart
      restart: /g/data/access/payu/access-esm/restart/pre-industrial
      The location of the files used for a warm restart.
    • Start date and internal run length
      calendar:\n            \u00a0\u00a0start:\n            \u00a0\u00a0\u00a0\u00a0year: 101\n            \u00a0\u00a0\u00a0\u00a0month: 1\n            \u00a0\u00a0\u00a0\u00a0days: 1\n            \u00a0\u00a0runtime:\n            \u00a0\u00a0\u00a0\u00a0years: 1\n            \u00a0\u00a0\u00a0\u00a0months: 0\n            \u00a0\u00a0\u00a0\u00a0days: 0\n        
      This section specifies the start date and internal run length. Note: The internal run length (controlled by runtime) can be different from the total run length. Also, the runtime value can be lowered, but should not be increased to a total of more than 1 year, to avoid errors. If you want to know more about the difference between internal run and total run lengths, or if you want to run the model for more than 1 year, check Run configuration for multiple years.
    • Number of runs per PBS submission
      runspersub: 5
      ACCESS-ESM configurations are often run in multiple steps (or cycles), with payu running a maximum of runspersub internal runs for every PBS job submission. Note: If we increase runspersub, we might need to increase the walltime in the PBS resources.
    To know more about other configuration settings for the config.yaml file, please check how to configure your experiment with payu."},{"location":"models/run-a-model/run-access-esm/#run-access-esm-configuration","title":"Run ACCESS-ESM configuration","text":"

    After editing the configuration, we are ready to run ACCESS-ESM. ACCESS-ESM suites run on Gadi through a PBS job submission managed by payu.

    "},{"location":"models/run-a-model/run-access-esm/#payu-setup-optional","title":"Payu setup (optional)","text":"As a first step, from the control directory, is good practice to run:
    payu setup
    This will prepare the model run, based on the experiment configuration. payu setup laboratory path: /scratch/$PROJECT/$USER/access-esm binary path: /scratch/$PROJECT/$USER/access-esm/bin input path: /scratch/$PROJECT/$USER/access-esm/input work path: /scratch/$PROJECT/$USER/access-esm/work archive path: /scratch/$PROJECT/$USER/access-esm/archive Loading input manifest: manifests/input.yaml Loading restart manifest: manifests/restart.yaml Loading exe manifest: manifests/exe.yaml Setting up atmosphere Setting up ocean Setting up ice Setting up coupler Checking exe and input manifests Updating full hashes for 3 files in manifests/exe.yaml Creating restart manifest Updating full hashes for 30 files in manifests/restart.yaml Writing manifests/restart.yaml Writing manifests/exe.yaml Note: You can skip this step as it is also included in the run command. However, running it explicitly helps to check for errors and make sure executable and restart directories are accessible."},{"location":"models/run-a-model/run-access-esm/#run-configuration","title":"Run configuration","text":"To run ACCESS-ESM configuration for one internal run length (controlled by runtime in the config.yaml file), run:
    payu run -f
    This will submit a single job to the queue with a total run length of runtime. If there is no previous run, it will start from the start date indicated in the config.yaml file, otherwise it will perform a warm restart from a previously saved restart file. Note: The -f option ensures that payu will run even if there is an existing non-empty work directory, which happens if a run crashes. payu run -f Loading input manifest: manifests/input.yaml Loading restart manifest: manifests/restart.yaml Loading exe manifest: manifests/exe.yaml payu: Found modules in /opt/Modules/v4.3.0 qsub -q normal -P <project> -l walltime=11400 -l ncpus=384 -l mem=1536GB -N pre-industrial -l wd -j n -v PAYU_PATH=/g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin,MODULESHOME=/opt/Modules/v4.3.0,MODULES_CMD=/opt/Modules/v4.3.0/libexec/modulecmd.tcl,MODULEPATH=/g/data3/hh5/public/modules:/etc/scl/modulefiles:/opt/Modules/modulefiles:/opt/Modules/v4.3.0/modulefiles:/apps/Modules/modulefiles -W umask=027 -l storage=gdata/access+gdata/hh5 -- /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin/python3.9 /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin/payu-run <job-ID>.gadi-pbs"},{"location":"models/run-a-model/run-access-esm/#run-configuration-for-multiple-years","title":"Run configuration for multiple years","text":"If you want to run the ACCESS-ESM configuration for multiple internal run lengths (controlled by runtime in the config.yaml file), you can use the option -n:
    payu run -n <number-of-runs>
    This will run the configuration number-of-runs times with a total run length of runtime * number-of-runs. The number of consecutive PBS jobs submitted to the queue depends on the runspersub value specified in the config.yaml file."},{"location":"models/run-a-model/run-access-esm/#understand-runtime-runspersub-and-n-parameters","title":"Understand runtime, runspersub, and -n parameters","text":"With the correct use of runtime, runspersub, and -n parameters, we can have full control of our run.
    • runtime defines the internal run length.
    • runspersub defines the maximum number of internal runs for every PBS job submission.
    • -n sets the number of internal runs to be performed.
    Let's have some practical examples (a general rule for the number of PBS jobs is sketched after the list):
    • Run 20 years of simulation, with resubmission every 5 years To have a total run length of 20 years, with a resubmission cycle of 5 years, we can leave runtime at the default value of 1 year, set runspersub to 5, and run the configuration using -n 20:
      payu run -n 20
      This will submit subsequent jobs for the following years: 1 to 5, 6 to 10, 11 to 15, and 16 to 20. With a total of 4 PBS jobs.
    • Run 7 years of simulation, with resubmission every 3 years To have a total run length of 7 years, with a resubmission cycle of 3 years, we can leave runtime at the default value of 1 year, set runspersub to 3, and run the configuration using -n 7:
      payu run -n 7
      This will submit subsequent jobs for the following years: 1 to 3, 4 to 6, and 7. With a total of 3 PBS jobs.
    • Run 3 months and 10 days of simulation, in one single submission To have a total run length of 3 months and 10 days, all in a single submission, we have to set runtime to:
      years: 0\n            months: 3\n            days: 10\n        
      set runspersub to 1 (or any value > 1), and run the configuration without -n (or with -n equal to 1):
      payu run
    • Run 1 year and 4 months of simulation, with resubmission every 4 months To have a total run length of 1 year and 4 months (16 months), we will have to split it into multiple internal runs. For example, we can have 4 internal runs of 4 months each. Therefore, we will have to set runtime to:
      years: 0\n            months: 4\n            days: 0\n        
      Since the internal run length is set to 4 months, to resubmit our jobs every 4 months (i.e. every internal run), we have to set runspersub to 1. Finally, we will perform 4 internal runs by running the configuration with -n 4:
      payu run -n 4
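    In general, the number of PBS jobs submitted is the ceiling of the number of internal runs (-n) divided by runspersub. Checking the first two examples above with shell arithmetic (an illustrative calculation, not a payu command):

    echo $(( (20 + 5 - 1) / 5 ))    # 20 internal runs, runspersub 5 -> 4 PBS jobs
    echo $(( (7 + 3 - 1) / 3 ))    # 7 internal runs, runspersub 3 -> 3 PBS jobs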
    "},{"location":"models/run-a-model/run-access-esm/#monitor-runs","title":"Monitor runs","text":"Currently, there is no specific tool to monitor ACCESS-ESM runs. One way to check the status of our run is running:
    qstat -u $USER
    This will show the status of all your PBS jobs (if there is any PBS job submitted): qstat -u $USER Job id\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Name\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0User\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Time Use\u00a0S Queue --------------------- ---------------- ---------------- -------- - ----- <job-ID>.gadi-pbs\u00a0\u00a0\u00a0\u00a0\u00a0pre-industrial\u00a0\u00a0\u00a0<$USER>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<time>\u00a0R\u00a0normal-exec <job-ID>.gadi-pbs\u00a0\u00a0\u00a0\u00a0\u00a0<other-job-name>\u00a0<$USER>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<time>\u00a0R\u00a0normal-exec <job-ID>.gadi-pbs\u00a0\u00a0\u00a0\u00a0\u00a0<other-job-name>\u00a0<$USER>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<time>\u00a0R\u00a0normal-exec If you changed the jobname in the PBS resources of the Master Configuration file, that will be your job's Name instead of pre-industrial. S indicates the status of your run:
    • Q \u2192 Job waiting in the queue to start
    • R \u2192 Job running
    • E \u2192 Job ending
    If there is no listed job with your jobname (or if there is no job submitted at all), your run might have successfully completed, or might have been terminated due to an error."},{"location":"models/run-a-model/run-access-esm/#check-the-output-and-error-log-files","title":"Check the output and error log files","text":"While the model is running, payu saves the standard output and standard error into the files access.out and access.err in the control directory. You can examine these files, as the run progresses, to check on its status. After the model has completed its run, or if it crashed, the output and error log files, respectively, are renamed by default into jobname.o<job-ID> and jobname.e<job-ID>."},{"location":"models/run-a-model/run-access-esm/#model-outputs","title":"Model outputs","text":"While the configuration is running, output files (as well as restart files) are moved from the work directory to the archive directory, under /scratch/$PROJECT/$USER/access-esm/archive (also symlinked in the control directory under ~/access-esm/esm-pre-industrial/archive). Both outputs and restarts are stored in subfolders for each different configuration (esm-pre-industrial in our case), and inside the configuration folder, they are subdivided for each internal run. The format of a typical output folder is outputXXX, whereas the typical restart folder is usually formatted as restartXXX, with XXX being the internal run number, starting from 000. In the respective folders, outputs and restarts are separated for each model component. For the atmospheric output data, each file is usually a UM fieldsfile, formatted as <UM-suite-identifier>a.p<output-stream-identifier><time-identifier>. cd /scratch/$PROJECT/$USER/access-esm/archive/esm-pre-industrial ls output000 pbs_logs restart000 ls output000/atmosphere aiihca.daa1210 aiihca.daa1810 aiihca.paa1apr aiihca.paa1jun aiihca.pea1apr aiihca.pea1jun aiihca.pga1apr aiihca.pga1jun atm.fort6.pe0 exstat ihist prefix.CNTLGEN UAFLDS_A aiihca.daa1310 aiihca.daa1910 aiihca.paa1aug aiihca.paa1mar aiihca.pea1aug aiihca.pea1mar aiihca.pga1aug aiihca.pga1mar cable.nml fort.57 INITHIS prefix.PRESM_A um_env.py aiihca.daa1410 aiihca.daa1a10 aiihca.paa1dec aiihca.paa1may aiihca.pea1dec aiihca.pea1may aiihca.pga1dec aiihca.pga1may CNTLALL ftxx input_atm.nml SIZES xhist aiihca.daa1510 aiihca.daa1b10 aiihca.paa1feb aiihca.paa1nov aiihca.pea1feb aiihca.pea1nov aiihca.pga1feb aiihca.pga1nov CONTCNTL ftxx.new namelists STASHC aiihca.daa1610 aiihca.daa1c10 aiihca.paa1jan aiihca.paa1oct aiihca.pea1jan aiihca.pea1oct aiihca.pga1jan aiihca.pga1oct debug.root.01 ftxx.vars nout.000000 thist aiihca.daa1710 aiihca.daa2110 aiihca.paa1jul aiihca.paa1sep aiihca.pea1jul aiihca.pea1sep aiihca.pga1jul aiihca.pga1sep errflag hnlist prefix.CNTLATM UAFILES_A References
    • https://github.com/coecms/esm-pre-industrial
    • https://payu.readthedocs.io/en/latest/usage.html
    "},{"location":"models/run-a-model/run-access-om/","title":"Running ACCESS-OM2 Model","text":"

    Welcome to ACCESS-OM2 \u2014 a coupled ocean-ice model and collection of configurations developed by the Consortium for Ocean-Sea Ice Modelling in Australia (COSIMA).

    ACCESS-OM2 consists of the MOM 5.1 ocean model, CICE 5.1.2 sea ice model, and a file-based atmosphere called YATM coupled together using the OASIS3-MCT v2.0 coupler. Regridding is done using ESMF and KDTREE2.

    The configurations available here are updated from the version 1.0 configurations described in Kiss et al. (2020). Further details are given in the ACCESS-OM2 technical report.

    "},{"location":"models/run-a-model/run-access-om/#how-to-access-existing-access-om2-output","title":"How to access existing ACCESS-OM2 output","text":"

    NCI users can access model output via the COSIMA Cookbook. A good place to start is the data explorer, which will give an overview of the data available. Also see this overview of 0.1\u00b0 IAF outputs.

    Non-NCI users can access a subset of the ACCESS-OM2 output via the COSIMA Model Output Collection.

    "},{"location":"models/run-a-model/run-access-om/#how-to-run-access-om2","title":"How to run ACCESS-OM2","text":"

    Start by reading the [[Quick start|Getting-started#quick-start]] guide. If you are using gadi.nci.org.au at the NCI National Facility and are happy to use our pre-compiled executables then this should be all you need. The page also provides instructions for building your own executables.

    NOTE: All ACCESS-OM2 model components and configurations are undergoing continual improvement. We strongly recommend that you "watch" this repo (see button at top of screen; ask to be notified of all conversations) and also watch all the component models for whichever configuration(s) you are using, as well as payu, to be kept informed of updates, problems and bug fixes as they arise.

    "},{"location":"models/run-a-model/run-access-om/#getting-help-and-reporting-issues","title":"Getting help and reporting issues","text":"

    For all help requests and error reports please create a \"GitHub issue\" at ACCESS-OM2 issues.

    "},{"location":"models/run-a-model/run-access-om/#for-self-help","title":"For self-help","text":"

    Setting up and running the model is primarily supported via the [[ACCESS-OM2 wiki|Home]] (that you are already reading). It is a \"wiki\" so feel free to correct and contribute.

    "},{"location":"models/run-a-model/run-access-om/#how-to-update-this-wiki","title":"How to update this wiki","text":"

    The wiki attached to a public repository can be edited by anyone. Just navigate to the page you wish to edit and click on the 'edit' button on the top right hand side.

    "},{"location":"models/run-a-model/run-access-om/#references","title":"References","text":""},{"location":"models/run-a-model/getting_started/","title":"Getting Started to Run a Model","text":"

    If Model, Model Component or Model Configuration are not familiar terms for you, please check out our Model overview.

    If you have not run a model before, our Getting Started Guide will give you the basics to access the Model infrastructure on the high-performance-computing facility Gadi@NCI.

    Detailed guides for the different Model configurations can then be found on the following pages:
    • Run ACCESS-ESM for the ACCESS Earth System Model configurations
    • Run ACCESS-CM for the ACCESS Coupled Model configurations
    • Run ACCESS-AM for the ACCESS Atmosphere Model configurations
    • Run ACCESS-OM for the ACCESS Ocean Model configurations

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/","title":"Getting Started: Computing Access (Gadi@NCI)","text":"

    Here we provide the important information you need to access the large data collections we curate on NCI's storage:

    1) Get an NCI Account
    2) Join relevant NCI projects
    3) Logging in to Gadi@NCI
    4) Computing on Gadi

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#1-nci-account","title":"1) NCI Account","text":"

    To be able to work with our data, you need an NCI account.

If you don't have one yet, sign up here.

    Note: You will need an institutional email address with an organisation that allows access to NCI (e.g., CSIRO, a university, etc.).

    Once you have signed up, you will be allocated a username. We will refer to this username (e.g. kf1234) as $USER.

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#2-join-relevant-nci-projects","title":"2) Join relevant NCI projects","text":"

There are many NCI projects, and only some of them will be relevant to you.

We recommend you have a chat with your supervisor to identify the relevant projects, but in any case we suggest joining xp65 for MED code as well as kj13 for MED data.

    To get this conversation started, we list some possibly relevant projects below:

Each project is listed with its description and link; * indicates a compute resource.

ACCESS-NRI projects:
• tm70: ACCESS-NRI Working Project *
• iq82: ACCESS-NRI MED Compute *
• kj13: ACCESS-NRI MED Data Dev
• ct11: ACCESS-NRI Replicated Datasets
• xp65: ACCESS-NRI Analysis Environments

ACCESS projects:
• access: ACCESS software sharing
• p66: ACCESS - AOGCM / support development of the ACCESS modelling system *
• p73: ACCESS Model Output Archive (AOGCM)

Data projects:
• hh5: Climate-LIEF Data Storage
• ub7: Seasonal Prediction ACCESS-S1 Hindcast
• ux62: Seasonal Prediction ACCESS-S2 Hindcast
• cb20: ESGF CMIP3 Replication Data
• al33: ESGF CMIP5 Replication Data
• rr3: ESGF CMIP5 Australian Data Publication
• oi10: ESGF CMIP6 Replication Data
• fs38: ESGF CMIP6 Australian Data Publication
• rt52: ERA5 Replicated Data: Single and pressure-levels data
• uc16: ERA5 Replicated Datasets on Potential Temperature & Vorticity Levels
• zz93: ERA5-Land Replicated Data
• zv2: Australian Gridded Climate Data (AGCD) Collection
• qv56: Reference Datasets for Climate Model Analysis/Forcing
• cj50: COSIMA Model Output Collection

Other projects:
• ik11: COSIMA shared working space
• v45: Ocean Extremes *
• ga6: Modelling the formation of sedimentary basins and continental margins *
• m18: Evolution and dynamics of the Australian lithosphere *
• q97: Earth dynamics and resources over the last billion years *
• qu79: Collaborative REAnalysis Technical Environment Intercomparison Project (CREATE-IP)

    To join a project or find more projects, please use this NCI website.

    The first project that you join will become your default login project, e.g. xp65. We will refer to it as $PROJECT and we show you how to change it below.

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#3-logging-in-to-gadinci","title":"3) Logging in to Gadi@NCI","text":"

If you have never logged onto Gadi before, we recommend taking a look at NCI's Welcome to Gadi website. It provides all the important commands and information for logging properly onto Gadi, like the following: \"To run jobs on Gadi, you need to first log in to the system. Users on Mac/Linux can use the built-in terminal. For Windows users, we recommend using MobaXterm as the local terminal. Logging in to Gadi happens through a Gadi login node.\"

When you log in via the command

    ssh -Y $USER@gadi.nci.org.au\n
you will enter your $HOME directory with your default $PROJECT and your default SHELL. Both are saved in $HOME/.config/gadi-login.conf, and you can print them via
    cat $HOME/.config/gadi-login.conf\n
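To change your default project, you can edit this file directly. Below is a minimal sketch, assuming the file holds simple \"KEY value\" lines for PROJECT and SHELL (as it does on current Gadi systems) and that xp65 is the project you want as your new default; the change takes effect at your next login:

sed -i 's/^PROJECT .*/PROJECT xp65/' $HOME/.config/gadi-login.conf   # set xp65 as the default login project\ncat $HOME/.config/gadi-login.conf                                    # verify the change\n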

The -Y option is needed to run graphical tools: it enables the forwarding of trusted X protocol messages between the X server on your local system and X programs on Gadi. You need to enable the X Window System on your local system before running ssh. This can be done by running an X server such as XQuartz (Mac), MobaXterm (MS Windows), or startx or similar (Linux).
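A quick way to check that X forwarding works end to end, assuming an X server is running locally and a simple X client such as xeyes is available on the login node, is:

ssh -Y $USER@gadi.nci.org.au\nxeyes   # if available, a test window should appear on your local display\n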

Again, for more useful information we recommend checking out NCI's Welcome to Gadi website.

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#4-computing-on-gadi","title":"4) Computing on Gadi","text":""},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#gadi-resources","title":"Gadi Resources","text":"

Coupled climate models like ACCESS-CM involve, among other things, the calculation of complex mathematical equations that describe the physics of the atmosphere and oceans. Performed at hundreds of millions of points around the Earth, these calculations require vast computing power to complete in a reasonable amount of time, and thus rely on high-performance computing (HPC) systems like Gadi. The Gadi supercomputer can handle more than 10 million billion (10 quadrillion) calculations per second and is connected to 100,000 Terabytes of high-performance research data storage.

An overview of Gadi resources such as compute, storage and PBS jobs is given below.

    Useful NCI commands to check your available compute resources are:

• logout or Ctrl+D: exit a session
• hostname: display login node details
• module list: list modules currently loaded
• module avail: list available modules
• nci_account -P [proj]: compute allocation for [proj]
• nqstat -P [proj]: jobs running/queued in [proj]
• lquota: storage allocation and usage for all your projects"},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#compute-hours","title":"Compute Hours","text":"

Compute allocations are granted to projects rather than directly to users; hence, you need to be a member of a project in order to use its compute allocation. To run jobs on Gadi, you need to have sufficient allocated compute hours available, where the job cost depends on the resources reserved for the job and the amount of walltime it uses.
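As a rough illustration, a job's cost in service units (SU) scales with the cores reserved and the walltime used; the charge rates are set per queue by NCI and may change, so check NCI's queue documentation for current values. You can check your remaining allocation with nci_account:

# Hypothetical example: 48 cores for 2 hours on a queue charged at 2 SU per core-hour\n# cost = 2 SU/core-hour x 48 cores x 2 hours = 192 SU\nnci_account -P xp65   # remaining compute allocation (substitute your own project code)\n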

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#storage","title":"Storage","text":"

    Each user has a project-independent $HOME directory, which has a storage limit of 10 GiB. All data on /home is backed up.

Through project membership, the user gets access to the storage space within the project folders on the /scratch and /g/data filesystems for that particular project.
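By convention these project spaces look like the following sketch (xp65 is a stand-in for your own project code, and the per-user subdirectory layout can vary between projects):

ls /scratch/xp65/$USER   # short-term working storage for the project\nls /g/data/xp65          # longer-term project data storage\n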

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#pbs-jobs","title":"PBS Jobs","text":"

    To run compute tasks such as an ACCESS-CM suite on Gadi, users need to submit them as jobs to queues. Within a job submission, you can specify the queue, duration and computational resources needed for your job. When a job submission is accepted, it is assigned a jobID (shown in the return message) that can then be used to monitor the job\u2019s status.

On job completion, the contents of the job's standard output and error streams get copied to files in the working directory named <jobname>.o<jobid> and <jobname>.e<jobid> respectively. Users should check these two log files before proceeding with post-processing of any output from the corresponding job.
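A minimal submission script might look like the sketch below. The project code, resource requests, module and executable names are illustrative assumptions only; adjust them for your own project and workload:

#!/bin/bash\n# Example PBS directives; adjust project, queue and resources for your workload\n#PBS -P xp65\n#PBS -q normal\n#PBS -l ncpus=48\n#PBS -l mem=190GB\n#PBS -l walltime=02:00:00\n#PBS -l storage=gdata/xp65+scratch/xp65\n#PBS -N myjob\n\n# The job name above gives log files myjob.o<jobid> and myjob.e<jobid>\nmodule load openmpi   # load whatever modules your task needs\nmpirun ./my_model     # hypothetical executable\n

You would then submit it with qsub job.sh; the return message contains the jobID, and qstat <jobid> (or nqstat) reports the job's status while it runs.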

    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Index","text":""},{"location":"#welcome-to-access-hive","title":"Welcome to ACCESS-Hive","text":"ACCESS-Hive is a portal to all documentation relevant to the Australian Community Climate and Earth System Simulator, ACCESS, and the wider ACCESS community. ACCESS-Hive is developed for and by the ACCESS community following an open-source development model."},{"location":"#navigating-access-hive","title":"Navigating ACCESS-Hive","text":"Models Run a Model Model Evaluation Community Resources Community Forum"},{"location":"#about","title":"About","text":"The documentation on Hive is work in progress!

    The ACCESS-Hive is a community resource that is a work in progress. We\u2019d love to receive your contribution. Please see the contributing guidelines below for how to make contributions to the Hive page content. You can also open an issue highlighting any content you\u2019d like us to provide but aren\u2019t able to contribute yourself.

    "},{"location":"#support","title":"Support","text":"

    There is a system of tags to identify who supports the linked documentation or software, and the level of support you can expect:

    • Supported by ACCESS-NRI {{ supported }}

    • Recommended by ACCESS-NRI {{ recommended }}

    • Community contributed {{ community }}

    See the support page for details about the support levels: what is supported, by who, and how to access help.

    "},{"location":"#contribute-to-access-hive-1","title":"Contribute to ACCESS-Hive 1","text":"Contribute Join the ACCESS-Hive team and have your contributions onboard!"},{"location":"#acknowledgement-of-country","title":"Acknowledgement of Country","text":"

    We at ACCESS-NRI acknowledge the Traditional Owners of the land on which our research infrastructure and community operate across Australia and pay our respects to Elders past and present. We recognise the thousands of years of accumulated knowledge and deep connection they have with all the Earth systems we simulate.2

    1. Image by pch.vector on Freepik\u00a0\u21a9

    2. Photo by Ren\u00e9 Riegal on Unsplash \u21a9

    "},{"location":"call_contribute/","title":"Call contribute","text":"The documentation on Hive is work in progress!

    The ACCESS-Hive is a community resource that is a work in progress. We\u2019d love to receive your contribution. Please see the contributing guidelines below for how to make contributions to the Hive page content. You can also open an issue highlighting any content you\u2019d like us to provide but aren\u2019t able to contribute yourself.

    "},{"location":"about/License/","title":"License","text":"

    The ACCESS-Hive is made available under the Creative Commons Attribution license. The following is a human-readable summary of (and not a substitute for) the full legal text of the CC BY 4.0 license.

    You are free:

    • to Share---copy and redistribute the material in any medium or format
    • to Adapt---remix, transform, and build upon the material

    for any purpose, even commercially.

    The licensor cannot revoke these freedoms as long as you follow the license terms.

    Under the following terms:

    • Attribution---You must give appropriate credit (mentioning that your work is derived from work that is Copyright \u00a9 ACCESS-NRI and, where practical, linking to https://www.access-nri.org.au/), provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

• No additional restrictions---You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.

With the understanding that:

Notices:

    • You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
    • No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.
    "},{"location":"about/code_of_conduct/","title":"Code of Conduct","text":"

ACCESS-Hive is an open, community-supported effort. For it to be successful, it must be a welcoming and inclusive space so that everyone in the community feels able to contribute.

    To ensure this is the case users and contributors to ACCESS-Hive and the ACCESS-Hive Forum are required to abide by the ACCESS-NRI code of conduct.

    "},{"location":"about/contact/","title":"Contact","text":"

ACCESS-Hive is an initiative of The Australian Earth-System Simulator (ACCESS-NRI).

ACCESS-Hive is an open, community-supported effort. The underpinning infrastructure is provided by ACCESS-NRI but much of the content is provided by the ACCESS Community.

    If there are problems or queries about the content of ACCESS-Hive check if there is already a relevant open issue on the ACCESS-Hive GitHub repository, and open one if not.

    Check the support page for information on what is supported and by whom.

    Join the ACCESS-Hive forum and find previous related discussions about the hive or the resources listed here, or start your own and make contacts with your community.

Otherwise, contact ACCESS-NRI directly. Full contact details for ACCESS-NRI are available on the ACCESS-NRI website contact page.

    "},{"location":"about/contact/#other-places-where-you-can-find-the-access-nri-team","title":"Other places where you can find the ACCESS-NRI team:","text":"

• ACCESS-Hive Forum

• ACCESS-NRI GitHub

• @ACCESS_NRI twitter

• access_nri LinkedIn

• access-nri.slack.com

    "},{"location":"about/policies/","title":"Policies {{ supported }}","text":"
• Procedures and Practices: documents outlining how ACCESS-NRI will function. These documents describe what users can expect, and justify decisions against criteria based on the values of the organisation.
    "},{"location":"about/support/","title":"Support","text":""},{"location":"about/support/#support-levels","title":"Support levels","text":"

    The site uses a system of tags to identify who supports the linked documentation or software, and the level of support you can expect:

    Supported by ACCESS-NRI {{ supported }}

    Documentation that is actively maintained and supported by ACCESS-NRI. This is documentation that was either created by ACCESS-NRI, or it is existing documentation for which ACCESS-NRI has taken over responsibility.

    Recommended by ACCESS-NRI {{ recommended }}

    Documentation for third-party software that ACCESS-NRI recommends and facilitates the use of at NCI as a service to the community. This means ACCESS-NRI supports the infrastructure required to run the software, but not necessarily the software itself.

    Community contributed {{ community }}

    Documentation that is of use to the community, but is not explicitly endorsed or supported by ACCESS-NRI.

    "},{"location":"about/support/#how-to-get-help","title":"How to get help","text":"

Each entry on ACCESS-Hive links to another website. There should be information on how to get help on the linked site. If there are no obvious channels for help, or the help is not adequate, consider asking for assistance from fellow members of your community on the ACCESS-Hive forum.

    In the case of ACCESS-NRI supported documentation and software, marked {{ supported }}, if there is no information on how to get help, or your query is not appropriate for the support channels provided, please either ask on the ACCESS-Hive forum or contact ACCESS-NRI directly.

    "},{"location":"community_resources/","title":"Community Resources","text":"

In this area of the Hive, we collect content that is not curated by us but may be helpful for the community. You can contribute to this part of the Hive too!

Currently, we provide pointers to the following categories:

• Working Groups
• Glossaries
• Variables
• Model Evaluation Links
• Training
• Events

    "},{"location":"community_resources/community_data_processing/","title":"Community Processing Data Processing Tools","text":""},{"location":"community_resources/community_data_processing/#tools","title":"Tools","text":""},{"location":"community_resources/community_data_processing/#kerchunk-community","title":"Kerchunk {{ community }}","text":"

    Documentation | Sources

    Kerchunk is a library that provides a unified way to represent a variety of chunked, compressed data formats (e.g. NetCDF/HDF5, GRIB2, TIFF, \u2026), allowing efficient access to the data from traditional file systems or cloud object storage. It also provides a flexible way to create virtual datasets from multiple files.

    "},{"location":"community_resources/community_data_processing/#cmor3-community","title":"CMOR3 {{ community }}","text":"

    Climate Model Output Rewriter Version 3

    Documentation | Sources

    CMOR is used to produce CF-compliant netCDF files. The structure of the files created by CMOR and the metadata they contain fulfill the requirements of many of the climate community\u2019s standard model experiments (which are referred to here as \u201cMIPs\u201d and include, for example, AMIP, PMIP, APE, and IPCC scenario runs).

    "},{"location":"community_resources/community_data_processing/#xmip-community","title":"xMIP {{ community }}","text":"

    Documentation | Tutorial on NCI | Sources

    This package facilitates the cleaning, organization and interactive analysis of Model Intercomparison Projects (MIPs) within the Pangeo software stack.

    "},{"location":"community_resources/community_data_processing/#app4-the-access-post-processor-community","title":"APP4 (The ACCESS Post Processor) {{ community }}","text":"

    Documentation | Sources

The APP4 is a CMORisation tool designed to convert ACCESS model output to ESGF-compliant formats, primarily for publication to CMIP6. The code was originally built for CMIP5, and was further developed for CMIP6-era activities. It uses CMOR3 and files created with the CMIP6 data request to generate CF-compliant files according to the CMIP6 data standards.

    "},{"location":"community_resources/community_data_processing/#access-archiver-community","title":"ACCESS-Archiver {{ community }}","text":"

    Documentation | Sources

The ACCESS Archiver is designed to archive model output from ACCESS simulations. Its focus is to copy ACCESS model output from its initial location to a secondary location (typically from /scratch to /g/data), while converting UM files to netCDF, compressing MOM/CICE files, and culling restart files to 10-yearly. The conversion and compression save 50-80% of storage space.

    "},{"location":"community_resources/community_data_processing/#synda-recommended","title":"Synda {{ recommended }}","text":"

    synda is a command line tool to search and download files from the Earth System Grid Federation (ESGF) archive.

    "},{"location":"community_resources/community_data_processing/#fluxnetlsm-community","title":"FluxnetLSM {{ community }}","text":"

    Citation 1 | Sources

An R package for post-processing FLUXNET datasets for use in land surface modelling. It performs quality control and data conversion of FLUXNET data and collated site metadata, and supports the FLUXNET2015, La Thuile, OzFlux and ICOS data releases.

    "},{"location":"community_resources/community_data_processing/#metpy-community","title":"Metpy {{ community }}","text":"


    Documentation | Sources

    MetPy is a collection of tools in Python for reading, visualizing, and performing calculations with weather data. MetPy supports Python >= 3.8 and is freely available under a permissive open source license.

The format examples (https://unidata.github.io/MetPy/latest/examples/formats/index.html) cover GINI Water Vapor Imagery, NEXRAD Level 3 and NEXRAD Level 2 files.

    "},{"location":"community_resources/community_data_processing/#xskillscore-community","title":"xskillscore {{ community }}","text":"

    Documentation | Sources

    xskillscore is a Python library for computing a wide variety of skill metrics. Its typical application is to verify deterministic and probabilistic forecasts relative to observations.

    "},{"location":"community_resources/community_data_processing/#analysis-blogposts-and-tutorials-community","title":"Analysis blogposts and tutorials {{ community }}","text":"

    Accessing NetCDF and GRIB file collections as cloud-native virtual datasets using Kerchunk, Peter March, Sep 2022

    1. A. M. Ukkola, N. Haughton, M. G. De Kauwe, G. Abramowitz, and A. J. Pitman. Fluxnetlsm r package (v1.0): a community tool for processing fluxnet data for use in land surface modelling. Geoscientific Model Development, 10(9):3379\u20133390, 2017. URL: https://gmd.copernicus.org/articles/10/3379/2017/, doi:10.5194/gmd-10-3379-2017.\u00a0\u21a9

    "},{"location":"community_resources/community_med_recipes/","title":"Community Model Evaluation and Diagnostics (MED) Recipe Gallery","text":"

We are trying to ingest more and more model evaluation and diagnostics recipes into our curated recipe gallery on this website {{ supported }}. While this is a continuous effort, this page lists model evaluation and diagnostics recipes that are not (yet) ingested but may be interesting for the community {{ community }}:

• ESMValTool {{ recommended }} (Earth System Model EValuation Tool): Documentation | Tutorial | Source Code
• COSIMA Cookbook / Recipes {{ recommended }} (Consortium for Ocean-Sea Ice Modelling in Australia): Documentation | Tutorial | Source Code | Recipes
• iLAMB {{ recommended }} (International Land Model Benchmarking): Documentation | Tutorial | Source Code
• iOMB {{ recommended }} (International Ocean Model Benchmarking): Documentation | Tutorial | Source Code
• METPlus {{ recommended }} (Model Evaluation Tools Plus): Tutorial | Paper
• PMP {{ recommended }} (PCMDI Metrics Package): Documentation | Source Code
• climpred {{ community }}: Documentation | Tutorial | Source Code | Paper
• FREVA {{ community }} (Free Evaluation System Framework): Documentation | Source Code
• TECA {{ community }} (Toolkit for Extreme Climate Analysis): Documentation | Tutorial | Source Code
• MONET {{ community }} (Model and ObservatioN Evaluation Toolkit): Documentation | Tutorial | Source Code | Paper
• LIVVkit {{ community }} (Land Ice Verification & Validation toolkit): Documentation | Tutorial | Source Code
• CSET {{ community }} (Convective Scale Evaluation Tool): Documentation | Tutorial | Source Code
• MetPy {{ community }} (a collection of tools in Python for reading, visualizing, and performing calculations with weather data; supports Python >= 3.8 and is freely available under a permissive open source license): Tutorial | Source Code | Recipes
• Afterburner {{ community }}: Documentation | Source"},{"location":"community_resources/community_model_catalogs/","title":"Community Model Data Catalogs","text":"

We are trying to ingest more and more model data catalogs into our curated catalog on this website. While this is a continuous effort, this page lists additional model data catalogs that are not (yet) ingested but are recommended by us ({{ recommended }}) or may be interesting for the community ({{ community }}):

• NCI datasets {{ recommended }}: NCI has an extensive catalog of datasets of interest to the weather and climate community. These datasets are directly available on the NCI supercomputer and the [Australian Research Environment](https://opus.nci.org.au/display/Help/ARE+User+Guide).
• CLEX NCI Data Collection Intake Catalogue {{ recommended }}: An Intake catalogue maintained by the ARC Centre of Excellence for Climate Extremes [(CLEX)](https://climateextremes.org.au/). Only datasets from the NCI Catalog are referenced. The catalogue is available in Intake's default catalogue list in the CLEX Conda environment. Two notebooks are provided in the docs folder showing how to access the ERA5 and CMIP6 datasets.
• Australia Climate Data Guide Catalogue {{ recommended }}: *A one-stop catalogue to discover Climate Data in Australia.* The ACDG portal is a metadata portal listing climate research resources available in Australia from multiple data repositories. It is a community-based project managed by the ACDG Single Access working group, a group of self-nominated representatives from the Australian climate community. Anyone is welcome to join the group or to contribute independently to the metadata portal the group is developing.
• Australian Ocean Data Network {{ recommended }}: The Australian Ocean Data Network (AODN) is an interoperable online network of marine and climate data resources. IMOS and the 6 Australian Commonwealth agencies ([see AODN Partners](https://imos.org.au/facilities/aodn/aodn-data-management/aodn-partners)) form the core of the AODN. Increasingly, though, universities and State government offices are offering up data resources to the AODN, and delivery of data to the AODN is being written into significant research programs, e.g. the [National Environmental Science Program Marine Biodiversity Hub](http://www.nespmarine.edu.au/) and the [Great Australian Bight research program](http://www.misa.net.au/GAB).
• Intake-Ilamb Catalog {{ supported }}: The Intake-Ilamb catalog provides a YAML-style Intake catalog of the reference data used for ESM model benchmarking in the International Land Model Benchmarking [(ILAMB)](https://www.ilamb.org/) effort.
• FLUXNET {{ community }}: FLUXNET is an international \u201cnetwork of networks,\u201d tying together regional networks of earth system scientists. FLUXNET scientists use the eddy covariance technique to measure the cycling of carbon, water, and energy between the biosphere and atmosphere. Scientists use these data to better understand ecosystem functioning, and to detect trends in climate, greenhouse gases, and air pollution.
• CEDA Archive {{ community }}: The CEDA Archive forms part of NERC's Environmental Data Service (EDS) and is responsible for looking after data from atmospheric and earth observation research. They host over 18 Petabytes of data from climate models, satellites, aircraft, met observations, and other sources.
• OZFlux {{ community }}: OzFlux is an ecosystem research network set up to provide Australian, New Zealand and global ecosystem modelling communities with consistent observations of energy, carbon and water exchange between the atmosphere and key Australian and New Zealand ecosystems.
• Australian Community Reference Climate Data Collection {{ recommended }}: This collection is a collaborative effort between the Australian Climate Service (ACS), the ARC Centre of Excellence for Climate Extremes (CLEX) and the wider Australian climate research community to re-establish and maintain a reference dataset collection at NCI. An [intake-esm catalogue](https://github.com/aus-ref-clim-data-nci/acs-replica-intake) is also available to facilitate data access."},{"location":"community_resources/community_observational_catalogs/","title":"Community Observational Data Catalogs","text":"

We are trying to ingest more and more observational data catalogs into our curated catalog on this website. While this is a continuous effort, this page lists additional observational data catalogs that are not (yet) ingested but are recommended by us ({{ recommended }}) or may be interesting for the community ({{ community }}):

• Copernicus Climate Change Service (C3S) Data Store (CDS) {{ recommended }}: The Copernicus Climate Change Service (C3S) combines observations of the climate system with the latest science to develop authoritative, quality-assured information about the past, current and future states of the climate in Europe and worldwide. C3S data is provided via its Climate Data Store (CDS). You can search its available datasets via this interface, and use the CDS API as well as command line tools to download data. To download ERA5 from CDS, you can for example use this era5cli command line tool.
• Catalogue Search at CEDA (Centre for Environmental Data Analysis) {{ recommended }}: The CEDA Archive hosts atmospheric and earth observation data, and provides an interactive Catalogue Search and tools for downloading data. Its remit covers the following areas (see linked examples of some of its most popular datasets): Climate (e.g. HadUK Grid, CMIP, CRU), Composition (e.g. CCI), Observations (e.g. MIDAS Open), Numerical weather prediction (e.g. Met Office NWP), Airborne (e.g. FAAM), and Satellite data and imagery (e.g. Sentinel)."},{"location":"community_resources/community_working_groups/","title":"Community Working Groups","text":"

The ACCESS Community and ACCESS-NRI have established Community Working Groups to assess and prioritise the needs of the modelling community, as well as to encourage collaboration within it. These working groups are open to the community and welcome new members.

    The working group activities are coordinated through the ACCESS Hive Community Forum:

• Atmospheric Modelling
• Land Surface Modelling
• COSIMA (Ocean and Sea-Ice)
• Forecasting and Prediction
• Earth System Modelling
• Cryosphere

To join a working group, follow the instructions on the ACCESS-NRI website; to participate in the activities of the working group, visit the ACCESS-Hive forum.

    "},{"location":"community_resources/events/","title":"Workshops and Conferences","text":"

    {{ events_content }}

    "},{"location":"community_resources/events/add_event/","title":"Workshops and Conferences: Add Event","text":"

We encourage members of the community to list any workshops, tutorials or conferences that might be of interest to the community.

    "},{"location":"community_resources/events/add_event/#how-to-add-your-event","title":"How to add your event","text":""},{"location":"community_resources/events/add_event/#add-an-issue","title":"Add an issue","text":"

    The easiest way for you to add your event is to make an issue with the template provided. This provides a form which guides you through the process of providing the required information.

    "},{"location":"community_resources/events/add_event/#create-a-pull-request-to-add-your-event","title":"Create a pull-request to add your event","text":"

    This process requires some knowledge of git, GitHub and Markdown. If you do not feel comfortable doing this then it is sufficient to just add an issue as above. The issue will be assigned to someone else to finish.

If you do feel confident adding your event to the list, then add a Markdown text file, identified by the .md extension, to the correct subdirectory in the events folder of the ACCESS-Hive repository. The subdirectories are named by year; put your new file in the year in which the event will take place. Avoid spaces in your filename: use an underscore _ where you would normally have a space, e.g. regional_downscaling_cordex.md
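As a sketch, the steps might look as follows (the year and filename are hypothetical, and the events path is inferred from the current site layout, so check the repository for the exact location):

mkdir -p docs/community_resources/events/events/2023                  # hypothetical year subdirectory\n$EDITOR docs/community_resources/events/events/2023/my_workshop.md   # hypothetical filename\n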

    The file must contain a header with the metadata as in the example below:

    ---\ntitle: Regional climate downscaling for Australia within the CORDEX framework\nstart_date: 27/11/2022\nend_date: 27/11/2022\nlocation: Adelaide, SA\nlink: https://www.amos2022.org.au/\ndescription: This workshop is relevant for those performing regional climate simulations or downscaling with empirical/statistical downscaling approaches including machine learning, as well as those using regional climate projection data in their work. The focus will be on CORDEX related data and modelling. The workshop will have some presentations with extended discussion.\n---\n

    Make sure to follow all the steps described in the contribution guidelines to submit this addition for approval for publication.

    "},{"location":"community_resources/events/events/2022/CORDEXAmos2022/","title":"Regional climate downscaling for Australia within the CORDEX framework","text":"

This workshop is relevant for those performing regional climate simulations or downscaling with empirical/statistical downscaling approaches including machine learning, as well as those using regional climate projection data in their work. The focus will be on CORDEX related data and modelling. The workshop will have some presentations with extended discussion. Some topics to be covered include:

• Accessing the existing CORDEX-CMIP5 data: how to access and use the data
• Explaining the CORDEX-CMIP6 protocol: what does it say? How can you contribute?
• Who is planning to contribute (or is already working on contributions) to the Australasia domain?

    "},{"location":"community_resources/events/events/2022/GC5Workshop/","title":"GC5 Assessment Workshop","text":"

    The UM Partnership Team and GC Programme Team are running a GC5 assessment workshop, to assess the latest configuration of the Global Coupled model.

The workshop will be a hybrid event with an option to attend online or in person at the Met Office Collaboration Building, Exeter. We will discuss the assessment of the latest GC5 configuration in a range of model simulations in a seamless context, and sessions will broadly consist of:

• Summary of GC5 physics changes
• General model assessment
• Summary from Priority Evaluation Groups (PEGs)
• Summary from Collaboration Groups (CoGs)
• Upcoming changes in GC science and tools
• Discussions

Please fill in the registration form to confirm attendance by 21st October.

    For any further questions please contact Luke Roberts, Prince Xavier or Charline Marzin at the Met Office.

    "},{"location":"community_resources/events/events/2022/GroundTruthingClimateChange/","title":"Ground truthing future climate change","text":"

Scientific ocean drilling provides the robust baseline data on global climate evolution over extended geologic time periods that are critical for improving climate model performance. By targeting how the climate system operates across a wide array of past climate states, scientific ocean drilling has obtained, and continues to obtain, the data necessary to calibrate and improve numerical models used to project future climate impacts and inform mitigation strategies.

    Join us in this session where we aim to connect climate and ocean modellers to our rich (50+ years of drilling) database and unanswered questions in scientific ocean drilling. By addressing key questions about Earth\u2019s past, present, and future through interdisciplinary research, we are aiming to spark new collaborations and proposals that will lead to a more profound understanding of Earth as one integrated, interconnected system.

    "},{"location":"community_resources/events/events/2022/SWOTAmos2022/","title":"The Surface Water and Ocean Topography (SWOT) satellite: A primer","text":"

The Surface Water and Ocean Topography (SWOT) satellite, which will launch in November 2022, is a ground-breaking wide-swath altimetry mission that will observe fine details of the ocean dynamics at a resolution 10 times finer than current satellites. SWOT is jointly developed by NASA and CNES with contributions from researchers around the world, including Australia. The Australian government, the Integrated Marine Observing System, and the Australian marine science community are investing in SWOT through calibration/validation and synergistic in situ measurements of fine-scale ocean dynamics in the Australian region. This workshop will present a primer on the principles of the satellite and instrument, how it works, and what its possibilities and limitations are compared to existing altimetry products. This will be complemented with a brief summary of ocean research related to and enabled by SWOT, including internal waves and tides, sub-mesoscale dynamics, the geoid, and mean dynamic topography. The goal of the workshop is to provide oceanographers, hydrologists and other users of altimetry data with the information they need to prepare for the arrival of SWOT data in late 2023.

    "},{"location":"community_resources/glossaries/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

{{ community }} CMIP(6): terms and acronyms from the Coupled Model Intercomparison Project (currently in its sixth phase, CMIP6).

{{ community }} ERA(5): terms related to the ECMWF reanalyses, of which ERA5 is the fifth generation.

{{ community }} UM: terms related to the Met Office Unified Model.

    "},{"location":"community_resources/glossaries/glossary_access_nri/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    "},{"location":"community_resources/glossaries/glossary_cmip/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    "},{"location":"community_resources/glossaries/glossary_cssr/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    "},{"location":"community_resources/glossaries/glossary_era/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    "},{"location":"community_resources/glossaries/glossary_ipcc/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    "},{"location":"community_resources/glossaries/glossary_um/","title":"Glossary and Useful Terms","text":"

    Here you can find a compilation of common terms and acronyms used by the Australian Community Climate and Earth System Simulator (ACCESS) community:

    {{ supported }} ACCESS-NRI Glossary: This includes common terms and acronyms used by the ACCESS community in research and modelling of past, present and future climate, weather and Earth-Systems.

    {{ community }} IPCC Acronyms: Contains a comprehensive list of acronyms published in the scientific report SR15 of the Intergovernmental Panel on Climate Change (IPCC). The IPCC is the United Nations body for assessing the science related to climate change.

    {{ community }} CSSR Acronyms and Units: The Climate Science Special Report (CSSR) is an assessment of the science of climate change with particular focus on the United States.

    "},{"location":"community_resources/training/","title":"Training and Policies","text":"

    This space is intended to promote training material relevant to ACCESS and its community. The training material can be directly relevant to ACCESS and its model components, such as:

    • using coupled models and model components
    • using configurations
    • using model evaluation tools and workflows

    It is also intended for training material around more peripheral topics that are essential for the community, such as:

    • HPC
    • version control
    • essential software packages

    ACCESS-NRI encourages the members of the community to contact us to share their suggestions.

    Finally, you will also find ACCESS-NRI's policies in this space.

    "},{"location":"community_resources/training/ACCESS_training/","title":"ACCESS Training Material","text":"

This page is intended to provide access to training material directly related to ACCESS models and model components. This material can cover topics such as, but not limited to:

    • how to use a specific model
    • how to modify a model configuration
    • how to test a model modification or validate a model run
    "},{"location":"community_resources/training/ACCESS_training/#jules-tutorials-recommended","title":"JULES tutorials {{ recommended }}","text":"

    The JULES tutorials explain how to use FCM, Rose and Cylc both for using the model and for development work. They can be useful to ACCESS users as practical demonstrations of the Rose and Cylc infrastructure.

    "},{"location":"community_resources/training/additional_training/","title":"Additional training material","text":"

Our Git and GitHub training teaches the basics of Git and GitHub. It also includes ACCESS-NRI's recommendations for setting up GitHub.

    The National Computational Infrastructure (NCI) provides training resources and in-person training courses throughout the year to help develop the skills of the NCI user community.

    A full calendar of upcoming training opportunities can be found on their Opus page.

    Users can find important information and resources about using NCI systems and services in the NCI User Guides.

    "},{"location":"contribute/","title":"How to Contribute","text":"

    ACCESS-Hive is a community supported site, as such contributions to the ACCESS-Hive site are encouraged by any member of the community. Members of the ACCESS community are also welcome to become reviewers. Please refer to the following contribution guidelines to learn how you can help the ACCESS community build a documentation database useful to everyone.

Although we encourage everyone to get involved and contribute to the ACCESS-Hive in order to adequately represent the needs of the entire ACCESS community, we recognise not everyone will have the time to do so. If you do not have a lot of time, please consider sharing your ideas via issues on the ACCESS-Hive GitHub repository so someone else might be able to add them to the ACCESS-Hive site for you.1

    Abstract

    The aim of this how-to is to enable you to:

• add or modify a link to new documentation in an existing page
• contribute complex modifications, e.g., add pages, modify the navigation, or modify an existing page in depth
• deal with relevant documentation that is not currently on a website
    "},{"location":"contribute/#become-a-member-of-the-access-hive-organisation","title":"Become a member of the ACCESS-Hive organisation","text":"

The ACCESS-Hive organisation is open to any member of the ACCESS community. Furthermore, organisation members have write access to the ACCESS-Hive repository, which simplifies the process of contributing. Members can work from branches that contain their modifications instead of creating and maintaining their own forks.

As such, we encourage you to become a member of the organisation by replying to this issue and asking to be invited to join the organisation.

    "},{"location":"contribute/#process-to-contribute","title":"Process to contribute","text":"

    This documentation is based on the Material for Mkdocs theme. Please see the documentation for the theme or for Mkdocs for a full explanation of all the capabilities.

    The documentation is written in Markdown format. Please see this cheat sheet for a quick reference to the base syntax. Please note that Material for Mkdocs extends that syntax.

    Additionally, ACCESS-Hive is a portal for documentation hosted elsewhere. The documentation you want to add needs to be available from an existing website. We realise people might have standalone files or other information to share, please see our Standalone documentation page for ways to easily upload your documentation to a site.

    There are two main ways to contribute to the site:

• you can modify an existing page directly from GitHub without any knowledge of Git. This is a simple way, suitable for light modifications.
• you can work on your local computer and use Git to manage your modifications. This is recommended for more involved modifications, and is the easiest way to modify the categories and menu structure used to navigate the site.
    1. Image by pch.vector on Freepik\u00a0\u21a9

    "},{"location":"contribute/change-the-navigation/","title":"Change the navigation","text":""},{"location":"contribute/change-the-navigation/#structure-of-the-repository","title":"Structure of the repository","text":"

    The important elements of the repository to know about before modifying the navigation are:

    • docs/ folder: this folder contains all the documentation pages. There is an index.md file for the About page, one folder per tab on the site, an assets/ folder to store images used in the documentation and some customisation folders such as css/ or font/.
    • mkdocs.yml: it is a YAML formatted file, hence the .yml extension. The site navigation is defined in this file as well as options for the styling of the site.
    YAML

    YAML is a popular choice for configuration files, as it is a simple way of encoding data structures in a text file. See this short tutorial.

    "},{"location":"contribute/change-the-navigation/#a-simple-example","title":"A simple example","text":"

    The easiest way to explain how the navigation is defined is to look at an example. Let's say mkdocs.yml contains:

nav:\n  - Welcome: index.md\n  - ACCESS-NRI: ACCESS-NRI/ACCESS-NRI.md\n  - Community:\n    - Generate Bathymetry: community/bathymetry.md\n  - How to contribute:\n    - How to contribute: help/how_to_contribute.md\n    - Setup: help/contribution_setup.md\n    - Modify the documentation: help/modify_documentation.md\n    - Change the navigation: help/change_navigation.md\n

The top-level category names define the tabs in the header bar. So here we have the tabs: \"Welcome\", \"ACCESS-NRI\", \"Community\" and \"How to contribute\". Each name is also used for the top section under its tab.

The second level of categories indicates the name of each page under that section. So the \"ACCESS-NRI\" tab has its text directly under the \"ACCESS-NRI\" section. The \"Community\" tab has a section called \"Community\" that contains one page, \"Generate Bathymetry\". Finally, the \"How to contribute\" tab has one section, \"How to contribute\", with four pages.

    The filenames indicate the path to the file relative to the docs/ folder containing the text for each page. It is recommended to use the title of each file (i.e. the heading level 1) or an abbreviation of it as the name of the page and the filename.

    "},{"location":"contribute/change-the-navigation/#add-sections-to-a-tab","title":"Add sections to a tab","text":"

    It is possible to define several sections per tab by using more levels of indentation. For example, to add a \"My example\" section to the \"How to contribute\" tab:

nav:\n  - Welcome: index.md\n  - ACCESS-NRI: ACCESS-NRI/ACCESS-NRI.md\n  - Community:\n    - Generate Bathymetry: community/bathymetry.md\n  - How to contribute:\n    - How to contribute: help/how_to_contribute.md\n    - Setup: help/contribution_setup.md\n    - Modify the documentation: help/modify_documentation.md\n    - Change the navigation: help/change_navigation.md\n    - My example:\n      - Beautiful example: help/beautiful_example.md\n

    will create this navigation:

    "},{"location":"contribute/edit-locally/","title":"Edit locally on your computer","text":"

If you want to submit a substantial contribution to ACCESS-Hive, it might be easier to do so from your own computer. In particular, it is a lot easier to proceed locally if you need to modify several files or want to modify the navigation of the site.

    You can avoid creating your own fork for the repository by first becoming a member of ACCESS-Hive organisation. To become a member, please reply to this issue and ask to be invited to join the organisation.

    To work locally, you will need git and a text editor installed on your computer.

    "},{"location":"contribute/edit-locally/#open-an-issue","title":"Open an issue","text":"

For all additions or modifications to the ACCESS-Hive site, it is recommended to start by opening an Issue in the ACCESS-Hive GitHub repository. Consider assigning the Issue to yourself in the right-hand side panel if you intend to work on it and you are a member of the ACCESS-Hive organisation.

    "},{"location":"contribute/edit-locally/#obtaining-the-source-files","title":"Obtaining the source files","text":"

    Everything is stored within the ACCESS-Hive repository on GitHub and you simply need to clone this repository to your local machine:

    git clone git@github.com:ACCESS-Hive/access-hive.github.io.git\n
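The command above assumes you have an SSH key set up with GitHub; if you don't, you can clone over HTTPS instead:

git clone https://github.com/ACCESS-Hive/access-hive.github.io.git\n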
    "},{"location":"contribute/edit-locally/#edit-to-access-hive","title":"Edit to ACCESS-Hive","text":"

    Once you have installed all you need, you will need to follow the usual series of steps when contributing to Open Source developments:

    • open an Issue
    • clone the repository locally
    • start a branch to work on, linked to the Issue
    • commit your modifications to that branch and push to GitHub
    • open a pull request between the main branch and your branch, follow the instructions from the Pull request template.
    • ask for reviews by tagging the ACCESS-Hive/reviewers team and reply to requests for changes

    If you don't know how to do these steps, please refer to our Git and GitHub training.
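As a sketch, the Git side of that workflow might look as follows (the branch name, issue number and file path are hypothetical):

git checkout -b 123-add-my-link                        # branch linked to hypothetical issue #123\ngit add docs/path/to/page.md                           # stage your edited page\ngit commit -m \"Add link to new documentation (#123)\"   # commit to the branch\ngit push -u origin 123-add-my-link                     # push, then open a pull request on GitHub\n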

    What page to edit

    If you have problems finding the page you need to edit, the easiest way is to head to the ACCESS-Hive site. If you click on the pen icon at the top right of each page title, you will open a GitHub page showing you the path to the file you want to edit.

    Headers and table of content

    The level 1 headers are reserved for the title of the page and are ignored from the pages' table of contents. Only use level 2 headers and higher to organise pages.

    "},{"location":"contribute/edit-locally/#add-a-new-event","title":"Add a new event","text":"

    The process to add a new event is a bit different from other updates on the site. Since you need to create a new file to contain the information about the event you are adding, it is recommended to work locally. You need to create a new Markdown file (identified by the .md extension) as described on this page. To record and submit your modification to the site, make sure you follow all the steps as explained in the Open Source process in the previous section.

    "},{"location":"contribute/edit-locally/#preview-of-the-documentation","title":"Preview of the documentation","text":""},{"location":"contribute/edit-locally/#preview-from-a-pull-request","title":"Preview from a Pull Request","text":"

    When a pull request is created or updated, GitHub will automatically build a preview of the documentation that includes the proposed changes.

    In the pull request, you will see the link to the preview appear in this fashion:

    Build delay

    It can take a while for the preview to build, even after the CI check is indicated as finished. Please wait for the comment with the link to appear and allow for some time after that for the preview to be properly deployed.

    If you open the preview and it looks completely broken or if it hasn't updated from additional modifications in the pull request, it probably means the site hasn't finished building yet. If you wait a couple of minutes and refresh the page, it should fix itself.

    "},{"location":"contribute/edit-locally/#local-preview-if-editing-on-your-own-computer","title":"Local preview (if editing on your own computer)","text":"

    MkDocs includes a live preview server, so you can preview your changes as you write your documentation. The server will automatically rebuild the site upon saving.

    "},{"location":"contribute/edit-locally/#software-installation","title":"Software installation","text":"

To build the site locally, you need to install Material for MkDocs and other plugins. You can find a full list in the requirements.txt file in the root of the ACCESS-Hive repository. Please use pip for the installation, as some of the packages are not updated or not available on conda:

    pip install -r requirements.txt\n
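If you prefer to keep these packages separate from the rest of your Python setup, a virtual environment works well. A minimal sketch, assuming python3 is on your path (the environment name is arbitrary):

python3 -m venv hive-env           # create an isolated environment\nsource hive-env/bin/activate       # activate it\npip install -r requirements.txt    # install the site's dependencies into it\n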
    "},{"location":"contribute/edit-locally/#start-the-server","title":"Start the server","text":"

    To start the server, open a terminal and navigate to your ACCESS-Hive local repository. Now type:

    mkdocs serve\n

Your documentation will be built at http://127.0.0.1:8000. Open this URL in your browser to see a preview of the documentation. The URL is given in the terminal when running the mkdocs serve command. Make sure you keep the command running so as to see live updates when saving your modifications.
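If port 8000 is already taken on your machine, MkDocs can serve on a different address via its --dev-addr option (the port below is just an example):

mkdocs serve --dev-addr localhost:8080\n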

    "},{"location":"contribute/edit-on-github/","title":"Edit directly on GitHub","text":"

    This way to edit the site allows people with no knowledge of Git to contribute to ACCESS-Hive but is only suitable for light modifications of existing pages.

• Go to the page you want to modify on the ACCESS-Hive documentation site. At the right of the title, you will see a pen icon. When you click on this icon, your browser will open the file in GitHub, allowing you to edit the file.
    • Enter your modification in the main pane. All the files are written in Markdown.
    Headers and table of content

    The level 1 headers are reserved for the title of the page and are ignored from the pages' table of contents. Only use level 2 headers and higher to organise pages.

    • Then add a commit message in the Commit changes box.
    • Commit and open a pull request
    Pull request is required

    The main branch of the repository is protected and nobody can write to it directly. You will need to choose either to create a new branch (for ACCESS-Hive organisation members only) or to create a fork on your GitHub personal account (for non-members of ACCESS-Hive organisation) and then open a pull request in all cases.

    When creating the pull request, make sure to follow the instructions given to you in the pull request template. The description can be edited at any time. You can fill in the check list after creating the pull request. The pull request will automatically build a preview of the documentation with your proposed changes.

    • Ask for a review by tagging the @ACCESS-Hive/reviewers team in a comment.

    • Reply to the review. You will be notified by email of any subsequent comment, request or action from the reviewer on this pull request. Please make sure you take any action required by the reviewer or your modification will not be accepted into the ACCESS-Hive site.

    "},{"location":"contribute/edit-on-github/#further-edits","title":"Further edits","text":"

    During the review process, you might be requested to edit your proposed changes. For this, you will need to navigate to the branch created by GitHub.

    • At the top of the Pull request window on GitHub, you should see a link to your branch, circled in red on the image:
    • Once you click on this link, navigate to and open the file you need to modify, then click on the pen icon in the toolbar on the right, circled in red on the image:
    • Then commit your changes once again to the same branch. This will update the pull request and the preview of the site.

    • You need to let the reviewer know once you are confident you have responded to all their concerns, so they can review again. For this, locate the \"Reviewers\" pane in the right-hand side menu list on GitHub and click the icon circled in red in the image:

    "},{"location":"contribute/reviewers/","title":"Reviewing for ACCESS-Hive","text":"

    Any member of the ACCESS-Hive GitHub Community (to join) can become part of the reviewers team. Please ask one of the maintainers to invite you to the team.

    "},{"location":"contribute/reviewers/#reviewer-guidelines","title":"Reviewer Guidelines","text":"

    Firstly, thank you so much for helping to review links submitted to the ACCESS-Hive, we\u2019re delighted to have your help. This document is designed to outline our editorial guidelines and help you understand our requirements for accepting a pull request (PR).

    "},{"location":"contribute/reviewers/#guiding-principles","title":"Guiding principles","text":"

    If the submitting authors have followed the contribution guidelines, the review should be rapid. An important requirement is that ACCESS-Hive is a portal to documentation: it does not host the documentation itself.

    For those PRs that don\u2019t quite meet the requirements, please try to give clear feedback on what needs fixing. Our goal is to maintain a high quality platform for exchanging links to relevant documentation and you, as a reviewer, have a key role to play.

    A review involves checking submissions against a checklist of essential features and details described at the top of each PR. This should be objective, not subjective; it should be based on the materials in the submission as perceived without distortion by personal feelings, prejudices, or interpretations.

    Some continuous tests such as hyperlink references checks and preview deployments are automatically triggered by submitting a PR.

    Reviewers should:

    1. Ensure that the tests are passing without errors.
    2. Do a visual check using the preview.
    3. Do a GitHub pull request review (see GitHub's extensive documentation).
    4. Once you have approved the PR, tag the editors team @ACCESS-Hive/editors in the discussion.

    We encourage reviewers to provide feedback from within the PR discussion.

    You can include in your review links to any new issues that you, as the reviewer, believe to be impeding the acceptance of the pull request.

    "},{"location":"contribute/standalone-documentation/","title":"Standalone documentation","text":"

    You may have very valuable resources to share which are not currently available through any website; this is what we call "standalone documentation" in the current context.

    To contribute these resources to ACCESS-Hive, you will first need to make them available on the internet. Below are some ideas on how to do that:

    1. Check if your organisation has a documentation or information site suitable for your resource.
    2. Check if ACCESS-NRI has a documentation site suitable for your resource. Note in this case, you will be asked about ownership and license associated with your resource.
    3. Publish your documentation via Zenodo. This will clarify the licensing and reuse conditions.

    If none of the previous options seem suitable to you, please consult the forum.

    "},{"location":"model_evaluation/","title":"Model Evaluation and Diagnostics (MED)","text":"

    Model evaluation is about measuring how fit for purpose a particular model is.

    If you are new to model evaluation and diagnostics, we recommend you read our Getting Started with MED page.

    Here, we provide catalogs and pointers to observational data as well as model data that can be used for evaluation. We provide tools to process such data into a comparable format and a gallery of recipes to evaluate the formatted data.

    Observational Data Catalog Model Data Catalog Data Format Processing Evaluation Recipe Gallery

    Our vision: PLACEHOLDER FOR OUTCOME OF STAFF RETREAT

    "},{"location":"model_evaluation/model_evaluation_data_processing/","title":"Data Processing Tools","text":"

    On this page, we will provide a list of curated data processing tools.

    While we are still ramping up this service, please take a look at the gallery of community tools on Community Resources -> Community Data Processing Tools {{ community }}.

    "},{"location":"model_evaluation/model_evaluation_live_diagnostics/","title":"Live Diagnostics on Gadi","text":"

    Here, we will describe the tools we provide for \"live-diagnostics\" of the ACCESS configurations. These tools allow you to check model output/progress/failures at specified time steps while your model is running.

    "},{"location":"model_evaluation/model_evaluation_observational_catalogs/","title":"Observational Data Catalog","text":"

    As of June 2023, we provide the following observational data catalogs on Gadi@NCI through the project kj13:

    /g/data/kj13/datasets\n\u251c\u2500\u2500 cmip6\n\u2502   \u2514\u2500\u2500 *.nc\n\u251c\u2500\u2500 esmvaltool\n\u2502   \u2514\u2500\u2500 obsdata-v2/\n\u2502       \u251c\u2500\u2500 Tier1\n\u2502       \u2502   \u251c\u2500\u2500 AIRS\n\u2502       \u2502   \u2502   \u2514\u2500\u2500 *.nc\n\u2502       \u2502   \u2514\u2500\u2500  ...\n\u2502       \u251c\u2500\u2500 Tier2\n\u2502       \u2502   \u251c\u2500\u2500 BerkeleyEarth\n\u2502       \u2502   \u2502   \u2514\u2500\u2500 *.nc\n\u2502       \u2502   \u2514\u2500\u2500  ...\n\u2502       \u2514\u2500\u2500 Tier3\n\u2502           \u251c\u2500\u2500 ERA5\n\u2502           \u2502   \u2514\u2500\u2500 OBS6_ERA5_reanaly_v1_Amon_pr_197901-202012.nc\n\u2502           \u2514\u2500\u2500  ...\n\u251c\u2500\u2500 ilamb\n\u2502   \u2514\u2500\u2500 DATA\n\u2502       \u251c\u2500\u2500 albedo\n\u2502       \u2502   \u251c\u2500\u2500 CERESed4.1\n\u2502       \u2502   \u2502   \u2514\u2500\u2500 albedo.nc\n\u2502       \u2502   \u2514\u2500\u2500 ...\n\u2502       \u2514\u2500\u2500 ...\n\u251c\u2500\u2500 iomb\n\u2502   \u2514\u2500\u2500 DATA\n\u2502       \u251c\u2500\u2500 silicate\n\u2502       \u2502   \u251c\u2500\u2500 GLODAP\n\u2502       \u2502   \u2502   \u2514\u2500\u2500 *albedo*.nc\n\u2502       \u2502   \u2514\u2500\u2500 ...\n\u2502       \u2514\u2500\u2500 ...\n

    If you want to use these datasets but do not yet have access to Gadi@NCI, please follow our instructions on how to Get Started with Model Evaluation.

    Dataset name Downloaded /g/data/kj13/datasets/ AIRS 2020-12-04 Tier1/AIRS AIRS-2-0 2020-12-04 Tier1/AIRS-2-0 AIRS-2-1 2021-07-14 Tier1/AIRS-2-1 ATSR 2020-12-04 Tier1/ATSR CALIOP 2020-12-04 Tier1/CALIOP CERES-EBAF 2020-12-04 Tier1/CERES-EBAF CFSR 2020-12-04 Tier1/CFSR CloudSat 2020-12-04 Tier1/CloudSat ESACCI-GHG 2020-12-04 Tier1/ESACCI-GHG GPCP-SG 2020-12-04 Tier1/GPCP-SG ISCCP 2021-04-08 Tier1/ISCCP JRA-55 2022-11-15 Tier1/JRA-55 MODIS 2020-12-04 Tier1/MODIS SSMI 2020-12-04 Tier1/SSMI SSMI-MERIS 2021-10-19 Tier1/SSMI-MERIS TRMM-L3 2020-12-04 Tier1/TRMM-L3 ghgcci 2020-12-04 Tier1/ESACCI-GHG BerkeleyEarth 2020-12-04 Tier2/BerkeleyEarth CALIPSO-GOCCP 2023-01-28 Tier2/CALIPSO-GOCCP CERES-EBAF 2022-08-12 Tier2/CERES-EBAF CRU 2021-07-14 Tier2/CRU CT2019 2020-12-04 Tier2/CT2019 CowtanWay 2020-12-04 Tier2/CowtanWay Duveiller2018 2020-12-04 Tier2/Duveiller2018 E-OBS 2020-12-04 Tier2/E-OBS ESACCI-AEROSOL 2023-01-28 Tier2/ESACCI-AEROSOL ESACCI-CLOUD 2023-01-28 Tier2/ESACCI-CLOUD ESACCI-FIRE 2023-01-28 Tier2/ESACCI-FIRE ESACCI-LANDCOVER 2023-01-28 Tier2/ESACCI-LANDCOVER ESACCI-LST 2022-01-26 Tier2/ESACCI-LST ESACCI-OC 2022-01-26 Tier2/ESACCI-OC ESACCI-OZONE 2023-01-28 Tier2/ESACCI-OZONE ESACCI-SEA-SURFACE-SALINITY 2022-01-26 Tier2/ESACCI-SEA-SURFACE-SALINITY ESACCI-SOILMOISTURE 2023-01-28 Tier2/ESACCI-SOILMOISTURE ESACCI-SST 2023-01-28 Tier2/ESACCI-SST ESRL 2021-04-08 Tier2/ESRL Eppley-VGPM-MODIS 2020-12-05 Tier2/Eppley-VGPM-MODIS GCP2018 2021-11-08 Tier2/GCP2018 GCP2020 2021-11-08 Tier2/GCP2020 GHCN 2023-01-28 Tier2/GHCN GHCN-CAMS 2020-12-05 Tier2/GHCN-CAMS GISTEMP 2020-12-05 Tier2/GISTEMP GLODAP 2022-04-29 Tier2/GLODAP GPCC 2021-04-08 Tier2/GPCC HALOE 2023-01-28 Tier2/HALOE HadCRUT3 2023-01-28 Tier2/HadCRUT3 HadCRUT4 2023-01-28 Tier2/HadCRUT4 HadCRUT4-clim 2020-12-04 Tier2/HadCRUT4-clim HadCRUT5 2022-04-29 Tier2/HadCRUT5 HadISST 2023-01-28 Tier2/HadISST ISCCP-FH 2023-01-28 Tier2/ISCCP-FH JRA-25 2022-11-24 Tier2/JRA-25 Kadow2020 2022-08-12 Tier2/Kadow2020 Landschuetzer2016 2020-12-05 Tier2/Landschuetzer2016 Landschuetzer2020 2022-11-15 Tier2/Landschuetzer2020 MOBO-DIC_MPIM 2022-11-15 Tier2/MOBO-DIC_MPIM NCEP 2023-01-28 Tier2/NCEP NCEP-DOE-R2 2023-01-28 Tier2/NCEP-DOE-R2 NCEP-NCAR-R1 2023-01-28 Tier2/NCEP-NCAR-R1 NOAA-CIRES-20CR 2023-01-28 Tier2/NOAA-CIRES-20CR NOAAGlobalTemp 2022-08-12 Tier2/NOAAGlobalTemp OSI-450-nh 2020-12-05 Tier2/OSI-450-nh OSI-450-sh 2020-12-05 Tier2/OSI-450-sh OceanSODA-ETHZ 2022-11-17 Tier2/OceanSODA-ETHZ PATMOS-x 2023-01-28 Tier2/PATMOS-x PERSIANN-CDR 2020-12-05 Tier2/PERSIANN-CDR PHC 2020-12-05 Tier2/PHC PIOMAS 2020-12-05 Tier2/PIOMAS REGEN 2020-12-05 Tier2/REGEN Scripps-CO2-KUM 2022-04-29 Tier2/Scripps-CO2-KUM TCOM-CH4 2023-01-28 Tier2/TCOM-CH4 TCOM-N2O 2023-01-28 Tier2/TCOM-N2O WFDE5 2021-07-14 Tier2/WFDE5 WOA 2021-10-19 Tier2/WOA AURA-TES 2023-06-14 Tier3/AURA-TES CALIPSO-ICECLOUD 2023-06-14 Tier3/CALIPSO-ICECLOUD CDS-XCH4 2023-06-14 Tier3/DS-XCH4 CDS-XCO2 2023-06-14 Tier3/CDS-XCO2 ERA-Interim 2022-11-24 Tier3/ERA-Interim ERA-Interim-Land 2021-09-14 Tier3/ERA-Interim-Land ERA5 2021-02-12 Tier3/ERA5 ERA5-Land 2023-06-14 Tier3/ERA5-Land FLUXCOM 2022-01-26 Tier3/FLUXCOM HWSD 2023-06-14 Tier3/HWSD LandFlux-EVAL 2023-06-14 Tier3/LandFlux-EVAL MAC-LWP 2023-06-14 Tier3/MAC-LWP NIWA-BS 2023-06-16 Tier3/NIWA-BS"},{"location":"model_evaluation/model_evaluation_observational_catalogs/#todo-tier2","title":"ToDo Tier2","text":"Dataset name Downloaded /g/data/kj13/datasets/ ESDC 
Tier2/"},{"location":"model_evaluation/model_evaluation_observational_catalogs/#todo-tier3","title":"ToDo Tier3","text":"Dataset name Downloaded /g/data/kj13/datasets/ APHRO-MA Tier3/ CDS-SATELLITE-ALBEDO Tier3/ CDS-SATELLITE-LAI-FAPAR Tier3/ CDS-SATELLITE-SOIL-MOISTURE Tier3/ CDS-UERRA Tier3/ CERES-SYN1deg Tier3/ CLARA-AVHRR Tier3/ CLOUDSAT-L2 Tier3/ ESACCI-WATERVAPOUR Tier3/ GRACE Tier3/ JMA-TRANSCOM Tier3/ LAI3g Tier3/ MERRA2 Tier3/ MLS-AURA Tier3/ MTE Tier3/ NDP Tier3/ NIWA-BS Tier3/ NSIDC-0116-nh Tier3/ NSIDC-0116-sh Tier3/ UWisc Tier3/"},{"location":"model_evaluation/model_evaluation_recipe_gallery/","title":"Model Evaluation and Diagnostics (MED) Recipe Gallery","text":"

    While we are still building this gallery, please have a look at the Community MED Recipes listed at Community Resources -> Community Model Evaluation Recipes {{ community }}

    Here, we plan to provide you with an embedded link to our actively maintained Model Evaluation and Diagnostics (MED) Recipe Gallery, hosted at medportal.herokuapp.com. For now, we provide a placeholder image with a link and pointers to useful Model Evaluation and Diagnostics (MED) resources.

    "},{"location":"model_evaluation/model_evaluation_recipe_gallery/#link-to-our-med-recipe-gallery-supported","title":"Link to our MED Recipe Gallery {{ supported }}","text":""},{"location":"model_evaluation/model_evaluation_getting_started/","title":"Getting Started with Model Evaluation at NCI","text":"

    Welcome to Model Evaluation and Diagnostics!

    Here, we provide the information you need to access the large datasets that we curate on NCI's storage, and show you how to use them to assess how fit for purpose specific models are, in particular by comparing them to observational data:

    1) Getting access to NCI and relevant NCI projects 2) Setting up environments on Gadi@NCI to load and evaluate observational and model data

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/","title":"Getting Started: Computing Access (Gadi@NCI)","text":"

    Here, we provide the information you need to get access to the large datasets that we curate on NCI's storage:

    1) Get an NCI Account 2) Join relevant NCI projects 3) Logging in to Gadi@NCI 4) Computing on Gadi

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#1-nci-account","title":"1) NCI Account","text":"

    To be able to work with our data, you need an NCI account.

    If you don't have one yet, sign up here.

    Note: You will need an institutional email address with an organisation that allows access to NCI (e.g., CSIRO, a university, etc.).

    Once you have signed up, you will be allocated a username. We will refer to this username (e.g. kf1234) as $USER.

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#2-join-relevant-nci-projects","title":"2) Join relevant NCI projects","text":"

    There is a plethora of NCI projects that may or may not be relevant for you.

    We recommend you have a chat with your supervisor to identify the relevant projects, but in any case we suggest joining xp65 for MED code as well as kj13 for MED data.

    To get this conversation started, we list some possibly relevant projects below:

    Project Description with link, * indicates compute resource ACCESS-NRI projects tm70 ACCESS-NRI Working Project * iq82 ACCESS-NRI MED Compute * kj13 ACCESS-NRI MED Data Dev ct11 ACCESS-NRI Replicated Datasets xp65 ACCESS-NRI Analysis Environments ACCESS projects access ACCESS software sharing p66 ACCESS - AOGCM / support development of the ACCESS modelling system * p73 ACCESS Model Output Archive (AOGCM) Data projects hh5 Climate-LIEF Data Storage ub7 Seasonal Prediction ACCESS-S1 Hindcast ux62 Seasonal Prediction ACCESS-S2 Hindcast cb20 ESGF CMIP3 Replication Data al33 ESGF CMIP5 Replication Data rr3 ESGF CMIP5 Australian Data Publication oi10 ESGF CMIP6 Replication Data fs38 ESGF CMIP6 Australian Data Publication rt52 ERA5 Replicated Data: Single and pressure-levels data uc16 ERA5 Replicated Datasets on Potential Temperature & Vorticity Levels zz93 ERA5-Land Replicated Data zv2 Australian Gridded Climate Data (AGCD) Collection qv56 Reference Datasets for Climate Model Analysis/Forcing cj50 COSIMA Model Output Collection Other projects ik11 COSIMA shared working space v45 Ocean Extremes * ga6 Modelling the formation of sedimentary basins and continental margins * m18 Evolution and dynamics of the Australian lithosphere * q97 Earth dynamics and resources over the last billion years * qu79 Collaborative REAnalysis Technical Environment Intercomparison Project (CREATE-IP)

    To join a project or find more projects, please use this NCI website.

    The first project that you join will become your default login project, e.g. xp65. We will refer to it as $PROJECT and we show you how to change it below.

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#3-logging-in-to-gadinci","title":"3) Logging in to Gadi@NCI","text":"

    If you have never logged onto Gadi before, we recommend taking a look at NCI's Welcome to Gadi website. It provides all the important commands and information for logging onto Gadi properly, such as the following: "To run jobs on Gadi, you need to first log in to the system. Users on Mac/Linux can use the built-in terminal. For Windows users, we recommend using MobaXterm as the local terminal. Logging in to Gadi happens through a Gadi login node."

    When you login, via the command

    ssh -Y $USER@gadi.nci.org.au\n
    you will enter your $HOME directory with your default $PROJECT and your default SHELL. Both are saved at $HOME/.config/gadi-login.conf and you can print them via
    cat $HOME/.config/gadi-login.conf\n
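    As a sketch of what to expect (the exact keys and values depend on your account; xp65 and bash are just examples here), the file holds simple key-value entries, and editing the PROJECT entry is the usual way to change the default project used at your next login:

    PROJECT xp65\nSHELL /bin/bash\n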

    The -Y option is needed to run graphical tools: it enables the forwarding of trusted X protocol messages between the X server on your local system and X programs on Gadi. You need to enable the X Window System on your local system before running ssh. This can be done by running an X server such as XQuartz (Mac), MobaXterm (MS Windows), or startx or similar (Linux).

    Again, for more useful information we recommend checking out NCI's Welcome to Gadi website.

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#4-computing-on-gadi","title":"4) Computing on Gadi","text":""},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#gadi-resources","title":"Gadi Resources","text":"

    Coupled climate models like ACCESS-CM involve, among other things, calculation of complex mathematical equations that explain the physics of the atmosphere and oceans. Performed at hundreds of millions of points around the Earth, these calculations require vast computing power to complete them in a reasonable amount of time, thus relying on the power of high-performance computing (HPC) like Gadi. The Gadi supercomputer can handle more than 10 million billion (10 quadrillion) calculations per second and is connected to 100,000 Terabytes of high-performance research data storage.

    An overview of Gadi resources such as compute, storage and PBS jobs are described below.

    Useful NCI commands to check your available compute resources are:

    Command Purpose logout or Ctrl+D To exit a session hostname Displays login node details module list Modules currently loaded module avail Available modules nci_account -P [proj] Compute allocation for [proj] nqstat -P [proj] Jobs running/queued in [proj] lquota Storage allocation and usage for all your projects"},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#compute-hours","title":"Compute Hours","text":"

    Compute allocations are granted to projects instead of directly to users and, hence, you need to be a member of a project in order to use its compute allocation. To run jobs on Gadi, you need to have sufficient allocated compute hours available, where the job cost depends on the resources reserved for the job and the amount of walltime it uses.

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#storage","title":"Storage","text":"

    Each user has a project-independent $HOME directory, which has a storage limit of 10 GiB. All data on /home is backed up.

    Through project membership, the user gets access to the storage space within the project folders /scratch and /g/data filesystems for that particular project.

    "},{"location":"model_evaluation/model_evaluation_getting_started/access_to_gadi_at_nci/#pbs-jobs","title":"PBS Jobs","text":"

    To run compute tasks such as an ACCESS-CM suite on Gadi, users need to submit them as jobs to queues. Within a job submission, you can specify the queue, duration and computational resources needed for your job. When a job submission is accepted, it is assigned a jobID (shown in the return message) that can then be used to monitor the job\u2019s status.

    On job completion, the contents of the job's standard output and error streams are copied to files in the working directory named <jobname>.o<jobid> and <jobname>.e<jobid> respectively. Users should check these two log files before proceeding with post-processing of any output from the corresponding job.
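    For illustration, a minimal submit-and-check cycle might look like the following sketch, where job.sh stands for any PBS script, myjob is the name set via #PBS -N, and the jobID is a placeholder:

    qsub job.sh             # returns a jobID such as 12345678.gadi-pbs\nnqstat -P $PROJECT      # monitor the job's status in the queue\ncat myjob.o12345678     # standard output log, written on completion\ncat myjob.e12345678     # standard error log\n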

    "},{"location":"model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started/","title":"Getting Started with Model Evaluation at Gadi@NCI","text":"

    At this stage of Getting Started, we assume that you already have access to Gadi@NCI. If this is not the case, please go to our instructions on how to get access to Gadi@NCI

    Here we describe where you can find, load, and evaluate observational and model data on Gadi.

    Note: You do not automatically have access to all of Gadi's storage at /g/data/; you need to be part of a $PROJECT to see files at /g/data/$PROJECT. Furthermore, if you use Gadi's job submission system PBS (Portable Batch System), you need to request the relevant storage with a directive such as #PBS -l storage=gdata/xp65+gdata/kj13 (giving the job access to xp65 and kj13 in this example).

    "},{"location":"model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started/#21-access-med-our-currated-conda-environment-for-you-on-gadi","title":"2.1) access-med: Our currated conda environment for you on Gadi","text":"

    To avoid running multiple (different) versions of code on Gadi, we provide a conda environment called access-med that we curate for you (version 0.1 dates from June 2023).

    To switch to this environment, please execute the following commands after logging in to Gadi (and as part of your PBS scripts):

    $ module use /g/data/xp65/public/modules\n$ module load conda/access-med-0.1\n

    If you are planning to run your code through JupyterLab on NCI's ARE, you need to use /g/data/xp65/public/modules as Module directories and conda/are as Modules when launching a JupyterLab session.

    You are now able to use the tools of our curated environment, including python3, intake, jupyter, esmvaltool, and ilamb. The complete list of dependencies can be found in our dedicated GitHub repository.
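    A quick way to confirm the environment is active is to query one of its tools; a sketch (the exact version printed will depend on the release you loaded):

    python3 -c \"import intake; print(intake.__version__)\"\nilamb-run --help\n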

    "},{"location":"model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started/#22-observational-data","title":"2.2) Observational Data","text":"

    We provide an extensive collection of observational data on Gadi@NCI within the /g/data/kj13/datasets directory.

    Please take a look at our Observational Data Catalog for an overview.

    "},{"location":"model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started/#23-model-data","title":"2.3) Model Data","text":"

    There are many models and much data stored on Gadi, as you can imagine from the plethora of projects in Section 1.2. Downloading this data is hardly practical, so we suggest working on Gadi instead.

    To avoid endless searches within Gadi's storage, we have written a useful 'library' tool, called access-nri-catalog, that allows you to search the Model Catalogs easily and is already loaded as part of our access-med conda environment. To find out how you can search for Model Data on Gadi, take a look at our Model Catalog.

    "},{"location":"model_evaluation/model_evaluation_getting_started/model_evaluation_getting_started/#25-model-evaluation","title":"2.5) Model Evaluation","text":"

    Now that you have both Observational Data and Model Data at your fingertips on Gadi@NCI, there are many ways to evaluate this data.

    As part of our ACCESS-NRI conda environment, we provide several Model Evaluation Tools, like ilamb or esmvaltool.

    Check out Model Evaluation at Gadi to find out how you can use them on Gadi.

    "},{"location":"model_evaluation/model_evaluation_model_catalogs/","title":"ACCESS-NRI intake Model Catalog","text":"

    ACCESS-NRI hosts a number of model datasets for you on National Computational Infrastructure (NCI) storage.

    We have set up an ACCESS-NRI intake Catalog package that allows you to easily search and load the model data on this storage. The premise of this ACCESS-NRI intake Catalog is to provide a (\"meta\") catalog of intake-esm (\"sub\") catalogs, which each correspond to different \"experiments\".

    Search for a model in the ACCESS-NRI intake Catalog Add your model data to the ACCESS-NRI intake Catalog"},{"location":"model_evaluation/model_evaluation_model_catalogs/model_evaluation_add_models/","title":"Add your model data to the ACCESS-NRI intake Catalog","text":"

    You've just run a new experiment, and now you want to create an intake-esm catalog for it?

    Look at this Tutorial {{ supported }} to learn how to add your own models.

    "},{"location":"model_evaluation/model_evaluation_model_catalogs/model_evaluation_search_models/","title":"Search for a model in the ACCESS-NRI intake Catalog","text":"

    To put the huge amount of data from different experiments on NCI storage at your fingertips, we provide a ("meta") catalog that you can query via Python as part of the intake package with our curated catalog plugin intake.cat.access_nri {{ supported }}.

    To use this catalog, you need access to NCI's Gadi. Check out our Get Started with ACCESS at NCI {{ supported }} guide on how to get access.

    Once logged in to Gadi, you will need to add the access-nri-catalog to your conda environments and start an ARE JupyterLab Session. Check out our ACCESS-NRI Intake Catalog guide {{ supported }} for the specific setup (note that you can only read in data from specific experiments if they are loaded through the Storage keyword).

    Once your JupyterLab session has started, you can access the intake catalog to load the data. Take a look at this Tutorial {{ supported }}.

    # Import packages for searching/loading/plotting\nimport intake\nfrom distributed import Client\nimport matplotlib.pyplot as plt\n\n# The search process is a 2-step one\n# Comparable to searching for a book in a library:\n# 1) You look for the right book/catalog sections\n# 2) You look for the right book/catalog in these sections\n\n# Load the ACCESS-NRI list of catalogs for available experiment data\n# Similar to an overview of library sections\naccess_nri_catalog_sections = intake.cat.access_nri\n\n# Perform a search for names, models, variables etc.\nexample_section_search = access_nri_catalog_sections.search(name=\"cmip6_oi10\")\n\n# Once you are sufficiently happy with your search, you can load the \"section\"\ncatalog_sections = access_nri_catalog_sections.search(name=\"025deg_jra55_iaf_omip2_cycle1\").to_source()\n# and start looking for the right catalogs of interest\ncatalogs_of_interest = catalog_sections.search(filename=\"ocean_scalar.*\")\n\n# Call the client that allows us to load the data efficiently\nclient = Client(threads_per_worker=1)\nclient.dashboard_link\n\n# Actually load the data\nexperiment_data = catalogs_of_interest.to_dataset_dict(progressbar=False)\n\n# Et voil\u00e0, you have loaded the data and can start plotting\nexperiment_data[\"ocean_scalar_snapshot.1day\"][\"temp_global_ave\"].plot(label=\"daily\")\nexperiment_data[\"ocean_scalar.1mon\"][\"temp_global_ave\"].plot(label=\"monthly\")\n_ = plt.legend()\n
    "},{"location":"model_evaluation/model_evaluation_on_gadi/","title":"Model Evaluation on Gadi","text":"

    To kick-start your model evaluation efforts, we provide the following tools as part of our access-med conda environment (and tutorials for how to use them on Gadi@NCI): - ilamb, a tool for International Land (and Ocean) Model Benchmarking. - esmvaltool, an Earth System Model Evaluation Tool

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_esmvaltool/","title":"Tutorial for using esmvaltool on Gadi@NCI","text":""},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/","title":"Tutorial for using ilamb on Gadi@NCI","text":"

    This tutorial explains how you can set up and run International Land Model Benchmarking (ILAMB) and International Ocean Model Benchmarking (IOMB) tests on NCI infrastructure. Both projects are maintained as Python code under the package name ilamb.

    The Tutorial contains:

    1. Background
    2. Installation guide
    3. Setup details
    4. Run ilamb
    5. Run ilamb on NCI
    6. Fix your setup with ilamb-doctor
    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#1-background-international-land-model-benchmarking-ilamb-and-international-ocean-model-benchmarking-iomb","title":"1. Background: International Land Model Benchmarking (ILAMB) and International Ocean Model Benchmarking (IOMB)","text":"

    As earth system models (ESMs) become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration project designed to improve the performance of land models and, in parallel, improve the design of new measurement campaigns to reduce uncertainties associated with key land surface processes.

    If you have already used (and installed) ilamb on NCI and know the basic principles of ilamb, you can start from Section 5) Guide for using ilamb on NCI.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#2-installing-ilamb","title":"2. Installing ilamb","text":"

    For NCI users, ACCESS-NRI provides a conda environment called ilamb_dev through the xp65 project, with ilamb installed. You can load and activate it via:

    >>> module use /g/data/xp65/public/modules\n>>> module load conda/ilamb_dev\n>>> conda activate ilamb_dev\n

    We will soon also add ilamb to the ACCESS-NRI MED conda environment, access-med, under project xp65.

    If you want to install ilamb yourself, please follow the official installation instructions at https://www.ilamb.org/doc/install.html.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#3-configuring-ilamb","title":"3. Configuring ilamb","text":"

    Before you can run ilamb, you need to configure a few things:

    3.1. Organise the ILAMB_ROOT path 3.2. Set up a config file 3.3. Set up modelroute and regions files (optional, if you want to run only a subset of models and/or specific regions of the world) 3.4. Download shapefiles locally (optional online, necessary offline, e.g. on NCI compute nodes)

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#31-organise-the-ilamb_root-path","title":"3.1 Organise the ILAMB_ROOT path","text":"

    ilamb requires files to be organised in a specific directory structure of DATA and MODELS.

    If you do not have your own files yet, you can download and use the example files provided as part of ilamb's First Steps Tutorial.

    The following tree represents the organization of the contents of this extracted sample data (note: we renamed the main directory):

    $ILAMB_ROOT (renamed from \"ILAMB_sample\")\n\u251c\u2500\u2500 sample.cfg (see [Section 3.2](#32-set-up-a-config-file))\n\u251c\u2500\u2500 modelroute.txt (optional, see [Section 3.3](#33-set-up-modelroute-and-regions-files))\n\u251c\u2500\u2500 regions.txt (optional, see [Section 3.3](#33-set-up-modelroute-and-regions-files))\n\u251c\u2500\u2500 DATA\n\u2502   \u251c\u2500\u2500 albedo\n\u2502   \u2502   \u2514\u2500\u2500 CERES\n\u2502   \u2502       \u2514\u2500\u2500 albedo_0.5x0.5.nc\n\u2502   \u2514\u2500\u2500 rsus\n\u2502       \u2514\u2500\u2500 CERES\n\u2502           \u2514\u2500\u2500 rsus_0.5x0.5.nc\n\u2514\u2500\u2500 MODELS\n    \u2514\u2500\u2500 CLM40cn\n        \u251c\u2500\u2500 rsds\n        \u2502   \u2514\u2500\u2500 rsds_Amon_CLM40cn_historical_r1i1p1_185001-201012.nc\n        \u2514\u2500\u2500 rsus\n            \u2514\u2500\u2500 rsus_Amon_CLM40cn_historical_r1i1p1_185001-201012.nc\n

    There are two main branches in this directory. The first is the DATA directory: this is where we keep the observational datasets, each in a subdirectory bearing the name of the variable. While not strictly necessary to follow this form, it is a convenient convention. The second branch is the MODELS directory, in which we see a single model result from CLM.
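    If you are starting from scratch rather than extracting the sample data, you can create the skeleton of this layout directly; a minimal sketch (adjust the root path to wherever you want ilamb to work):

    export ILAMB_ROOT=$PWD/ILAMB_ROOT\nmkdir -p $ILAMB_ROOT/DATA $ILAMB_ROOT/MODELS\n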

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#311-add-files-to-data","title":"3.1.1 Add files to DATA","text":"

    There is a lot of DATA available to add. Take a look at https://www.ilamb.org/ILAMB-Data/ and https://www.ilamb.org/IOMB-Data/ for extensive lists for ILAMB-Data (land modelling) and IOMB-Data (ocean modelling).

    ilamb has a command-line tool to add new DATA in the appropriate substructure. To fetch all available DATA from the website, you can simply go to your $ILAMB_ROOT and type

    >>> ilamb-fetch\n

    The command will compare the above website with your current DATA and make suggestions for downloads:

    Comparing remote location:\n\n      https://www.ilamb.org/ILAMB-Data/\n\nTo local location:\n\n      ./\n\nI found the following files which are missing, out of date, or corrupt:\n\n      .//DATA/twsa/GRACE/twsa_0.5x0.5.nc\n      .//DATA/rlus/CERES/rlus_0.5x0.5.nc\n      ... (we have abbreviated the extensive list here)\n\nDownload replacements? [y/n]\n

    You can use ilamb-fetch -h to see how you can adjust the remote and local locations. If you want to download IOMB-Data, you can for example use

    ilamb-fetch --remote_root https://www.ilamb.org/IOMB-Data/\n

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#312-add-files-to-model","title":"3.1.2 Add files to MODEL","text":"

    If you want to add your own MODEL to ilamb, you can do so by following this description.

    For NCI users, our ilamb_dev conda environment already provides all observational datasets from the official ilamb website, as well as the ACCESS-ESM1.5 model results, at ILAMB_ROOT. Stay tuned for more observational and model data, or tell us which ones we should definitely add.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#32-set-up-a-config-file","title":"3.2 Set up a config file","text":"

    Now that we have the data, we need to set up a config file which the ilamb package will use to initiate a benchmark study.

    ilamb provides default config files here.

    Below, we explain which variables you can define, but start by showing you the minimum setup from the tutorial's sample.cfg file:

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#minimum-configure-file-with-a-direct-and-a-derived-variable","title":"Minimum configure file with a direct and a derived variable","text":"
    # This configure file specifies the variables\n\n[h1: Radiation and Energy Cycle]\n\n[h2: Surface Upward SW Radiation]\nvariable = \"rsus\"\n\n[CERES]\nsource   = \"DATA/rsus/CERES/rsus_0.5x0.5.nc\"\n\n[h2: Albedo]\nvariable = \"albedo\"\nderived  = \"rsus/rsds\"\n\n[CERES]\nsource   = \"DATA/albedo/CERES/albedo_0.5x0.5.nc\"\n

    In brief: this file allows you to create different header descriptions of the experiments (h1: top level for grouping of variables; h2: sub-level for each variable), but most importantly the variables that we will look into and their sources. In the example, rsus (Surface Upward Shortwave Radiation) and albedo are the variables used. The latter is actually derived from two variables by dividing the Surface Upward Shortwave Radiation by the Surface Downward Shortwave Radiation, derived = rsus/rsds. Finally, sources are defined as source under a plain header without an h1 or h2 tag.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#changing-configure-file-variables","title":"Changing configure file variables","text":"

    This is the list of all the variables you can modify in the config file:

    source              = None\n# Full path to the observational dataset\n\ncmap                = \"jet\"\n# The colormap to use in rendering plots (default is 'jet')\n\nvariable            = None\n# Name of the variable to extract from the source dataset\n\nalternate_vars      = None\n# Other accepted variable names when extracting from models\n\nderived             = None\n# An algebraic expression which captures how the confrontation variable may be generated\n\nland                = False\n# Enable to force the masking of areas with no land (default is False)\n\nbgcolor             = \"#EDEDED\"\n# Background color\n\ntable_unit          = None\n# The unit to use when displaying output in tables on the HTML page\n\nplot_unit           = None\n# The unit to use when displaying output on plots on the HTML page\n\nspace_mean          = True\n# Disable to compute sums of the variable over space instead of mean values\n\nrelationships       = None\n# A list of confrontations whose data we use to study relationships; the syntax is \"h2 tag/observational dataset\". You will see the relationship part in the output if you specify a relationship.\n\nctype               = None\n# Choose a specific Confrontation class\n\nregions             = None\n# Specify the regions of confrontation\n\nskip_rmse           = False\n# Skip the RMSE computation\n\nskip_iav            = True\n# Skip the interannual variability (IAV) computation\n\nmass_weighting      = False\n# If set to True, normalise using the mean of the data over the period\n\nweight              = 1\n# If a dataset has no weight specified, it is implicitly 1\n
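    To make this concrete, here is a hedged sketch of how a few of these options might combine in one confrontation; the colormap, units and weight are arbitrary choices, and the source path follows the sample layout above:

    [h2: Surface Upward SW Radiation]\nvariable   = \"rsus\"\ncmap       = \"viridis\"\ntable_unit = \"W m-2\"\nplot_unit  = \"W m-2\"\nweight     = 2\n\n[CERES]\nsource     = \"DATA/rsus/CERES/rsus_0.5x0.5.nc\"\n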

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#33-set-up-modelroute-and-regions-files","title":"3.3 Set up modelroute and regions files","text":"

    If you plan to run only a specific subset of models, you can define them in a modelroute.txt file. It could look like our specific example for running three realisations (r1, r2, and r3) of the ACCESS-ESM1.5 suite:

    # Model Name                    , PATH/TO/MODELS  , EXTRA COMMANDS\nACCESS_ESM1-5-r1i1p1f1          , MODELS/r1i1p1f1 , CMIP6\nACCESS_ESM1-5-r2i1p1f1          , MODELS/r2i1p1f1 , CMIP6\nACCESS_ESM1-5-r3i1p1f1          , MODELS/r3i1p1f1 , CMIP6\n... (abbreviated)\n
    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#34-downloadlink-shapefiles-files-locally","title":"3.4 Download/link shapefiles files locally","text":"

    You can download the shapefiles that are needed to run ilamb and cartopy offline here:

    • For Land: https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/physical/ne_110m_land.zip
    • For Ocean: https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/physical/ne_110m_ocean.zip

    Finally, you need to define that path as CARTOPY_DATA_DIR via

    export CARTOPY_DATA_DIR=/absolute/path/to/shapefiles/directory\n
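    Putting the pieces together for offline use, a sketch of downloading and unpacking both archives (the target directory ~/cartopy-data is arbitrary, and the subdirectory layout is an assumption based on cartopy's default data-directory structure of shapefiles/natural_earth/physical):

    mkdir -p ~/cartopy-data/shapefiles/natural_earth/physical\ncd ~/cartopy-data/shapefiles/natural_earth/physical\nwget https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/physical/ne_110m_land.zip\nwget https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/physical/ne_110m_ocean.zip\nunzip ne_110m_land.zip\nunzip ne_110m_ocean.zip\nexport CARTOPY_DATA_DIR=~/cartopy-data\n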

    Note: For NCI, we already provide shapefiles in a directory as part of project xp65. After joining the project, you can thus easily use

    export CARTOPY_DATA_DIR=/g/data/xp65/public/apps/cartopy-data\n

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#4-run-ilamb","title":"4. Run ilamb","text":""},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#41-ilamb-run","title":"4.1 ilamb-run","text":"

    Now that we have the configuration file set up, you can run the study using the ilamb-run script. Executing the following command in the $ILAMB_ROOT directory will run ilamb as specified in your sample.cfg, for all models under model_root and all regions (global) of the world:

    ilamb-run --config sample.cfg --model_root $ILAMB_ROOT/MODELS/ --regions global\n
    If you are on some institutional resource, you may need to launch the above command using a submission script, or request an interactive node. As the script runs, it will yield output which resembles the following:
    Searching for model results in /Users/ncf/sandbox/ILAMB_sample/MODELS/\n\n                                          CLM40cn\n\nParsing config file sample.cfg...\n\n                   SurfaceUpwardSWRadiation/CERES Initialized\n                                     Albedo/CERES Initialized\n\nRunning model-confrontation pairs...\n\n                   SurfaceUpwardSWRadiation/CERES CLM40cn              Completed  37.3 s\n                                     Albedo/CERES CLM40cn              Completed  44.7 s\n\nFinishing post-processing which requires collectives...\n\n                   SurfaceUpwardSWRadiation/CERES CLM40cn              Completed   3.3 s\n                                     Albedo/CERES CLM40cn              Completed   3.3 s\n\nCompleted in  91.8 s\n
    What happened here? First, the script looks for model results in the directory you specified in the --model_root option. It will treat each subdirectory of the specified directory as a separate model result. Here since we only have one such directory, CLM40cn, it found that and set it up as a model in the system. Next it parsed the configure file we examined earlier. We see that it found the CERES data source for both variables as we specified it. If the source data was not found or some other problem was encountered, the green Initialized will appear as red text which explains what the problem was (most likely MisplacedData). If you encounter this error, make sure that ILAMB_ROOT is set correctly and that the data really is in the paths you specified in the configure file.

    Next we ran all model-confrontation pairs. In our parlance, a confrontation is a benchmark observational dataset and its accompanying analysis. We have two confrontations specified in our configure file and one model, so we have two entries here. If the analysis completed without error, you will see a green Completed text appear along with the runtime. Here we see that albedo took a few seconds longer than rsus, presumably because we had the additional burden of reading in two datasets and combining them.

    The next stage is the post-processing. This is done as a separate loop to exploit some parallelism. All the work in a model-confrontation pair is purely local to the pair. Yet plotting results on the same scale implies that we know the maximum and minimum values from all models, and thus requires the communication of this information. Here, as we are plotting only over the globe and no extra regions, the plotting occurs quickly.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#42-run-specific-models-and-regions","title":"4.2 Run specific models and regions","text":"

    As mentioned in Section 3.3, you can adjust the models and regions that ilamb will run on. You can find more information in the ilamb tutorial. Calling ilamb-run with both specifications would look like:

    ilamb-run --config cmip.cfg --model_setup modelroute.txt --regions regions.txt\n
    where you call a specific config file (see Section 3.2) as well as specific model routes and regions with files (see Section 3.3).

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#43-viewing-the-benchmarking-output-in-your-browser","title":"4.3 Viewing the benchmarking output in your browser","text":"

    The whole process generates a directory of results within ILAMB_ROOT which by default is called _build. To view the results locally on your computer, navigate into this directory and start a local http server:

    python -m http.server\n
    You should see a message similar to this (or use http://0.0.0.0:8000/):
    Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...\n
    Open this link in your browser and you will see a webpage with a summary table in the center. As we have so few variables and a single model at this point, the table will be very simple:

    As we add more variables and models, this summary table helps you understand relative differences in scores among models. For now, clicking on a row of the table will expand it to reveal the underlying datasets used. Clicking on CERES will take you to another page which presents detailed scores and plots.
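    If the _build directory sits on Gadi rather than on your own computer, one option (a sketch, assuming you run the HTTP server on the same login node you tunnel into) is to forward the port over SSH and browse locally:

    ssh -L 8000:localhost:8000 $USER@gadi.nci.org.au\ncd $ILAMB_ROOT/_build && python3 -m http.server  # then open http://localhost:8000 on your own machine\n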

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#5-run-ilamb-on-nci","title":"5. Run ilamb on NCI","text":"

    If you followed the guides above, you should be familiar with how to set up ilamb.

    To run ilamb on NCI, you can either start an interactive setup Section 5.1 or use a non-interactive Portable Batch System (PBS) job Section 5.2.

    In both cases, you need to again define the variable $ILAMB_ROOT.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#51-ilamb_root-and-datamodel-on-nci","title":"5.1 ILAMB_ROOT and DATA/MODEL on NCI","text":"

    In our default setup, we will place ILAMB_ROOT and the shapefiles for cartopy directly in the home directory. First, you have to create the ILAMB_ROOT directory:

    mkdir $PWD/ILAMB_ROOT\n
    You can then simply export their paths after login as:
    export ILAMB_ROOT=$PWD/ILAMB_ROOT\nexport CARTOPY_DATA_DIR=/g/data/xp65/public/apps/cartopy-data\n

    You can of course change the path of the directory, but will need to take this into account for the PBS job by adding a command to change into the $ILAMB_ROOT directory (see PBS setup comments).

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#ilamb_rootdata-on-nci","title":"ILAMB_ROOT/DATA on NCI","text":"

    An extensive collection of DATA is provided in the kj13 project. You need to have joined the project on NCI to get access to this data.

    To create a symbolic link to this data, use the bash command

    ln -s /g/data/kj13/datasets/ilamb/DATA $ILAMB_ROOT/DATA\n

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#ilmab_rootmodel-on-nci","title":"ILMAB_ROOT/MODEL on NCI","text":"

    In the future, we will provide a symbolic link to a MODEL catalog for you as well.

    For now, you will need to create the directory $ILAMB_ROOT/MODELS and populate it with symbolic links to specific models yourself.

    In our example, we will use ACCESS-ESM1.5, which is provided on NCI as part of project fs38. You need to have joined the project on NCI to get access to this data.

    To link the ACCESS-ESM1.5 suite files to your $ILAMB_ROOT/MODELS, simply execute the following bash commands

    mkdir $ILAMB_ROOT/MODELS\nln -s /g/data/fs38/publications/CMIP6/CMIP/CSIRO/ACCESS-ESM1-5/historical/r* $ILAMB_ROOT/MODELS\n

    By the end of Sections 5.1.1 and 5.1.2, your $ILAMB_ROOT directory should look similar to:

    $ILAMB_ROOT\n\u251c\u2500\u2500 ...\n\u251c\u2500\u2500 DATA -> /g/data/kj13/datasets/ilamb/DATA\n\u2514\u2500\u2500 MODELS\n    \u251c\u2500\u2500 r10i1p1f1 -> /g/data/fs38/publications/CMIP6/CMIP/CSIRO/ACCESS-ESM1-5/historical/r10i1p1f1\n    \u251c\u2500\u2500 ... (abbreviated)\n    \u2514\u2500\u2500 r9i1p1f1 -> /g/data/fs38/publications/CMIP6/CMIP/CSIRO/ACCESS-ESM1-5/historical/r9i1p1f1\n

    These different models have a lot of subdirectories, which are important to keep in mind when defining the source parameter in your ilamb .cfg file. Note that the ilamb files end with .nc. For example, one of the rsus files for run r1i1p1f1 can be found (and used in your .cfg) under

    source = /g/data/fs38/publications/CMIP6/CMIP/CSIRO/ACCESS-ESM1-5/historical/r1i1p1f1/Amon/rsus/gn/files/d20191115/rsus_Amon_ACCESS-ESM1-5_historical_r1i1p1f1_gn_185001-201412.nc\n
    or shorter
    source = $ILAMB_ROOT/MODELS/r1i1p1f1/Amon/rsus/gn/files/d20191115/rsus_Amon_ACCESS-ESM1-5_historical_r1i1p1f1_gn_185001-201412.nc\n

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#52-portable-batch-system-pbs-jobs-on-nci","title":"5.2 Portable Batch System (PBS) jobs on NCI","text":"

    The following default PBS file, let's call it ilamb_test.sh, can help you set up your own, while making sure to use the correct project (#PBS -P) to charge your computing cost to:

    #!/bin/bash\n\n#PBS -N default_ilamb\n#PBS -P tm70\n#PBS -q normalbw\n#PBS -l ncpus=1\n#PBS -l mem=32GB           \n#PBS -l jobfs=10GB        \n#PBS -l walltime=00:10:00  \n#PBS -l storage=gdata/xp65+gdata/kj13+gdata/fs38\n#PBS -l wd\n\nmodule use /g/data/xp65/public/modules\nmodule load conda/access-med-0.1\n\nexport ILAMB_ROOT=$PWD/ILAMB_ROOT\nexport CARTOPY_DATA_DIR=/g/data/xp65/public/apps/cartopy-data\n\nilamb-run --config cmip.cfg --model_setup $PWD/modelroute.txt --regions global\n

    If you are not familiar with PBS jobs on NCI, you can find the guide here. In brief: this PBS script (which you can submit via the bash command qsub ilamb_test.sh) will submit a job to Gadi with the job name (#PBS -N) default_ilamb under project (#PBS -P) tm70 in a normal queue (#PBS -q normalbw), for 1 CPU (#PBS -l ncpus=1) with 32 GB RAM (#PBS -l mem=32GB), a walltime of 10 minutes (#PBS -l walltime=00:10:00) and access to 10 GB of local disk space (#PBS -l jobfs=10GB), as well as data storage access to projects xp65, kj13, and fs38 (again, note that you have to be a member of these projects on NCI). Upon starting, the job changes into the working directory that you submitted it from (#PBS -l wd) and loads the access-med conda environment. Finally, it exports the $ILAMB_ROOT and $CARTOPY_DATA_DIR paths and starts an ilamb-run.
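    Putting it together, a typical submit-and-monitor cycle for this script might look like the following sketch (the jobID shown is a placeholder):

    qsub ilamb_test.sh            # returns a jobID, e.g. 12345678.gadi-pbs\nnqstat -P tm70                # check the job's status in the queue\ncat default_ilamb.o12345678   # inspect the standard output once finished\n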

    In our example, we run the cmip.cfg file from the ilamb config-file GitHub repository.

    Note: If your ILAMB_ROOT and CARTOPY_DATA_DIR are not in the directory from where you submitted the job, then you need to adjust the export commands to their paths:

    export ILAMB_ROOT=/absolute/path/where/ILAMB_ROOT/actually/is\nexport CARTOPY_DATA_DIR=/absolute/path/where/shapefiles/actually/are\n

    Once the jobs are finished, you can again inspect the outcome as described in Section 4.3.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_ilamb/#6-fix-your-setup-with-ilamb_doctor","title":"6. Fix your setup with ilamb_doctor","text":"

    ilamb-doctor is a script you can use to diagnose missing model values, or more generally what is incorrect or missing from a given analysis. It takes options similar to ilamb-run and is used in the following way:

    [ILAMB/test]$ ilamb-doctor --config test.cfg --model_root ${ILAMB_ROOT}/MODELS/CLM\n\nSearching for model results in /Users/ncf/ILAMB//MODELS/CLM\n\n                               CLM40n16r228\n                               CLM45n16r228\n                               CLM50n18r229\n\nWe will now look in each model for the variables in the ILAMB configure\nfile you specified (test.cfg). The color green is used to reflect which\nvariables were found in the model. The color red is used to reflect that\na model is missing a required variable.\n\n                           Biomass/GlobalCarbon CLM40n16r228 biomass or cVeg\n                                    ... (abbreviated)\n                        Precipitation/GPCP2 CLM50n18r229 pr\n

    Here we have run the command on some inputs in our test directory. You will see a list of the confrontations we run and the variables which are required, or their synonyms. What is missing in this tutorial is the text coloring, which indicates whether a given model has the required variables.

    This concludes our introduction to basic ilamb usage. We hope you now have some understanding of ilamb and can't wait to use it. If you still have any questions, or if you want some developer-level support, you can find more detail in the official tutorial.

    "},{"location":"model_evaluation/model_evaluation_on_gadi/model_evaluation_on_gadi_pangeo_cosima/","title":"Tutorial for using pangeo & COSIMA on Gadi@NCI","text":"

    https://pangeo.io

    "},{"location":"models/","title":"ACCESS Models","text":"

    ACCESS is a family of related computer models or components that represent different parts of the Earth system. ACCESS links various model components through software called couplers to form different Model Configurations.

    "},{"location":"models/#access-model-components","title":"ACCESS Model Components","text":"Atmosphere Land Ocean Sea Ice Aerosols Atmospheric Chemistry Biogeochemistry Land Biogeochemistry Ocean Coupler"},{"location":"models/#access-model-configurations","title":"ACCESS Model Configurations","text":"ACCESS-CM ACCESS Coupled Model (CM) produces physical climate simulations by deploying the atmosphere, ocean, and sea-ice components. ACCESS-CM features improved fluid dynamics and a microphysical aerosol scheme. ACCESS-ESM ACCESS Earth System Model (ESM) simulates the carbon and other bio-chemical cycles within the climate system, by deploying the atmosphere, ocean, and sea-ice components. ACCESS-ESM is one of the two ACCESS global coupled model versions. ACCESS-OM ACCESS Ocean Model (OM) deploys the ocean and sea-ice components to provide the Australian climate community with ocean weather and climate data, including seasonal forecasting, climate variability, downscaling of climate in the marine environment around Australia, and ocean biogeochemistry."},{"location":"models/configurations/","title":"ACCESS Configurations","text":"

    The Configurations section is still in development.

    What to expect in the next few months?

    The ACCESS-NRI will release model configurations and experiments to the community, including a reference model output for each experiment.

    The model configurations and experiments will be documented in this section as they are released by ACCESS-NRI. The information will cover topics such as where to find a configuration or experiment, how to use a configuration to run your own experiment, and where to find the data produced by a released experiment.

    "},{"location":"models/configurations/access-am/","title":"ACCESS-AM {{ supported }}","text":"

    The ACCESS-AM model is a coupled model of the atmosphere and the land. The atmospheric component is the UM model, which comes coupled by default to the JULES land model. That is why the first ACCESS-AM configurations and experiments released will be UM-JULES configurations, but ACCESS-NRI is working to ensure subsequent releases of ACCESS-AM use the CABLE land model instead.

    "},{"location":"models/configurations/access-am/#getting-started-information","title":"Getting started information","text":"

    On this page, you will find information on how to gain access to the UM model and start using the model. You will also find links to various configurations and experiments you can use as a basis to design your experiment.

    "},{"location":"models/configurations/access-am/#configurations","title":"Configurations","text":""},{"location":"models/configurations/access-am/#experiments","title":"Experiments","text":"

    Some experiments already run with the UM are listed on:

    • CLEX CMS wiki
    "},{"location":"models/configurations/access-cm/","title":"ACCESS-CM {{ supported }}","text":"ACCESS-CM2 (ACCESS Coupled Model 2) is a global fully-coupled climate model that includes the atmosphere, ocean and sea-ice components, and produces physical climate simulations. ACCESS-CM2 is one of the two models run by the Australian climate community for the Coupled Model Intercomparison Project, CMIP."},{"location":"models/configurations/access-cm/#access-cm2-configurations","title":"ACCESS-CM2 configurations","text":"
    • Atmosphere model (UM10.6): N96 resolution (1.875\u00b0 x 1.25\u00b0, 85 levels). Physical model only \u2013 no carbon cycle.

    • Land surface model (CABLE2.5)

    • Ocean model (MOM5): Tripolar grid, 1\u00b0 resolution, 50 levels.

    • Sea ice model (CICE5.1)

      COMPONENT MODEL VERSION Atmosphere UM 10.6 Land Surface CABLE 2.5 (integrated in UM) Ocean MOM 5 Sea Ice CICE 5.1 Coupler OASIS-MCT 3

    ACCESS-NRI will release an ACCESS-CM model configuration. The first release of ACCESS-CM will be derived from the CSIRO ACCESS-CM2 configuration and will include atmosphere, land, ocean and sea ice components.

    "},{"location":"models/configurations/access-cm/#access-cm2-recommended","title":"ACCESS-CM2 {{ recommended }}","text":"

    Citation 1 | Tutorial

    ACCESS-CM2 1 is one of Australia\u2019s contributions to the World Climate Research Programme\u2019s Coupled Model Intercomparison Project Phase 6 (CMIP6). The component models are: UM10.6 GA7.1 for the atmosphere, CABLE2.5 for the land surface, MOM5 for the ocean, and CICE5.1.2 for the sea ice. Compared to previous model versions ACCESS-CM2 shows better global hydrological balance, more realistic ocean water properties (in terms of spatial distribution) and meridional overturning circulation in the Southern Ocean but a poorer simulation of the Antarctic sea ice and a larger energy imbalance at the top of atmosphere. This energy imbalance reflects a noticeable warming trend of the global ocean over the spin-up period.

    1. Daohua Bi, Martin Dix, Simon Marsland, Siobhan O'Farrell, Arnold Sullivan, Roger Bodman, Rachel Law, Ian Harman, Jhan Srbinovsky, Harun A Rashid, Peter Dobrohotoff, Chloe Mackallah, Hailin Yan, Anthony Hirst, Abhishek Savita, Fabio Boeira Dias, Matthew Woodhouse, Russell Fiedler, and Aidan Heerdegen. Configuration and spin-up of ACCESS-CM2, the new generation Australian Community Climate and Earth System Simulator coupled model. Journal of Southern Hemisphere Earth Systems Science, 70(1):225\u2013251, 2020.\u00a0\u21a9\u21a9

    "},{"location":"models/configurations/access-esm/","title":"ACCESS-ESM {{ supported }}","text":"

ACCESS-ESM stands for ACCESS Earth System Model. An Earth system model is a fully-coupled model that also includes carbon cycle components.

    ACCESS-NRI will release an ACCESS-ESM model configuration. The first release of ACCESS-ESM will be derived from the CSIRO ACCESS-ESM1.5 configuration and will include atmosphere, land and land biogeochemistry, ocean and ocean biogeochemistry, and sea ice components.

    "},{"location":"models/configurations/access-esm/#access-esm15-recommended","title":"ACCESS-ESM1.5 {{ recommended }}","text":"

    Citation 1

    ACCESS Training Workshop (AMOS 2021)

    Webinar: Getting Started with ACCESS-CM2 and ACCESS-ESM1.5

    ACCESS-ESM1.5 1 is a fully-coupled climate model with land and ocean carbon cycle components. ACCESS-ESM1.5 has mainly been developed to enable Australia to participate in the Coupled Model Intercomparison Project Phase 6 (CMIP6) with an ESM version. An assessment of the climate response to CO2 forcing indicates that ACCESS-ESM1.5 has an equilibrium climate sensitivity of 3.87\u00b0C.

    1. Tilo Ziehn, Matthew A Chamberlain, Rachel M Law, Andrew Lenton, Roger W Bodman, Martin Dix, Lauren Stevens, Ying-Ping Wang, and Jhan Srbinovsky. The Australian Earth System Model: ACCESS-ESM1.5. Journal of Southern Hemisphere Earth Systems Science, 70(1):193\u2013214, 2020.\u00a0\u21a9\u21a9

    "},{"location":"models/configurations/access-om/","title":"ACCESS-OM {{ supported }}","text":"

The ACCESS Ocean Model, ACCESS-OM, is a global coupled ocean and sea ice configuration, in which the ocean and sea ice components exchange information via a coupler. The atmospheric fields that drive the model are provided by a data product, usually derived from reanalysis.

    ACCESS-NRI will release supported ACCESS-OM configurations. The first release will be derived from the COSIMA ACCESS-OM2 suite and will include ocean and sea ice components.

    "},{"location":"models/configurations/access-om/#access-om2-recommended","title":"ACCESS-OM2 {{ recommended }}","text":"

    Citation 1 | Documentation

ACCESS-OM2 1 is a suite of coupled ocean-sea ice models developed by the Consortium for Ocean-Sea Ice Modelling in Australia (COSIMA). All models use the MOM5 ocean model coupled to the CICE5 sea ice model via the OASIS3-MCT coupler.

The models in the ACCESS-OM2 suite differ in their spatial grid resolution:

    • ACCESS-OM2 at 1\u00b0 with 50 vertical levels
    • ACCESS-OM2-025 at 0.25\u00b0 with 50 vertical levels
    • ACCESS-OM2-01 at 0.1\u00b0 with 75 vertical levels
    1. A. E. Kiss, A. McC. Hogg, N. Hannah, F. Boeira Dias, G. B. Brassington, M. A. Chamberlain, C. Chapman, P. Dobrohotoff, C. M. Domingues, E. R. Duran, M. H. England, R. Fiedler, S. M. Griffies, A. Heerdegen, P. Heil, R. M. Holmes, A. Klocker, S. J. Marsland, A. K. Morrison, J. Munroe, M. Nikurashin, P. R. Oke, G. S. Pilo, O. Richet, A. Savita, P. Spence, K. D. Stewart, M. L. Ward, F. Wu, and X. Zhang. ACCESS-OM2 v1.0: a global ocean\u2013sea ice model at three resolutions. Geoscientific Model Development, 13(2):401\u2013442, 2020. URL: https://gmd.copernicus.org/articles/13/401/2020/, doi:10.5194/gmd-13-401-2020.\u00a0\u21a9\u21a9

    "},{"location":"models/configurations/access-s/","title":"ACCESS-S {{ community }}","text":"

    ACCESS-S is the Bureau of Meteorology's climate modelling system used for seasonal forecasting.

    This coupled model uses a different set of model components than the other ACCESS models:

    • UM for the atmosphere
    • JULES for the land
    • NEMO for the ocean
    • CICE for the sea-ice
    • OASIS3-MCT for the coupler
    "},{"location":"models/model_components/","title":"Model Components","text":"

    ACCESS components represent different chemical, physical or biological parts of the Earth System.

Atmosphere, Land, Ocean, Sea Ice, Aerosols, Atmospheric Chemistry, Biogeochemistry Land, Biogeochemistry Ocean, Coupler

    Most model components have originated from collaborations with international research groups. These include:

    • UK Met Office \u2192 Unified Model (UM) atmospheric component.
• NOAA Geophysical Fluid Dynamics Laboratory (GFDL) \u2192 Modular Ocean Model (MOM) component.
    • Los Alamos National Laboratory (LANL) \u2192 Sea ice model (CICE) component.
• Centre Europ\u00e9en de Recherche et de Formation Avanc\u00e9e en Calcul Scientifique (CERFACS) \u2192 Ocean Atmosphere Sea Ice Soil coupler interfaced with the Model Coupling Toolkit (OASIS3-MCT).
• UK Met Office and UK university partners \u2192 United Kingdom Chemistry and Aerosols (UKCA) model.
• CSIRO, COSIMA and CLEX \u2192 Community Atmosphere Biosphere Land Exchange (CABLE), Whole Ocean Model with Biogeochemistry And Trophic-dynamics (WOMBAT) and Carnegie-Ames-Stanford Approach (CASA) land biogeochemistry models, all developed in Australia.
    "},{"location":"models/model_components/aerosols_atmospheric_chemistry/","title":"Aerosol and Atmospheric Chemistry Components","text":""},{"location":"models/model_components/aerosols_atmospheric_chemistry/#ukca-supported","title":"UKCA {{ supported }}","text":"

    The UK Chemistry-Aerosol model (UKCA) is a community atmospheric chemistry-aerosol global model developed in the United Kingdom. It is suitable for a range of topics in climate and environmental change research.

    "},{"location":"models/model_components/aerosols_atmospheric_chemistry/#how-is-ukca-used","title":"How is UKCA used?","text":"

The UKCA chemistry model is enabled in ACCESS-CM2-Chem.

    "},{"location":"models/model_components/aerosols_atmospheric_chemistry/#glomap-supported","title":"GLOMAP {{ supported }}","text":"

UKCA contains an aerosol scheme, the GLObal Model of Aerosol Processes (GLOMAP), that can be used independently. The multi-component, multi-modal GLOMAP model allows simulation of aerosol number, size and the concentrations of individual components such as sulphate, sea salt and different types of carbon.

    "},{"location":"models/model_components/aerosols_atmospheric_chemistry/#how-is-glomap-used","title":"How is GLOMAP used?","text":"

    GLOMAP is used in ACCESS-CM2 and ACCESS-CM2-Chem.

    "},{"location":"models/model_components/atmosphere/","title":"Atmospheric Model Component","text":""},{"location":"models/model_components/atmosphere/#the-unified-model-um-supported","title":"The Unified Model (UM) {{ supported }}","text":"

    The Unified Model (UM) is a numerical model of the atmosphere used for both weather and climate applications, developed by the Met Office in the United Kingdom (UK). It includes solutions of the equations of atmospheric fluid dynamics with advanced parameterizations of subgrid-scale physical processes like convection, cloud formation and atmospheric radiation.

The Unified Model gets its name because a single model is used across a wide range of both timescales (nowcasting to centennial) and spatial scales (sub-km convective scale to global climate modelling).

The UM is used by several international operational meteorology and research organisations, which contribute towards its development through the UM partnership.

    "},{"location":"models/model_components/atmosphere/#how-is-the-um-used","title":"How is the UM used?","text":"

    The UM Model component represents the atmosphere in many of the ACCESS Models used at regional and global scales.

The ACCESS-CM2 climate model and ACCESS-ESM1.5 earth system model use versions of the UM as their atmospheric components.

The Australian Bureau of Meteorology's operational global forecasting system, which runs at 12 km spatial resolution, uses the Unified Model as part of ACCESS for:

    • Forecasting of extreme events and emergencies such as heatwaves, bushfires, cyclones, floods, coral bleaching, sea-level rise, coastal inundation and more.
    • Daily and seasonal weather forecasts
    "},{"location":"models/model_components/atmosphere/#useful-links","title":"Useful links","text":"

STASH register: Metadata reference for the output variables.

    "},{"location":"models/model_components/bgc_land/","title":"Biogeochemistry Land","text":""},{"location":"models/model_components/bgc_land/#casa-cnp-supported","title":"CASA-CNP {{ supported }}","text":"

CASA (Carnegie-Ames-Stanford Approach)-CNP (Carbon-Nitrogen-Phosphorus) is the land biogeochemistry model developed in CABLE.

CASA-CNP models the dynamics of carbon pools and nitrogen and phosphorus limitations. It is directly coupled with the CABLE land surface model.

    "},{"location":"models/model_components/bgc_land/#how-is-casa-cnp-used","title":"How is CASA-CNP used?","text":"

CASA-CNP is switched on to enable the carbon cycle in the ACCESS-ESM1.5 model.

    "},{"location":"models/model_components/bgc_ocean/","title":"Biogeochemistry Ocean","text":""},{"location":"models/model_components/bgc_ocean/#wombat-supported","title":"WOMBAT {{ supported }}","text":"

WOMBAT (World Ocean Model of Biogeochemistry And Trophic-dynamics) is an ocean carbon model developed in Australia. It calculates the carbon fluxes of the ocean by simulating the evolution of phosphate, oxygen, dissolved inorganic carbon, alkalinity and iron.

WOMBAT is a Nutrient, Phytoplankton, Zooplankton and Detritus (NPZD) model, with one phytoplankton and one zooplankton class.

    "},{"location":"models/model_components/bgc_ocean/#how-is-wombat-used","title":"How is WOMBAT used?","text":"

    WOMBAT is coupled to the MOM5 ocean model in the ACCESS-ESM1.5 and ACCESS-OM2 models.

    "},{"location":"models/model_components/coupler/","title":"Coupler {{ supported }}","text":"

    A coupler is a software package that allows synchronised exchanges of coupling information between numerical codes representing different components of the climate system.

    "},{"location":"models/model_components/coupler/#oasis3-mct-supported","title":"OASIS3-MCT {{ supported }}","text":"

    OASIS3-MCT is the version of the Ocean Atmosphere Sea Ice Soil (OASIS) coupler interfaced with the Model Coupling Toolkit (MCT) from the Argonne National Laboratory. OASIS3-MCT is the coupler used in the configurations:

    • ACCESS-ESM1.5
    • ACCESS-CM2
    • ACCESS-OM2
    • ACCESS-S
    "},{"location":"models/model_components/coupler/#nuopc-interoperability-layer-recommended","title":"NUOPC Interoperability Layer {{ recommended }}","text":"

    NUOPC (National Unified Operational Prediction Capability) Interoperability Layer defines conventions and a set of generic components for building coupled models using the Earth System Modeling Framework (ESMF).

    ACCESS-OM3, a configuration currently under development, uses NUOPC to couple its MOM6 and CICE6 model components as there are no respective OASIS coupling interfaces for these components.

    "},{"location":"models/model_components/land/","title":"Land Model Components","text":""},{"location":"models/model_components/land/#cable-supported","title":"CABLE {{ supported }}","text":"

    Community Atmosphere Biosphere Land Exchange (CABLE) is a land surface model, used to calculate the fluxes of momentum, energy, water and carbon between the land surface and the atmosphere. It also models the main biogeochemical cycles of the land ecosystem when used in conjunction with the CASA-CNP module.

    "},{"location":"models/model_components/land/#how-is-cable-used","title":"How is CABLE used?","text":"

    CABLE can be run as a standalone model, for a single location, a region or globally. Coupled to the Met Office Unified Model (UM), CABLE provides the land surface component of the ACCESS Earth System Model (ACCESS-ESM) and ACCESS Coupled Model (ACCESS-CM).

    CABLE is an open source model developed by a community of Australian climate science researchers. Registration is required to access the CABLE code repository.

    "},{"location":"models/model_components/land/#jules-supported","title":"JULES {{ supported }}","text":"

    The Joint UK Land Environment System (JULES) is a community land surface model that can be used both as a standalone model and as the land surface component in the UM model. By modelling different land surface processes (surface energy balance, hydrological cycle, carbon cycle, dynamic vegetation, etc.) and their interaction with each other, JULES provides a framework to assess the impact of modifying a particular process on the ecosystem as a whole, e.g., the impact of climate change on hydrology.

    "},{"location":"models/model_components/ocean/","title":"Ocean Model Component","text":""},{"location":"models/model_components/ocean/#modular-ocean-model-mom-supported","title":"Modular Ocean Model (MOM) {{ supported }}","text":"

The Modular Ocean Model (MOM) is one of the ocean components of the ACCESS climate model system. Used to simulate ocean currents at both regional and global scales, MOM is an invaluable tool for studying the global ocean climate system and also provides capabilities for regional and coastal applications.

    "},{"location":"models/model_components/ocean/#mom5-supported","title":"MOM5 {{ supported }}","text":"

    Source Code

MOM5 is used in the supported ACCESS climate configurations.

    "},{"location":"models/model_components/ocean/#mom6-recommended","title":"MOM6 {{ recommended }}","text":"

    Source Code | Tutorials

The most recent version, MOM6, is an open-source development by a consortium of scientists across several government agencies and academic institutions, with critical contributions from researchers worldwide.

    "},{"location":"models/model_components/sea-ice/","title":"Sea-Ice Model Component","text":""},{"location":"models/model_components/sea-ice/#cice-supported","title":"CICE {{ supported }}","text":"

CICE is a numerical model for simulating the growth, melting and movement of polar sea ice. This software package was developed by researchers at the Los Alamos National Laboratory and is currently managed by the CICE Consortium, an international group of institutions formed to maintain and develop CICE in the public domain.

    CICE5 is the current version used in ACCESS model configurations.

    CICE6 is currently under development.

    "},{"location":"models/run-a-model/","title":"Running a Model","text":"

Here we provide the information needed to run the different ACCESS models.

    If Model, Model Component or Model Configuration are not familiar terms for you, please check out our Model overview.

    If you have not run a model before, our Getting Started Guide will give you the basics to access the Model infrastructure on the high-performance-computing facility Gadi@NCI.

Detailed guides for the different Model configurations can then be found on the following pages:
• Run ACCESS-ESM for the ACCESS Earth System Model configurations
• Run ACCESS-CM for the ACCESS Coupled Model configurations
• Run ACCESS-AM for the ACCESS Atmosphere Model configurations
• Run ACCESS-OM for the ACCESS Ocean Model configurations

    "},{"location":"models/run-a-model/run-access-cm/","title":"Running ACCESS-CM2 Model","text":"

    This section includes a step-by-step instruction set on how to run the ACCESS-CM2 suite.

It is also intended as a point of reference that you can come back to later to find the information you need.

    "},{"location":"models/run-a-model/run-access-cm/#getting-started","title":"Getting Started","text":"
• An institutional email address from an organisation that allows access to NCI (e.g., CSIRO, a university, etc.).
    • Access to NCI compute/storage.
• A Linux/Mac/Unix computer with an internet connection and a command line terminal (e.g., macOS with XQuartz and command line tools installed, or PuTTY, Cygwin, MobaXterm or a similar X-Windows compatible program on a PC).
    "},{"location":"models/run-a-model/run-access-cm/#requirements-for-running-access-cm-suites","title":"Requirements for running ACCESS-CM suites","text":"

    Here, we assume that you already have access to Gadi, the supercomputer hosted by the National Computational Infrastructure (NCI). If needed, you can go back to our guide on how to get access to Gadi.

    "},{"location":"models/run-a-model/run-access-cm/#basic-setup","title":"Basic Setup","text":"

    To run an ACCESS-CM suite, new users will need to:

    • Join the ACCESS group. You can also find instructions on how to join a particular project through the NCI self-service portal.
• Connect to accessdev to complete your setup once you have your NCI credentials and are a member of the ACCESS group. Note: At present, both accessdev and ARE run the models on Gadi. However, ARE only supports shorter-running suites (i.e., runs of less than 48 hours). Work is currently in progress to fully transition the cylc workflows from the accessdev virtual machine to the ARE.
    • Additional steps relating to the communication between accessdev and Gadi may also be necessary.
    "},{"location":"models/run-a-model/run-access-cm/#uk-met-office-environment-on-nci","title":"UK Met Office Environment on NCI","text":"

As components within the ACCESS-CM suites use the UK Met Office model code, the UK Met Office Environment is installed on NCI. This comprises the model software and tools as well as the cylc workflow system, rose suites, the Met Office MOSRS repository and our local replica repository. In order to check out and run ACCESS-CM2 suites on Gadi using Rose/Cylc, you need to have access to a number of repositories at the Met Office as well as the local replica and local software on NCI, which will require fulfilling these prerequisites.

    "},{"location":"models/run-a-model/run-access-cm/#met-office-science-repository-service-mosrs","title":"Met Office Science Repository Service (MOSRS)","text":"

    Met Office Science Repository Service (MOSRS) is a Trac server run by the UK Met Office for sharing code and configurations for the climate models it runs with partners. It contains the source code and configurations for the UM and JULES amongst other things.

    To apply for a MOSRS account, you should contact your local institutional sponsor.

    "},{"location":"models/run-a-model/run-access-cm/#preparing-to-run-an-access-cm-suite","title":"Preparing to run an ACCESS-CM suite","text":"

    At this stage, you should be able to connect to accessdev and Gadi.

    accessdev is a frontend system where you prepare ACCESS jobs and then submit them to Gadi (the supercomputer at NCI where ACCESS is run).

    "},{"location":"models/run-a-model/run-access-cm/#logging-in-to-gadi-and-accessdev","title":"Logging in to Gadi and accessdev","text":"

To run an ACCESS-CM2 suite (i.e., job), you first need to log in to Gadi with your username through a login node.

    ssh -Y username@gadi.nci.org.au

Similarly, to log in to accessdev:

    ssh -Y $USER@accessdev.nci.org.au

    Aliases and shortcuts can be created to simplify these commands by configuring SSH.
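For example, host aliases can be added to ~/.ssh/config (a minimal sketch; the alias names gadi and accessdev are arbitrary choices, and <username> is your NCI username):

Host gadi
    HostName gadi.nci.org.au
    User <username>
    ForwardX11 yes
    ForwardX11Trusted yes

Host accessdev
    HostName accessdev.nci.org.au
    User <username>
    ForwardX11 yes
    ForwardX11Trusted yes

With this in place, ssh gadi and ssh accessdev are equivalent to the longer commands above.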

    "},{"location":"models/run-a-model/run-access-cm/#copy-edit-and-run-an-access-cm2-suite","title":"Copy, Edit, and Run an ACCESS-CM2 suite","text":"ACCESS-CM2 is a set of submodels (eg. UM, MOM, CICE, CABLE, OASIS) with a range of model parameters, input data, and computer related information, that need to be packaged together as a suite in order to run. Each ACCESS-CM2 suite has an ID, in the format u-<suite-name>, with <suite-name> being a unique identifier (e.g. u-br565 is the CMIP6 release preindustrial experiment suite). Typically, an existing suite is copied and then edited as needed for a particular run."},{"location":"models/run-a-model/run-access-cm/#copy-access-cm2-suites-with-rosie","title":"Copy ACCESS-CM2 suites with Rosie","text":"Rosie is an SVN repository wrapper with a set of options to work with ACCESS-CM2 suites. To copy an existing suite, on accessdev:
    1. Run mosrs-auth to authenticate using your MOSRS credentials: mosrs-auth Please enter the MOSRS password for <MOSRS-username>: Successfully authenticated with MOSRS as <MOSRS-username>
    2. Run rosie checkout <suite-ID> to create a local copy of the <suite-ID> from the UKMO repository (used mostly for testing and examining existing suites): rosie checkout <suite-ID> [INFO] create: /home/565/<$USER>/roses [INFO] <suite-ID>: local copy created at /home/565/<$USER>/roses/<suite-ID> Alternatively, run rosie copy <suite-ID> to create a new full copy (local and remote in the UKMO repository) rather than just a local copy. When a new suite is created in this way, a new unique name is generated within the repository, and populated with some descriptive information about the suite along with all the initial configuration details: rosie copy <suite-ID> Copy \"<suite-ID>/trunk@<trunk-ID>\" to \"u-?????\"? [y or n (default)] y [INFO] <new-suite-ID>: created at https://code.metoffice.gov.uk/svn/roses-u/<suite-n/a/m/e/> [INFO] <new-suite-ID>: copied items from <suite-ID>/trunk@<trunk-ID> [INFO] <suite-ID>: local copy created at /home/565/<$USER>/roses/<new-suite-ID>
    For additional rosie options, run rosie help. The suites are created in the user's accessdev home directory, under ~/roses/<suite-ID>. The suite directory usually contains 2 subdirectories and 3 files:
    • app \u2192 directory containing the configuration files for the various tasks within the suite.
    • meta \u2192 directory containing the GUI metadata.
    • rose-suite.conf \u2192 the main suite configuration file.
    • rose-suite.info \u2192 suite information file.
    • suite.rc \u2192 the Cylc control script file (Jinja2 language).
ls ~/roses/<suite-ID> app meta rose-suite.conf rose-suite.info suite.rc
    "},{"location":"models/run-a-model/run-access-cm/#edit-an-access-cm2-suite-configuration-with-rose-gui","title":"Edit an ACCESS-CM2 suite configuration with Rose GUI","text":"Rose is a configuration editor which can be used to view, edit, or run an ACCESS-CM2 suite. To edit a suite configuration, on accessdev:
1. Run rose edit & (the & is optional and keeps the terminal prompt active while running the GUI as a separate process) from inside the relevant suite directory (e.g. ~/roses/<suite-ID>) to open the Rose GUI and inspect the suite information. cd ~/roses/<suite-ID> rose edit & [<N>] <PID>
    2. There are many settings that can be changed in a Rose GUI. However, there are a few that we definitely want to check and edit before we run a suite:
• NCI Project To make sure we run the suite under the NCI project we belong to, we can navigate to suite conf \u2192 Machine and Runtime Options, edit the Compute project field, and click the Save button. (Check how to connect to a project if you have not joined one yet). If, for example, we belong to the tm70 Project (ACCESS-NRI), we will insert tm70 in the Compute project field.
• Total Run length / Cycling frequency ACCESS-CM2 suites are often run in multiple steps, each of them constituting a cycle, with the job scheduler resubmitting the suite every chosen Cycling frequency, until the Total Run length is met. To modify these parameters, we can navigate to suite conf \u2192 Run Initialisation and Cycling, edit the respective fields, and click the Save button. The values are in the ISO 8601 Duration format. If, for example, we want to run the suite for a total of 50 Years, and resubmit every year, we will change Total Run length to P50Y and Cycling frequency to P1Y. Note that the current maximum Cycling frequency is 2 years.
• Wallclock time The Wallclock time is the time requested by the PBS job to run a single cycle. If this time is not enough for the suite to end its cycle, our job will be terminated before the suite can complete the run. If we change the Cycling frequency, we might need to change the Wallclock time accordingly. The time needed for the suite to run a full cycle depends on several factors, but a good estimate is 4 hours per simulated year. To modify the Wallclock time, we can navigate to suite conf \u2192 Run Initialisation and Cycling, edit the respective field, and click the Save button. The value is in the ISO 8601 Duration format (see the example values after this list).
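For reference, some ISO 8601 Duration values relevant to these fields (illustrative examples only):

• P1Y = 1 year (e.g. resubmit every year)
• P50Y = 50 years (e.g. a 50-year Total Run length)
• P3M10D = 3 months and 10 days
• PT4H = 4 hours (e.g. a Wallclock time matching the 4-hours-per-simulated-year estimate above)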
    "},{"location":"models/run-a-model/run-access-cm/#run-an-access-cm2-suite","title":"Run an ACCESS-CM2 suite","text":"After completing all the modifications to the suite, we are ready to run it. ACCESS-CM2 suites run on Gadi through a PBS job submission. When the suite gets run, its configuration files are copied on Gadi under /scratch/$PROJECT/$USER/cylc-run/<suite-ID>, and a symbolic link to this folder is also created in the $USER's home directory under ~/cylc-run/<suite-ID>. An ACCESS-CM2 suite is constituted by several tasks (such as checking out code repositories, compiling and building the different model components, running the model, etc.). The workflow of these tasks is controlled by Cylc. Cylc (pronounced \u2018silk\u2019), is a workflow manager that automatically executes tasks according to the model main cycle script suite.rc. Cylc deals with how the job will be run and manages the time steps of each submodel, as well as monitoring all the tasks and reporting any error that might occur. To run an ACCESS-CM2 suite, on accessdev:
    1. Run rose suite-run, from inside the suite directory, to run the initial tasks.
    2. After these few small tasks get executed, the Cylc GUI will open up and you will be able to see and control all the different tasks in the suite as they are run.
3. cd ~/roses/<suite-ID> rose suite-run [INFO] export CYLC_VERSION=7.8.3 [INFO] export ROSE_ORIG_HOST=accessdev.nci.org.au [INFO] export ROSE_SITE= [INFO] export ROSE_VERSION=2019.01.2 [INFO] create: /home/565/<$USER>/cylc-run/<suite-ID> [INFO] create: log.<timestamp> [INFO] symlink: log.<timestamp> <= log [INFO] create: log/suite [INFO] create: log/rose-conf [INFO] symlink: rose-conf/<timestamp>-run.conf <= log/rose-suite-run.conf [INFO] symlink: rose-conf/<timestamp>-run.version <= log/rose-suite-run.version [INFO] install: rose-suite.info \u2003\u2003\u2003\u2003source: /home/565/<$USER>/roses/<suite-ID>/rose-suite.info [INFO] create: app [INFO] install: app \u2003\u2003\u2003\u2003source: /home/565/<$USER>/roses/<suite-ID>/app [INFO] create: meta [INFO] install: meta \u2003\u2003\u2003\u2003source: /home/565/<$USER>/roses/<suite-ID>/meta [INFO] install: suite.rc [INFO] REGISTERED <suite-ID> -> /home/565/<$USER>/cylc-run/<suite-ID> [INFO] create: share [INFO] install: share [INFO] create: work [INFO] chdir: log/ [INFO] \u2003\u2003\u2003\u2003\u2003\u2003\u2009\u2009._. [INFO] \u2003\u2003\u2003\u2003\u2003\u2003\u2009\u2009| |\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003The Cylc Suite Engine [7.8.3] [INFO] ._____._. ._| |_____.\u2003\u2003\u2003\u2003\u2003\u2009Copyright (C) 2008-2019 NIWA [INFO] | .___| | | | | .___|\u2003& British Crown (Met Office) & Contributors. [INFO] | !___| !_! | | !___. _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [INFO] !_____!___. |_!_____! This program comes with ABSOLUTELY NO WARRANTY; [INFO] \u2003\u2003\u2003\u2009.___! | \u2003\u2003\u2003\u2003\u2003see `cylc warranty`. \u2009It is free software, you [INFO] \u2003\u2003\u2003\u2009!_____! \u2003\u2003\u2003\u2003\u2003\u2009are welcome to redistribute it under certain [INFO] [INFO] *** listening on https://accessdev.nci.org.au:<port>/ *** [INFO] [INFO] To view suite server program contact information: [INFO] $ cylc get-suite-contact <suite-ID> [INFO] [INFO] Other ways to see if the suite is still running: [INFO] $ cylc scan -n '<suite-ID>' accessdev.nci.org.au [INFO] $ cylc ping -v --host=accessdev.nci.org.au <suite-ID> [INFO] $ ps -opid,args <PID> # on accessdev.nci.org.au
You are done! If you don't get any errors, you will be able to check the suite output files after the run is complete. Note that, at this stage, it is possible to close the Cylc GUI. If you want to open it again, just run rose suite-gcontrol from inside the suite directory."},{"location":"models/run-a-model/run-access-cm/#check-for-errors","title":"Check for errors","text":"It is quite common, especially during the first few runs, to experience errors and job failures. An ACCESS-CM2 suite consists of several tasks, and each of these tasks could fail. When a task fails, the suite is halted and you will see a red icon next to the respective task name in the Cylc GUI. To investigate the cause of a failure, we need to look at the logs (job.err and job.out) from the suite run. There are two main ways to do so:
• Using the Cylc GUI Right-click on the task that failed and click on View Job Logs (Viewer) \u2192 job.err or job.out. To access the specific task you might have to click on the arrow next to the task, to extend the drop-down menu with all the sub-tasks.
• In the ~/cylc-run/<suite-ID> directory The suite log directories are stored inside ~/cylc-run/<suite-ID> as log.<TIMESTAMP>, with the latest set of logs also symlinked in the ~/cylc-run/<suite-ID>/log directory. The logs for the main job are inside the ~/cylc-run/<suite-ID>/log/job directory. Logs are separated into simulation cycles through their starting dates, and then differentiated by task. They are then further separated into "attempts" (consecutive failed/successful tasks), with NN being a symlink to the most recent attempt. In our example, the failure occurred for the 09500101 simulation cycle (starting date on 1st January 950) in the coupled task. Therefore, the directory in which to find the job.err and job.out files is ~/cylc-run/<suite-ID>/log/job/09500101/coupled/NN. cd ~/cylc-run/<suite-ID> ls app cylc-suite.db log log.20230530T051952Z meta rose-suite.info share suite.rc suite.rc.processed work cd log ls db job rose.conf rose-suite-run.conf rose-suite-run.locs rose-suite-run.log rose-suite-run.version suite suiterc cd job ls 09500101 cd 09500101 ls coupled fcm_make2_um fcm_make_um install_warm make2_mom make_mom fcm_make2_drivers fcm_make_drivers install_ancil make2_cice make_cice cd coupled ls 01 02 03 NN cd NN ls job job-activity.log job.err job.out job.status
    "},{"location":"models/run-a-model/run-access-cm/#stop-restart-and-reload-suites","title":"Stop, restart and reload suites","text":"Sometimes, you may want to control the running state of a suite. If your Cylc GUI has been closed and you are unsure whether your suite is still running, you can scan for active suites and reopen the GUI if desired. To scan for active suites run cylc scan. To reopen the Cylc GUI there are 2 main ways:
• run rose suite-gcontrol from inside the suite directory, OR
• run gcylc <suite-ID>
cylc scan <suite-ID> <$USER>@accessdev.nci.org.au:<port> cd ~/roses/<suite-ID> rose suite-gcontrol"},{"location":"models/run-a-model/run-access-cm/#stop-a-suite","title":"STOP a suite","text":"Run rose suite-stop -y, from inside the suite directory, to shut down a suite in a safe manner."},{"location":"models/run-a-model/run-access-cm/#restart-a-suite","title":"RESTART a suite","text":"There are two main ways to restart a suite:
• 'SOFT' restart Run rose suite-run --restart, from inside the suite directory, to re-install the suite and reopen Cylc in the same state as when it was stopped (you may need to manually trigger failed tasks from the Cylc GUI). cd ~/roses/<suite-ID> rose suite-run --restart [INFO] export CYLC_VERSION=7.8.3 [INFO] export ROSE_ORIG_HOST=accessdev.nci.org.au [INFO] export ROSE_SITE= [INFO] export ROSE_VERSION=2019.01.2 [INFO] delete: log/rose-suite-run.conf [INFO] symlink: rose-conf/<timestamp>-restart.conf <= log/rose-suite-run.conf [INFO] delete: log/rose-suite-run.version [INFO] symlink: rose-conf/<timestamp>-restart.version <= log/rose-suite-run.version [INFO] chdir: log/ [INFO] \u2003\u2003\u2003\u2003\u2003\u2003\u2009\u2009._. [INFO] \u2003\u2003\u2003\u2003\u2003\u2003\u2009\u2009| |\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003The Cylc Suite Engine [7.8.3] [INFO] ._____._. ._| |_____.\u2003\u2003\u2003\u2003\u2003\u2009Copyright (C) 2008-2019 NIWA [INFO] | .___| | | | | .___|\u2003& British Crown (Met Office) & Contributors. [INFO] | !___| !_! | | !___. _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [INFO] !_____!___. |_!_____! This program comes with ABSOLUTELY NO WARRANTY; [INFO] \u2003\u2003\u2003\u2009.___! | \u2003\u2003\u2003\u2003\u2003see `cylc warranty`. \u2009It is free software, you [INFO] \u2003\u2003\u2003\u2009!_____! \u2003\u2003\u2003\u2003\u2003\u2009are welcome to redistribute it under certain [INFO] [INFO] *** listening on https://accessdev.nci.org.au:<port>/ *** [INFO] [INFO] To view suite server program contact information: [INFO] $ cylc get-suite-contact <suite-ID> [INFO] [INFO] Other ways to see if the suite is still running: [INFO] $ cylc scan -n '<suite-ID>' accessdev.nci.org.au [INFO] $ cylc ping -v --host=accessdev.nci.org.au <suite-ID> [INFO] $ ps -opid,args <PID> # on accessdev.nci.org.au
    • 'HARD' restart Run rose suite-run --new, from inside the suite directory, if you want to overwrite any previous runs of the suite and begin completely afresh (WARNING!! This will overwrite all existing model output and logs).
    "},{"location":"models/run-a-model/run-access-cm/#reload-a-suite","title":"RELOAD a suite","text":"In some cases the suite needs to be updated without necessarily having to stop it (e.g. after fixing a typo in a file). Updating an active suite is called a 'reload', where the suite is 're-installed' and Cylc is updated with the changes (this is similar to a 'soft' restart, but with the new changes installed, so you may need to manually trigger failed tasks from the Cylc GUI). To reload a suite run rose suite-run --reload from inside the suite directory."},{"location":"models/run-a-model/run-access-cm/#suite-output-files","title":"Suite output files","text":"All output files (as well as work files) are available on Gadi under /scratch/$PROJECT/$USER/cylc-run/<suite-ID> (also symlinked in ~/cylc-run/<suite-ID>). While the suite is running, files move between the share and the work directories. At the end of each cycle, model output data and restart files are moved to /scratch/$PROJECT/$USER/archive/<suite-name>. This directory contains 2 subdirectories:
    • history This is the directory where the model output data is found, separated for each model component:
      • atm \u2192 atmosphere (UM)
      • cpl \u2192 coupler (OASIS3-MCT)
      • ocn \u2192 ocean (MOM)
      • ice \u2192 ice (CICE)
For the atmospheric output data, each file is usually a UM fieldsfile or netCDF file, formatted as <suite-name>a.p<output-stream-identifier><year><month-string>. In the case of the u-br565 suite we will have: cd /scratch/<$PROJECT>/<USER>/archive ls br565 <suite-name> <other-suite-name> cd br565 ls history restart ls history/atm br565a.pd0950apr.nc br565a.pd0950aug.nc br565a.pd0950dec.nc br565a.pd0950feb.nc br565a.pd0950jan.nc br565a.pd0950jul.nc br565a.pd0950jun.nc br565a.pd0950mar.nc br565a.pd0950may.nc br565a.pd0950nov.nc br565a.pd0950oct.nc br565a.pd0950sep.nc br565a.pd0951apr.nc br565a.pd0951aug.nc br565a.pd0951dec.nc br565a.pm0950apr.nc br565a.pm0950aug.nc br565a.pm0950dec.nc br565a.pm0950feb.nc br565a.pm0950jan.nc br565a.pm0950jul.nc br565a.pm0950jun.nc br565a.pm0950mar.nc br565a.pm0950may.nc br565a.pm0950nov.nc br565a.pm0950oct.nc br565a.pm0950sep.nc br565a.pm0951apr.nc br565a.pm0951aug.nc br565a.pm0951dec.nc netCDF
    • restart This is the directory where the restart dumps are found, subdivided for each model component (see history folder above). For the atmospheric restart files, each of them is a UM fieldsfile, formatted as <suite-name>a.da<year><month><day>_00. In the directory there are also some files formatted as <suite-name>a.xhist-<year><month><day> containing metadata information. In the case of the u-br565 suite we will have: ls /scratch/<$PROJECT>/<USER>/archive/br565/restart/atm br565a.da09500201_00 br565a.da09510101_00 br565.xhist-09500131 br565.xhist-09501231
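As a quick sanity check of the output, you can inspect the header of one of the netCDF history files listed above (a sketch; ncdump comes with the netCDF tools available on Gadi, e.g. after module load netcdf, and the file name is taken from the example listing):

ncdump -h /scratch/<$PROJECT>/<USER>/archive/br565/history/atm/br565a.pm0950jun.nc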
    References
    • https://confluence.csiro.au/display/ACCESS/Using+CM2+suites+in+Rose+and+Cylc
    • https://confluence.csiro.au/display/ACCESS/Understanding+CM2+output
    • https://nespclimate.com.au/wp-content/uploads/2020/10/Instruction-document-Getting_started_with_ACCESS.pdf
    • https://code.metoffice.gov.uk/doc/um/latest/um-training/rose-gui.html
    "},{"location":"models/run-a-model/run-access-esm/","title":"Run ACCESS-ESM","text":""},{"location":"models/run-a-model/run-access-esm/#requirements","title":"Requirements","text":"Before running ACCESS-ESM, you need to make sure to possess the right tools and to have an account with specific institutions."},{"location":"models/run-a-model/run-access-esm/#general-requirements","title":"General requirements","text":"For the general requirements needed to run all ACCESS models, please refer to the Getting Started (TO DO check link) page."},{"location":"models/run-a-model/run-access-esm/#model-specific-requirements","title":"Model-specific requirements","text":"
    • Payu To get payu on Gadi, run:
      module use /g/data/hh5/public/modules\n            module load conda/analysis3\n        
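To confirm that the payu command is now on your PATH, a quick check (a sketch; the exact analysis3 environment version in the printed path may differ):

which payu
/g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin/payu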
    "},{"location":"models/run-a-model/run-access-esm/#get-access-esm-configuration","title":"Get ACCESS-ESM configuration","text":"A suitable ACCESS-ESM pre-industrial configuration is avaible on the coecms GitHub. In order to get it, on Gadi, create a directory where to keep the model configuration, and clone the GitHub repo in it by running:
    git clone https://github.com/coecms/esm-pre-industrial
    mkdir -p ~/access-esm cd ~/access-esm git clone https://github.com/coecms/esm-pre-industrial Cloning into 'esm-pre-industrial'... remote: Enumerating objects: 767, done. remote: Counting objects: 100% (295/295), done. remote: Compressing objects: 100% (138/138), done. remote: Total 767 (delta 173), reused 274 (delta 157), pack-reused 472 Receiving objects: 100% (767/767), 461.57 KiB | 5.24 MiB/s, done. Resolving deltas: 100% (450/450), done. Note: Some modules might interfere with the git commands (for example matlab/R2018a). If you are running into issues during the cloning of the repository, it might be a good idea to run
    module purge
first, before trying again."},{"location":"models/run-a-model/run-access-esm/#edit-access-esm-configuration","title":"Edit ACCESS-ESM configuration","text":"In order to modify an ACCESS-ESM configuration, it is worth understanding a bit more about how its workflow manager, payu, works."},{"location":"models/run-a-model/run-access-esm/#payu","title":"Payu","text":"Payu is a workflow management tool for running numerical models in supercomputing environments. The general layout of a payu-supported model run consists of two main directories:
    • The laboratory is the directory where all parts of the model are kept. For ACCESS-ESM, it is typically /scratch/$PROJECT/$USER/access-esm.
    • The control directory, where the model configuration is kept and from where the model is run (in our case is the cloned directory ~/access-esm/esm-pre-industrial).
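In other words, a sketch of the layout using the paths from this example:

~/access-esm/esm-pre-industrial (control directory: configuration files)
/scratch/$PROJECT/$USER/access-esm (laboratory: binaries, inputs, work and archive)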
This distinction of directories keeps the small-size configuration files separated from the larger binary outputs and inputs. In this way, we can place the configuration files in the $HOME directory (the only filesystem on Gadi that is actively backed up), without overloading it with too much data. Moreover, this separation allows multiple self-resubmitting experiments that share common executables and input data to run simultaneously. To proceed with the setup of the laboratory directory, from the control directory run:
    payu init
    This will create the laboratory directory, along with other subdirectories (depending on the configuration). The main subdirectories we are interested in are:
    • work \u2192 temporary directory where the model is actually run. It gets cleaned after each run.
    • archive \u2192 directory where the output is placed after each run.
cd ~/access-esm/esm-pre-industrial payu init laboratory path: /scratch/$PROJECT/$USER/access-esm binary path: /scratch/$PROJECT/$USER/access-esm/bin input path: /scratch/$PROJECT/$USER/access-esm/input work path: /scratch/$PROJECT/$USER/access-esm/work archive path: /scratch/$PROJECT/$USER/access-esm/archive
    "},{"location":"models/run-a-model/run-access-esm/#edit-the-master-configuration-file","title":"Edit the Master Configuration file","text":"The config.yaml file, located in the control directory, is the Master Configuration file. This file controls the general model configuration and if we open it in a text editor, we can see different parts:
    • PBS resources
      jobname: pre-industrial\n            queue: normal\n            walltime: 20:00:00\n        
These are settings for the PBS scheduler. Edit lines in this section to change any of the PBS resources. For example, to run ACCESS-ESM under the tm70 project, add the following line to this section (a combined sketch of this section is shown after this list):
      project: tm70
    • Link to the laboratory directory
      # note: if laboratory is relative path, it is relative to /scratch/$PROJECT/$USER\n            laboratory: access-esm\n        
      This will set the laboratory directory. Relative paths are relative to /scratch/$PROJECT/$USER. Absolute paths can be specified as well.
    • Model
      model: access
      The main model. This tells payu which driver to use (access stands for ACCESS-ESM).
    • Submodels
      submodels:\n            \u00a0\u00a0- name: atmosphere\n            \u00a0\u00a0\u00a0\u00a0model: um\n            \u00a0\u00a0\u00a0\u00a0ncpus: 192\n            \u00a0\u00a0\u00a0\u00a0exe: /g/data/access/payu/access-esm/bin/coe/um7.3x\n            \u00a0\u00a0\u00a0\u00a0input:\n            \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- /g/data/access/payu/access-esm/input/pre-industrial/atmosphere\n            \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- /g/data/access/payu/access-esm/input/pre-industrial/start_dump\n            \u00a0\u00a0- name: ocean\n            \u00a0\u00a0\u00a0\u00a0model: mom\n            \u00a0\u00a0\u00a0\u00a0ncpus: 180\n            \u00a0\u00a0\u00a0\u00a0exe: /g/data/access/payu/access-esm/bin/coe/mom5xx\n            \u00a0\u00a0\u00a0\u00a0input:\n            \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- /g/data/access/payu/access-esm/input/pre-industrial/ocean/common\n            \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- /g/data/access/payu/access-esm/input/pre-industrial/ocean/pre-industrial\n            \u00a0\u00a0- name: ice\n            \u00a0\u00a0\u00a0\u00a0model: cice\n            \u00a0\u00a0\u00a0\u00a0ncpus: 12\n            \u00a0\u00a0\u00a0\u00a0exe: /g/data/access/payu/access-esm/bin/coe/cicexx\n            \u00a0\u00a0\u00a0\u00a0input:\n            \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- /g/data/access/payu/access-esm/input/pre-industrial/ice\n            \u00a0\u00a0- name: coupler\n            \u00a0\u00a0\u00a0\u00a0model: oasis\n            \u00a0\u00a0\u00a0\u00a0ncpus: 0\n            \u00a0\u00a0\u00a0\u00a0input:\n            \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- /g/data/access/payu/access-esm/input/pre-industrial/coupler\n        
ACCESS-ESM is a coupled model, which means it has multiple submodels (i.e. model components). This section specifies the submodels and contains configuration options (for example the directories of input files) that are required to ensure the model can execute correctly. Each submodel also has additional configuration options that are read in when the submodel is running. These specific configuration options are found in the subdirectory of the control directory named after the submodel (e.g. in our case the configuration for the atmosphere submodel, i.e. the UM, will be in the directory ~/access-esm/esm-pre-industrial/atmosphere).
    • collate
      collate:\n            \u00a0\u00a0exe: /g/data/access/payu/access-esm/bin/mppnccombine\n            \u00a0\u00a0restart: true\n            \u00a0\u00a0mem: 4GB\n        
      The collate process joins a number of smaller files, which contain different parts of the model grid, together into target output files. The restart files are typically tiled in the same way and will also be joined together if the restart option is set to true.
    • restart
      restart: /g/data/access/payu/access-esm/restart/pre-industrial
      The location of the files used for a warm restart.
    • Start date and internal run length
      calendar:\n            \u00a0\u00a0start:\n            \u00a0\u00a0\u00a0\u00a0year: 101\n            \u00a0\u00a0\u00a0\u00a0month: 1\n            \u00a0\u00a0\u00a0\u00a0days: 1\n            \u00a0\u00a0runtime:\n            \u00a0\u00a0\u00a0\u00a0years: 1\n            \u00a0\u00a0\u00a0\u00a0months: 0\n            \u00a0\u00a0\u00a0\u00a0days: 0\n        
This section specifies the start date and internal run length. Note: The internal run length (controlled by runtime) can be different from the total run length. Also, the runtime value can be lowered, but should not be increased to a total of more than 1 year, to avoid errors. If you want to know more about the difference between internal run and total run lengths, or if you want to run the model for more than 1 year, check Run configuration for multiple years.
    • Number of runs per PBS submission
      runspersub: 5
      ACCESS-ESM configurations are often run in multiple steps (or cycles), with payu running a maximum of runspersub internal runs for every PBS job submission. Note: If we increase runspersub, we might need to increase the walltime in the PBS resources.
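Putting these pieces together, the top of an edited config.yaml might look like this (a sketch assembled from the values shown above; tm70 is just the example project):

jobname: pre-industrial
queue: normal
walltime: 20:00:00
project: tm70
runspersub: 5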
    To know more about other configuration settings for the config.yaml file, please check how to configure your experiment with payu."},{"location":"models/run-a-model/run-access-esm/#run-access-esm-configuration","title":"Run ACCESS-ESM configuration","text":"

    After editing the configuration, we are ready to run ACCESS-ESM. ACCESS-ESM suites run on Gadi through a PBS job submission managed by payu.

    "},{"location":"models/run-a-model/run-access-esm/#payu-setup-optional","title":"Payu setup (optional)","text":"As a first step, from the control directory, is good practice to run:
    payu setup
This will prepare the model run, based on the experiment configuration. payu setup laboratory path: /scratch/$PROJECT/$USER/access-esm binary path: /scratch/$PROJECT/$USER/access-esm/bin input path: /scratch/$PROJECT/$USER/access-esm/input work path: /scratch/$PROJECT/$USER/access-esm/work archive path: /scratch/$PROJECT/$USER/access-esm/archive Loading input manifest: manifests/input.yaml Loading restart manifest: manifests/restart.yaml Loading exe manifest: manifests/exe.yaml Setting up atmosphere Setting up ocean Setting up ice Setting up coupler Checking exe and input manifests Updating full hashes for 3 files in manifests/exe.yaml Creating restart manifest Updating full hashes for 30 files in manifests/restart.yaml Writing manifests/restart.yaml Writing manifests/exe.yaml Note: You can skip this step as it is also included in the run command. However, running it explicitly helps to check for errors and make sure executable and restart directories are accessible."},{"location":"models/run-a-model/run-access-esm/#run-configuration","title":"Run configuration","text":"To run the ACCESS-ESM configuration for one internal run length (controlled by runtime in the config.yaml file), run:
    payu run -f
This will submit a single job to the queue with a total run length of runtime. If there is no previous run, it will start from the start date indicated in the config.yaml file, otherwise it will perform a warm restart from a previously saved restart file. Note: The -f option ensures that payu will run even if there is an existing non-empty work directory, which happens if a run crashes. payu run -f Loading input manifest: manifests/input.yaml Loading restart manifest: manifests/restart.yaml Loading exe manifest: manifests/exe.yaml payu: Found modules in /opt/Modules/v4.3.0 qsub -q normal -P <project> -l walltime=11400 -l ncpus=384 -l mem=1536GB -N pre-industrial -l wd -j n -v PAYU_PATH=/g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin,MODULESHOME=/opt/Modules/v4.3.0,MODULES_CMD=/opt/Modules/v4.3.0/libexec/modulecmd.tcl,MODULEPATH=/g/data3/hh5/public/modules:/etc/scl/modulefiles:/opt/Modules/modulefiles:/opt/Modules/v4.3.0/modulefiles:/apps/Modules/modulefiles -W umask=027 -l storage=gdata/access+gdata/hh5 -- /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin/python3.9 /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.01/bin/payu-run <job-ID>.gadi-pbs"},{"location":"models/run-a-model/run-access-esm/#run-configuration-for-multiple-years","title":"Run configuration for multiple years","text":"If you want to run the ACCESS-ESM configuration for multiple internal run lengths (controlled by runtime in the config.yaml file), you can use the option -n:
    payu run -n <number-of-runs>
    This will run the configuration number-of-runs times with a total run length of runtime * number-of-runs. The number of consecutive PBS jobs submitted to the queue depends on the runspersub value specified in the config.yaml file."},{"location":"models/run-a-model/run-access-esm/#understand-runtime-runspersub-and-n-parameters","title":"Understand runtime, runspersub, and -n parameters","text":"With the correct use of runtime, runspersub, and -n parameters, we can have full control of our run.
    • runtime defines the internal run length.
    • runspersub defines the maximum number of internal runs for every PBS job submission.
    • -n sets the number of internal runs to be performed.
    Let's have some practical examples:
• Run 20 years of simulation, with resubmission every 5 years To have a total run length of 20 years, with a resubmission cycle of 5 years, we can leave runtime at the default value of 1 year, set runspersub to 5, and run the configuration using -n 20:
      payu run -n 20
      This will submit subsequent jobs for the following years: 1 to 5, 6 to 10, 11 to 15, and 16 to 20. With a total of 4 PBS jobs.
• Run 7 years of simulation, with resubmission every 3 years To have a total run length of 7 years, with a resubmission cycle of 3 years, we can leave runtime at the default value of 1 year, set runspersub to 3, and run the configuration using -n 7:
      payu run -n 7
      This will submit subsequent jobs for the following years: 1 to 3, 4 to 6, and 7. With a total of 3 PBS jobs.
    • Run 3 months and 10 days of simulation, in one single submission To have a total run length of 3 months and 10 days, all in a single submission, we have to set runtime to:
      years: 0\n            months: 3\n            days: 10\n        
      set runspersub to 1 (or any value > 1), and run the configuration without -n (or with -n equals 1):
      payu run
    • Run 1 year and 4 months of simulation, with resubmission every 4 months To have a total run length of 1 year and 4 months (16 months), we will have to split it into multiple internal runs. For example, we can have 4 internal runs of 4 months each. Therefore, we will have to set runtime to:
      years: 0\n            months: 4\n            days: 0\n        
      Since the internal run length is set to 4 months, to resubmit our jobs every 4 months (i.e. every internal run), we have to set runspersub to 1. Finally, we will perform 4 internal runs by running the configuration with -n 4:
      payu run -n 4
    "},{"location":"models/run-a-model/run-access-esm/#monitor-runs","title":"Monitor runs","text":"Currently, there is no specific tool to monitor ACCESS-ESM runs. One way to check the status of our run is running:
    qstat -u $USER
    This will show the status of all your PBS jobs (if there is any PBS job submitted): qstat -u $USER Job id\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Name\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0User\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Time Use\u00a0S Queue --------------------- ---------------- ---------------- -------- - ----- <job-ID>.gadi-pbs\u00a0\u00a0\u00a0\u00a0\u00a0pre-industrial\u00a0\u00a0\u00a0<$USER>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<time>\u00a0R\u00a0normal-exec <job-ID>.gadi-pbs\u00a0\u00a0\u00a0\u00a0\u00a0<other-job-name>\u00a0<$USER>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<time>\u00a0R\u00a0normal-exec <job-ID>.gadi-pbs\u00a0\u00a0\u00a0\u00a0\u00a0<other-job-name>\u00a0<$USER>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<time>\u00a0R\u00a0normal-exec If you changed the jobname in the PBS resources of the Master Configuration file, that will be your job's Name instead of pre-industrial. S indicates the status of your run:
    • Q \u2192 Job waiting in the queue to start
    • R \u2192 Job running
    • E \u2192 Job ending
If there is no listed job with your jobname (or if there is no job submitted at all), your run might have successfully completed, or might have been terminated due to an error."},{"location":"models/run-a-model/run-access-esm/#check-the-output-and-error-log-files","title":"Check the output and error log files","text":"While the model is running, payu saves the standard output and standard error into the files access.out and access.err in the control directory. You can examine these files, as the run progresses, to check on its status. After the model has completed its run, or if it crashed, the output and error log files, respectively, are renamed by default into jobname.o<job-ID> and jobname.e<job-ID>."},{"location":"models/run-a-model/run-access-esm/#model-outputs","title":"Model outputs","text":"While the configuration is running, output files (as well as restart files) are moved from the work directory to the archive directory, under /scratch/$PROJECT/$USER/access-esm/archive (also symlinked in the control directory under ~/access-esm/esm-pre-industrial/archive). Both outputs and restarts are stored in subfolders for each different configuration (esm-pre-industrial in our case), and inside the configuration folder, they are subdivided for each internal run. The format of a typical output folder is outputXXX, whereas the typical restart folder is usually formatted as restartXXX, with XXX being the internal run number, starting from 000. In the respective folders, outputs and restarts are separated for each model component. For the atmospheric output data, each file is usually a UM fieldsfile, formatted as <UM-suite-identifier>a.p<output-stream-identifier><time-identifier>. cd /scratch/$PROJECT/$USER/access-esm/archive/esm-pre-industrial ls output000 pbs_logs restart000 ls output000/atmosphere aiihca.daa1210 aiihca.daa1810 aiihca.paa1apr aiihca.paa1jun aiihca.pea1apr aiihca.pea1jun aiihca.pga1apr aiihca.pga1jun atm.fort6.pe0 exstat ihist prefix.CNTLGEN UAFLDS_A aiihca.daa1310 aiihca.daa1910 aiihca.paa1aug aiihca.paa1mar aiihca.pea1aug aiihca.pea1mar aiihca.pga1aug aiihca.pga1mar cable.nml fort.57 INITHIS prefix.PRESM_A um_env.py aiihca.daa1410 aiihca.daa1a10 aiihca.paa1dec aiihca.paa1may aiihca.pea1dec aiihca.pea1may aiihca.pga1dec aiihca.pga1may CNTLALL ftxx input_atm.nml SIZES xhist aiihca.daa1510 aiihca.daa1b10 aiihca.paa1feb aiihca.paa1nov aiihca.pea1feb aiihca.pea1nov aiihca.pga1feb aiihca.pga1nov CONTCNTL ftxx.new namelists STASHC aiihca.daa1610 aiihca.daa1c10 aiihca.paa1jan aiihca.paa1oct aiihca.pea1jan aiihca.pea1oct aiihca.pga1jan aiihca.pga1oct debug.root.01 ftxx.vars nout.000000 thist aiihca.daa1710 aiihca.daa2110 aiihca.paa1jul aiihca.paa1sep aiihca.pea1jul aiihca.pea1sep aiihca.pga1jul aiihca.pga1sep errflag hnlist prefix.CNTLATM UAFILES_A References
    • https://github.com/coecms/esm-pre-industrial
    • https://payu.readthedocs.io/en/latest/usage.html
    "},{"location":"models/run-a-model/run-access-om/","title":"Running ACCESS-OM2 Model","text":"

    Welcome to ACCESS-OM2 \u2014 a coupled ocean-ice model and collection of configurations developed by the Consortium for Ocean-Sea Ice Modelling in Australia (COSIMA).

ACCESS-OM2 consists of the MOM 5.1 ocean model, the CICE 5.1.2 sea ice model, and a file-based atmosphere called YATM, coupled together using the OASIS3-MCT v2.0 coupler. Regridding is done using ESMF and KDTREE2.

    The configurations available here are updated from the version 1.0 configurations described in Kiss et al. (2020). Further details are given in the ACCESS-OM2 technical report.

    "},{"location":"models/run-a-model/run-access-om/#how-to-access-existing-access-om2-output","title":"How to access existing ACCESS-OM2 output","text":"

    NCI users can access model output via the COSIMA Cookbook. A good place to start is the data explorer, which will give an overview of the data available. Also see this overview of 0.1\u00b0 IAF outputs.

    Non-NCI users can access a subset of the ACCESS-OM2 output via the COSIMA Model Output Collection.

    "},{"location":"models/run-a-model/run-access-om/#how-to-run-access-om2","title":"How to run ACCESS-OM2","text":"

Start by reading the Quick start guide. If you are using gadi.nci.org.au at the NCI National Facility and are happy to use our pre-compiled executables, then this should be all you need. The page also provides instructions for building your own executables.
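As a rough illustration of the end-to-end workflow, here is a hypothetical sketch of running a pre-built 1° configuration with payu (the module setup and the configuration repository shown are assumptions used for illustration; the Quick start guide remains the authoritative recipe):
# load payu (one common route on Gadi is the hh5 conda environment; an assumption here)\n
module use /g/data/hh5/public/modules\n
module load conda/analysis3\n
# clone an example 1-degree JRA55 interannual-forcing control directory\n
git clone https://github.com/COSIMA/1deg_jra55_iaf.git\n
cd 1deg_jra55_iaf\n
# submit the run to the PBS queue\n
payu run\n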

NOTE: All ACCESS-OM2 model components and configurations are undergoing continual improvement. To be kept informed of updates, problems and bug fixes as they arise, we strongly recommend that you \"watch\" this repo (via the button at the top of the screen, asking to be notified of all conversations), and also watch the component models of whichever configuration(s) you are using, as well as payu.

    "},{"location":"models/run-a-model/run-access-om/#getting-help-and-reporting-issues","title":"Getting help and reporting issues","text":"

For all help requests and error reports, please create a \"GitHub issue\" at ACCESS-OM2 issues.

    "},{"location":"models/run-a-model/run-access-om/#for-self-help","title":"For self-help","text":"

Setting up and running the model is primarily supported via the ACCESS-OM2 wiki. It is a "wiki", so feel free to correct and contribute.

    "},{"location":"models/run-a-model/run-access-om/#how-to-update-this-wiki","title":"How to update this wiki","text":"

The wiki attached to a public repository can be edited by anyone. Just navigate to the page you wish to edit and click on the 'edit' button at the top right-hand side.

    "},{"location":"models/run-a-model/run-access-om/#references","title":"References","text":""},{"location":"models/run-a-model/getting_started/","title":"Getting Started to Run a Model","text":"

    If Model, Model Component or Model Configuration are not familiar terms for you, please check out our Model overview.

    If you have not run a model before, our Getting Started Guide will give you the basics to access the Model infrastructure on the high-performance-computing facility Gadi@NCI.

Detailed guides for the different Model configurations can then be found on the following pages:
• Run ACCESS-ESM for the ACCESS Earth System Model configurations
• Run ACCESS-CM for the ACCESS Coupled Model configurations
• Run ACCESS-AM for the ACCESS Atmosphere Model configurations
• Run ACCESS-OM for the ACCESS Ocean Model configurations

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/","title":"Getting Started: Computing Access (Gadi@NCI)","text":"

Here, we provide the important information you need to access Gadi and the large datasets we curate on NCI's storage:

1) Get an NCI Account
2) Join relevant NCI projects
3) Logging in to Gadi@NCI
4) Computing on Gadi

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#1-nci-account","title":"1) NCI Account","text":"

    To be able to work with our data, you need an NCI account.

If you don't have one yet, sign up here.

Note: You will need an institutional email address from an organisation that allows access to NCI (e.g. CSIRO, a university, etc.).

    Once you have signed up, you will be allocated a username. We will refer to this username (e.g. kf1234) as $USER.

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#2-join-relevant-nci-projects","title":"2) Join relevant NCI projects","text":"

There is a plethora of NCI projects that may or may not be relevant to you.

We recommend having a chat with your supervisor to identify the relevant projects; in any case, we suggest joining xp65 for MED code as well as kj13 for MED data.

    To get this conversation started, we list some possibly relevant projects below:

| Project | Description (with link; * indicates compute resource) |
| ------- | ------------------------------------------------------ |
| **ACCESS-NRI projects** | |
| tm70 | ACCESS-NRI Working Project * |
| iq82 | ACCESS-NRI MED Compute * |
| kj13 | ACCESS-NRI MED Data Dev |
| ct11 | ACCESS-NRI Replicated Datasets |
| xp65 | ACCESS-NRI Analysis Environments |
| **ACCESS projects** | |
| access | ACCESS software sharing |
| p66 | ACCESS - AOGCM / support development of the ACCESS modelling system * |
| p73 | ACCESS Model Output Archive (AOGCM) |
| **Data projects** | |
| hh5 | Climate-LIEF Data Storage |
| ub7 | Seasonal Prediction ACCESS-S1 Hindcast |
| ux62 | Seasonal Prediction ACCESS-S2 Hindcast |
| cb20 | ESGF CMIP3 Replication Data |
| al33 | ESGF CMIP5 Replication Data |
| rr3 | ESGF CMIP5 Australian Data Publication |
| oi10 | ESGF CMIP6 Replication Data |
| fs38 | ESGF CMIP6 Australian Data Publication |
| rt52 | ERA5 Replicated Data: Single and pressure-levels data |
| uc16 | ERA5 Replicated Datasets on Potential Temperature & Vorticity Levels |
| zz93 | ERA5-Land Replicated Data |
| zv2 | Australian Gridded Climate Data (AGCD) Collection |
| qv56 | Reference Datasets for Climate Model Analysis/Forcing |
| cj50 | COSIMA Model Output Collection |
| **Other projects** | |
| ik11 | COSIMA shared working space |
| v45 | Ocean Extremes * |
| ga6 | Modelling the formation of sedimentary basins and continental margins * |
| m18 | Evolution and dynamics of the Australian lithosphere * |
| q97 | Earth dynamics and resources over the last billion years * |
| qu79 | Collaborative REAnalysis Technical Environment Intercomparison Project (CREATE-IP) |

    To join a project or find more projects, please use this NCI website.
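Since access to each NCI project is managed through a Unix group of the same name, one quick way to confirm which projects you have joined is the standard groups command:
groups\n
The codes of the projects you are a member of (e.g. xp65) should appear in the output.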

    The first project that you join will become your default login project, e.g. xp65. We will refer to it as $PROJECT and we show you how to change it below.

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#3-logging-in-to-gadinci","title":"3) Logging in to Gadi@NCI","text":"

If you have never logged onto Gadi before, we recommend taking a look at NCI's Welcome to Gadi website. It provides all the important commands and information for logging in to Gadi properly, such as the following: \"To run jobs on Gadi, you need to first log in to the system. Users on Mac/Linux can use the built-in terminal. For Windows users, we recommend using MobaXterm as the local terminal. Logging in to Gadi happens through a Gadi login node.\"

When you log in, via the command

    ssh -Y $USER@gadi.nci.org.au\n
you will enter your $HOME directory with your default $PROJECT and your default SHELL. Both are saved in $HOME/.config/gadi-login.conf, and you can print them via
    cat $HOME/.config/gadi-login.conf\n
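The file typically contains two lines of the form shown below (the values here are placeholders; yours will reflect your own default project and shell):
PROJECT xp65\n
SHELL /bin/bash\n
Editing the PROJECT line (and logging in again) is one way to change your default project.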

The -Y option is needed to run graphical tools: it enables the forwarding of trusted X protocol messages between the X server on your local system and X programs on Gadi. You need to enable the X Window System on your local system before running ssh. This can be done by running an X server such as XQuartz (Mac), MobaXterm (MS Windows), or startx or similar (Linux).

Again, for more useful information we recommend checking out NCI's Welcome to Gadi website.

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#4-computing-on-gadi","title":"4) Computing on Gadi","text":""},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#gadi-resources","title":"Gadi Resources","text":"

Coupled climate models like ACCESS-CM involve, among other things, the calculation of complex mathematical equations that describe the physics of the atmosphere and oceans. Performed at hundreds of millions of points around the Earth, these calculations require vast computing power to complete in a reasonable amount of time, and thus rely on high-performance computing (HPC) systems like Gadi. The Gadi supercomputer can handle more than 10 million billion (10 quadrillion) calculations per second and is connected to 100,000 terabytes of high-performance research data storage.

An overview of Gadi resources, such as compute, storage and PBS jobs, is given below.

    Useful NCI commands to check your available compute resources are:

| Command | Purpose |
| ------- | ------- |
| logout or Ctrl+D | To exit a session |
| hostname | Displays login node details |
| module list | Modules currently loaded |
| module avail | Available modules |
| nci_account -P [proj] | Compute allocation for [proj] |
| nqstat -P [proj] | Jobs running/queued in [proj] |
| lquota | Storage allocation and usage for all your projects |
"},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#compute-hours","title":"Compute Hours","text":"

    Compute allocations are granted to projects instead of directly to users and, hence, you need to be a member of a project in order to use its compute allocation. To run jobs on Gadi, you need to have sufficient allocated compute hours available, where the job cost depends on the resources reserved for the job and the amount of walltime it uses.
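For example, you can check how many service units (SU) a project has left with nci_account (xp65 is a placeholder project code here):
nci_account -P xp65\n
As a rough worked example of job cost: on a queue charged at 2 SU per core-hour (the rate for Gadi's normal queue at the time of writing), a job reserving 48 cores for 3 hours of walltime consumes 48 × 3 × 2 = 288 SU.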

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#storage","title":"Storage","text":"

    Each user has a project-independent $HOME directory, which has a storage limit of 10 GiB. All data on /home is backed up.

Through project membership, a user gets access to that project's storage space within its folders on the /scratch and /g/data filesystems.
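For instance, assuming you are a member of xp65 (a placeholder here), your project spaces would sit under paths like those below, and lquota summarises your usage across them:
# per-user subdirectories under a project are conventional and may need to be created first\n
ls /scratch/xp65/$USER\n
ls /g/data/xp65/$USER\n
lquota\n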

    "},{"location":"models/run-a-model/getting_started/access_to_gadi_at_nci/#pbs-jobs","title":"PBS Jobs","text":"

    To run compute tasks such as an ACCESS-CM suite on Gadi, users need to submit them as jobs to queues. Within a job submission, you can specify the queue, duration and computational resources needed for your job. When a job submission is accepted, it is assigned a jobID (shown in the return message) that can then be used to monitor the job\u2019s status.

On job completion, the contents of the job's standard output and error streams are copied to files in the working directory named <jobname>.o<jobid> and <jobname>.e<jobid>, respectively. Users should check these two log files before proceeding with any post-processing of output from the corresponding job.
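To make this concrete, below is a minimal, hypothetical PBS job script; every value (project code, queue, job name, resources, storage directive and program name) is a placeholder to adapt to your own work:
#!/bin/bash\n
#PBS -P xp65                            # project to charge (placeholder)\n
#PBS -q normal                          # queue to submit to\n
#PBS -N example-job                     # job name, used in <jobname>.o<jobid> / <jobname>.e<jobid>\n
#PBS -l walltime=01:00:00               # maximum wall time\n
#PBS -l ncpus=48                        # number of CPU cores\n
#PBS -l mem=190GB                       # memory\n
#PBS -l storage=gdata/xp65+scratch/xp65 # filesystems the job needs access to\n
\n
cd $PBS_O_WORKDIR                       # start in the directory qsub was called from\n
./my_program > output.log               # replace with your actual task\n
Saved as, say, job.sh, it would be submitted with qsub job.sh; the returned jobID can then be monitored with qstat as described above.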

    "}]} \ No newline at end of file diff --git a/development_site/sitemap.xml.gz b/development_site/sitemap.xml.gz index 3595027ad..72ecfb102 100644 Binary files a/development_site/sitemap.xml.gz and b/development_site/sitemap.xml.gz differ