
Docs start #7

Draft: wants to merge 6 commits into main
21 changes: 14 additions & 7 deletions README.md
@@ -1,6 +1,6 @@
# RIME - Rapid Impact Model Emulator

2023 IIASA
2024 IIASA

[![latest](https://img.shields.io/github/last-commit/iiasa/rime)](https://github.com/iiasa/rime)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
@@ -15,9 +15,10 @@

When accompanied by climate impacts data (table and/or maps), RIME can be used to take a global mean temperature timeseries (e.g. from an IAM or climate model like [FaIR](https://github.com/OMS-NetZero/FAIR)/[MAGICC](https://live.magicc.org/)), and return tables and maps of climate impacts through time consistent with the warming of the scenario.

*** Key use cases ***
There are two key use-cases for the RIME approach:
1. **Post-process**: Estimating a suite of climate impacts from a global emissions or temperature scenario.
2. **Input**: Reformulating climate impacts data to be used as an input to an integrated assessment model scenario.
2. **Input**: Reformulating climate impacts data to be used as an input to an integrated assessment model scenario. First the scenario is run without climate impacts, to determine the emissions and global warming trajectory. Then, RIME can be used to generate climate impact-adjusted input variables for the IAM scenario.

![RIME_use_cases](https://github.com/iiasa/rime/blob/main/assets/rime_use_cases.jpg?raw=true)

@@ -49,10 +50,10 @@ Pre-processing of tabular impacts data of exposure by GWL, into netcdf datasets
### [`process_tabledata.py`](https://github.com/iiasa/rime/blob/main/rime/process_tabledata.py)
Example script that takes input table of emissions scenarios with global temperature timeseries, and output tables of climate impacts data in IAMC format. Can be done for multiple scenarios and indicators at a time.

### [`process_maps.py`](https://github.com/iiasa/rime/blob/main/rime/process_tabledata.py)
### [`process_maps.py`](https://github.com/iiasa/rime/blob/main/rime/process_maps.py)
Example script that takes an input table of emissions scenarios with global temperature timeseries, and outputs maps of climate impacts through time as netCDF. The output netCDF can be specified either for one scenario and multiple climate impacts, or for multiple scenarios and one indicator.

### [`pp_combined example.ipynb`](https://github.com/iiasa/rime/blob/main/rime/pp_combined_example.py)
### [`pp_combined example.ipynb`](https://github.com/iiasa/rime/blob/main/rime/pp_combined_example.ipynb)
Example jupyter notebook that demonstrates methods of processing both table and map impacts data for IAM scenarios.

### [`test_map_notebook.html`](https://github.com/iiasa/rime/blob/main/rime/test_map_notebook.html)
@@ -61,15 +62,21 @@ Example html maps dashboard. Click download in the top right corner and open locally.
![image](https://github.com/iiasa/rime/assets/17701232/801e2dbe-cbe8-482f-be9b-1457c92ea23e)


## Installation
## Code and installation

At command line, navigate to the directory where you want the installation, e.g. your Github folder.

git clone https://github.com/iiasa/rime.git

Change to the rime folder and install the package including the requirements.
### Using a dedicated environment (Optional but recommended)

Due to its dependencies, using a dedicated Python environment for RIME is recommended in order to avoid conflicts during installation. Depending on your Python setup, this can be done with venv, pyenv, pipenv, (ana/mini)conda, or mamba.

### Installation
Activate the right environment, change to the rime folder (e.g. `cd c:/Github/rime`) and install the package including the requirements.

pip install .

pip install --editable .

## Further information
This package is in a pre-release mode, currently work in progress, under-going testing and not formally published.
24 changes: 24 additions & 0 deletions doc/configuration.rst
@@ -0,0 +1,24 @@
Configuring your RIME runs
**************************

process_config.py
=================

This file is designed to configure settings and working directories for the project. It acts as a central configuration module to be imported across other scripts in the project, ensuring consistent configuration.

Key Features
------------

- **Central Configuration**: Stores and manages settings and directory paths that are used throughout the project.
- **Easy Import**: Can be easily imported with ``from process_config import *``, making all configurations readily available in other scripts.

Dependencies
------------

- ``os``: For interacting with the operating system's file system, likely used to manage file paths and directories.

Usage
-----

This script is not meant to be run directly. Instead, it should be imported at the beginning of other project scripts to ensure they have access to shared configurations, settings, and directory paths.
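A minimal sketch of what such a configuration module might look like (all paths, variable names, and settings below are illustrative assumptions, not taken from the actual RIME code):

```python
# Hypothetical sketch of a process_config.py-style module.
# Every directory name and setting here is an illustrative placeholder.
import os

# Root working directory for the project (illustrative).
wd = os.path.join(os.path.expanduser("~"), "rime_project")

# Derived input/output directories, kept in one place so that every
# script doing `from process_config import *` agrees on them.
input_dir = os.path.join(wd, "input_data")
output_dir = os.path.join(wd, "output_data")

# Example run settings (hypothetical names).
ssps = ["SSP1", "SSP2", "SSP3", "SSP4", "SSP5"]
start_year, end_year = 2020, 2100

# Create the directories on first import so downstream scripts can
# write outputs without checking for them individually.
for d in (input_dir, output_dir):
    os.makedirs(d, exist_ok=True)
```

Importing this module with a star-import then exposes ``input_dir``, ``output_dir``, ``ssps``, etc. to the calling script.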

47 changes: 47 additions & 0 deletions doc/data_preprocessing.rst
@@ -0,0 +1,47 @@
Pre-processing input table data
*******************************

To work with table data, some pre-processing is likely required to achieve the correct formats.

The aim is to go from typically tabular or database data, into a compressed 4-D netCDF format that is used in the emulation. For a given climate impacts dataset, this pre-processing only needs to be done once for preparation, and only if working with table data. Depending on the input dataset size, this can take some time.

The output netCDF has the following dimensions:

- ``gwl``: the global warming level at which impacts are calculated (float).
- ``year``: the year to which the GMT corresponds, if relevant, for example relating to exposure of a population or land cover in year x.
- ``ssp``: the Shared Socioeconomic Pathway, SSP1 through SSP5 (str).
- ``region``: the spatial region to which the impact relates and might be aggregated, e.g. country, river basin, region (str).

Thus, the input data table should also have these dimensions, normally as columns, and additionally one for `variable`.

[example picture of IAMC input file]

The script `generate_aggregated_inputs.py` gives an example of this workflow, using a climate impacts dataset in table form (IAMC-wide), and converting it into a netCDF, primarily using the function `loop_inteprolate_gwl()`. In this case the data also has the `model` and `scenario` columns, which are not needed in the output dataset.
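The core interpolation step can be illustrated with a simplified stand-in: given impacts known at a few discrete warming levels, linearly interpolate to an intermediate level. This is a sketch of the idea only (with made-up values), not the actual ``loop_inteprolate_gwl()`` implementation:

```python
# Simplified stand-in for interpolating impacts given at discrete
# global warming levels (GWLs) onto intermediate warming levels.
# Not the actual RIME implementation; values are illustrative.

def interpolate_gwl(gwls, impacts, target_gwl):
    """Linearly interpolate an impact value at target_gwl.

    gwls: sorted warming levels with known impacts, e.g. [1.5, 2.0, 3.0]
    impacts: impact values at those warming levels
    """
    # Clamp outside the known range rather than extrapolating.
    if target_gwl <= gwls[0]:
        return impacts[0]
    if target_gwl >= gwls[-1]:
        return impacts[-1]
    # Find the bracketing pair and interpolate linearly between them.
    for g0, g1, i0, i1 in zip(gwls, gwls[1:], impacts, impacts[1:]):
        if g0 <= target_gwl <= g1:
            frac = (target_gwl - g0) / (g1 - g0)
            return i0 + frac * (i1 - i0)

# Hypothetical impacts (e.g. population exposed, millions) at three GWLs.
gwls = [1.5, 2.0, 3.0]
exposure = [10.0, 25.0, 70.0]

print(interpolate_gwl(gwls, exposure, 1.75))  # midway between 10 and 25 -> 17.5
```

Repeating this over every (year, ssp, region) combination yields the dense ``gwl`` coordinate of the output dataset.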

generate_aggregated_inputs.py
=============================


Key Features
------------

- **Data Aggregation**: Combines data from multiple files or data streams.
- **File Operations**: Utilizes glob and os modules for file system operations, indicating manipulation of file paths and directories.
- **Data Processing**: Imports ``xarray`` for working with multi-dimensional arrays, and ``pyam`` for integrated assessment modeling frameworks, suggesting complex data manipulation and analysis.

Dependencies
------------

- ``alive_progress``: For displaying progress bars in terminal.
- ``glob``: For file path pattern matching.
- ``os``: For interacting with the operating system's file system.
- ``pyam``: For analysis and visualization of integrated assessment models.
- ``re``: For regular expression matching, indicating text processing.
- ``xarray``: For working with labeled multi-dimensional arrays.
- ``time``: For timing operations.

Usage
-----

Based on the test data, the intention here is to read in a file like ``table_output_cdd_R10.xlsx`` and output a file that looks like ``cdd_R10.nc``.
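As a simplified, hypothetical stand-in for that aggregation step (the real workflow uses ``pyam`` and ``xarray`` and writes netCDF), the target 4-D structure can be illustrated with a plain dictionary keyed by ``(gwl, year, ssp, region)``; the row values and region names here are made up:

```python
# Simplified stand-in for the table -> 4-D dataset aggregation step.
# The real workflow uses pyam/xarray and writes netCDF; here a plain
# dict keyed by (gwl, year, ssp, region) illustrates the target shape.
# All rows, values, and region names are hypothetical.

rows = [
    # gwl, year, ssp,    region,       value (e.g. cooling degree days)
    (1.5, 2050, "SSP2", "R10AFRICA", 120.0),
    (2.0, 2050, "SSP2", "R10AFRICA", 155.0),
    (1.5, 2050, "SSP2", "R10EUROPE", 40.0),
]

# Pivot long-format rows into the 4-D lookup structure.
cube = {}
for gwl, year, ssp, region, value in rows:
    cube[(gwl, year, ssp, region)] = value

# Coordinates of the resulting dataset, matching the netCDF dimensions
# described above.
gwl_coord = sorted({k[0] for k in cube})
region_coord = sorted({k[3] for k in cube})

print(gwl_coord)     # [1.5, 2.0]
print(region_coord)  # ['R10AFRICA', 'R10EUROPE']
```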

18 changes: 18 additions & 0 deletions doc/processing_maps.rst
@@ -0,0 +1,18 @@

process_maps.py
===============



Overview
--------

This example script takes an input table of emissions scenarios along with global temperature time series (`emissions_temp_AR6_small.xlsx`), and input gridded climate impacts data by global warming levels (e.g. `ISIMIP2b_dri_qtot_ssp2_2p0_abs.nc`) and generates maps of climate impacts over time as NetCDF files. It exemplifies the application of the RIME framework to spatially resolved climate impact data, remapping climate impacts data by global warming level to a trajectory of global mean temperature.

Usage
-----

The script's flexibility allows for the specification of outputs either for a single scenario across multiple climate impacts or for multiple scenarios focused on a single indicator.


By processing emissions scenarios and associated temperature projections, ``process_maps.py`` produces NetCDF files that map climate impacts over time.
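The remapping idea can be sketched as follows: for each year of the scenario, select the impact field whose warming level is closest to that year's GMT. This is a simplified nearest-level stand-in (RIME operates on gridded ``xarray`` fields; scalar placeholders stand in for maps, and all values are illustrative):

```python
# Simplified stand-in for remapping impacts-by-GWL to impacts-through-time:
# for each year, select the impact field whose warming level is nearest to
# that year's global mean temperature (GMT).
# Scalars stand in for gridded maps; all values are illustrative.

impact_by_gwl = {1.5: "map_1p5", 2.0: "map_2p0", 3.0: "map_3p0"}

gmt_trajectory = {2030: 1.6, 2050: 1.9, 2100: 2.8}  # degrees C, made up

def nearest_gwl_map(gmt, impact_by_gwl):
    """Return the impact field at the warming level closest to gmt."""
    gwl = min(impact_by_gwl, key=lambda g: abs(g - gmt))
    return impact_by_gwl[gwl]

impacts_through_time = {
    yr: nearest_gwl_map(t, impact_by_gwl) for yr, t in gmt_trajectory.items()
}

print(impacts_through_time)
# {2030: 'map_1p5', 2050: 'map_2p0', 2100: 'map_3p0'}
```

Stacking the selected fields along a time coordinate would give the NetCDF output of impacts through time.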
29 changes: 29 additions & 0 deletions doc/processing_tables.rst
@@ -0,0 +1,29 @@
Example script that takes an input table of emissions scenarios with global temperature timeseries, and outputs tables of climate impacts data in IAMC format. This can be done for multiple scenarios and indicators at a time.





process_tabledata.py
====================

This script processes table data, using Dask for parallel computing to handle potentially large datasets. It includes functionality for reading, processing, and aggregating or summarizing table data.

Key Features
------------

- **Table Data Processing**: Focuses on operations related to table data, including reading, manipulation, and analysis.
- **Parallel Computing**: Utilizes Dask for efficient handling of large datasets, indicating the script is optimized for performance.

Dependencies
------------

- ``dask``: For parallel computing, particularly with ``dask.dataframe`` which is similar to pandas but with parallel computing capabilities.
- ``dask.diagnostics``: For performance diagnostics and progress bars, providing tools for profiling and resource management during computation.
- ``dask.distributed``: For distributed computing, allowing the script to scale across multiple nodes if necessary.

Usage
-----

The script is structured to be executed directly with a ``__main__`` block. It imports configurations from ``process_config.py`` and functions from ``rime_functions.py``, suggesting it integrates closely with other components of the project. Users may need to customize the script to fit their specific data formats and processing requirements.
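The partition-and-process-in-parallel pattern that Dask enables can be sketched with the standard library's ``concurrent.futures`` instead (Dask is not assumed available here; the scenario names, values, and per-scenario calculation are all hypothetical):

```python
# Sketch of the partition -> parallel map -> combine pattern that
# process_tabledata.py uses via Dask, shown with the standard library's
# concurrent.futures instead (Dask not assumed installed).
# Scenario names, values, and the calculation are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def process_scenario(scenario):
    """Stand-in for a per-scenario climate impact calculation."""
    name, peak_gmt = scenario
    return (name, round(peak_gmt * 10, 1))  # made-up impact metric

scenarios = [("SSP2-baseline", 3.2), ("SSP1-1.9", 1.6), ("SSP2-4.5", 2.7)]

# Process the scenario partitions in parallel, then combine the results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(process_scenario, scenarios))

print(results)
```

With Dask, the same shape appears as a ``dask.dataframe`` partitioned over scenarios, with the combine step handled by its scheduler.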

1 change: 1 addition & 0 deletions pyproject.toml
@@ -9,6 +9,7 @@ documentation = "https://github.com/iiasa/rime"
version = "0.1.0"
license = "GNU GPL v3"
readme = "README.md"
keywords = ""

[tool.poetry.dependencies]
python = ">=3.10, <3.11"