All command suggestions below assume a Linux operating system with `make` installed.
The easiest way to set up all the needed tools is to create an Anaconda environment from the given `environment.yml`. Just type:

```
make conda
```

Then, before running the commands below, make sure you activate the environment:

```
source activate dicedps
```

If working with an Environment Modules setup, make sure you are in a clean state:

```
module purge
module load anaconda3/4.3.0 cmake/3.8.2 gcc/5.3.1 netcdf/4.4.0 openmpi/1.10.1 openjdk/1.8.0
export PYTHONPATH=
```

Type:

```
make brick-install
```

to install the missing R packages and compile the Fortran submodules needed by BRICK.
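If you want to confirm that the Fortran pieces were actually built, one quick check is to look for compiled shared objects under the BRICK sources (a rough sanity check only; the exact directory layout is an assumption):

```
# List compiled shared objects under the BRICK package (sketch; adjust the path if needed).
find pkgs/brick -name '*.so'
```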
Create an output directory:

```
mkdir -p output/brick
```

Test the BRICK calibration machinery with:

```
make brick-calib-cauchy NCHAIN=2 NITER=10000
```

This should create a file `brick_mcmc_fgiss_TgissOgour_scauchy_t18802011_z18801900_o4_h50_n10000.rds` under `output/brick` containing two Markov chains with 1e4 samples each.
Change the definition of `LONGRUN` in `makefile_common` to match your setup for running computationally expensive make targets. In my case I have a bash helper `qmake.sh` that creates a `.pbs` file and submits a job to a cluster to make the given target with the required resources (see the sketch below).
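For illustration only, such a helper might look roughly like this (everything here is an assumption about one possible PBS setup: the resource line and job layout are placeholders to adapt to your scheduler):

```
#!/bin/bash
# qmake.sh-style sketch: wrap "make <target>" in a PBS job (hypothetical resources).
target="$1"
cat > "${target}.pbs" <<EOF
#PBS -N ${target}
#PBS -l nodes=1:ppn=16
#PBS -l walltime=48:00:00
cd \$PBS_O_WORKDIR
make ${target}
EOF
qsub "${target}.pbs"
```

With something like this in place, `LONGRUN` can point at the wrapper so that expensive targets are submitted to the scheduler instead of running on the login node.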
By default, `NITER` (the number of MCMC samples) is 20e6 and `NCHAIN` (the number of parallel MCMC chains) is 10. To reduce the resources needed, decrease `NITER` and/or `NCHAIN` in `pkgs/brick/makefile`. Then run from the main directory:
```
make brick-calib-all
```

Diagnostics and the final chains are created with:

```
make brick-diag-all
```

Your output directory should look like this:
```
output/
└── [4.0K] brick
    ├── [826K] brick_mcmc_fgiss_TgissOgour_scauchy_t18802011_z18801900_o4_h50_n10000.rds
    ├── [9.4G] brick_mcmc_fgiss_TgissOgour_scauchy_t18802011_z18801900_o4_h50_n20000000.rds
    ├── [ 17M] brick_mcmc_fgiss_TgissOgour_scauchy_t18802011_z18801900_o4_h50_n20000000_t1000_b5-chains.rds
    ├── [6.3K] brick_mcmc_fgiss_TgissOgour_scauchy_t18802011_z18801900_o4_h50_n20000000_t1000_b5-hd.rds
    ├── [ 16M] brick_mcmc_fgiss_TgissOgour_scauchy_t18802011_z18801900_o4_h50_n20000000_t1000_b5.nc
    ├── [832K] brick_mcmc_fgiss_TgissOgour_schylek_t18802011_z18801900_o4_h50_n10000.rds
    ├── [9.4G] brick_mcmc_fgiss_TgissOgour_schylek_t18802011_z18801900_o4_h50_n20000000.rds
    ├── [ 19M] brick_mcmc_fgiss_TgissOgour_schylek_t18802011_z18801900_o4_h50_n20000000_t1000_b5-chains.rds
    ├── [6.3K] brick_mcmc_fgiss_TgissOgour_schylek_t18802011_z18801900_o4_h50_n20000000_t1000_b5-hd.rds
    ├── [ 18M] brick_mcmc_fgiss_TgissOgour_schylek_t18802011_z18801900_o4_h50_n20000000_t1000_b5.nc
    ├── [829K] brick_mcmc_fgiss_TgissOgour_spaleosens_t18802011_z18801900_o4_h50_n10000.rds
    ├── [9.4G] brick_mcmc_fgiss_TgissOgour_spaleosens_t18802011_z18801900_o4_h50_n20000000.rds
    ├── [ 15M] brick_mcmc_fgiss_TgissOgour_spaleosens_t18802011_z18801900_o4_h50_n20000000_t1000_b5-chains.rds
    ├── [6.4K] brick_mcmc_fgiss_TgissOgour_spaleosens_t18802011_z18801900_o4_h50_n20000000_t1000_b5-hd.rds
    └── [ 14M] brick_mcmc_fgiss_TgissOgour_spaleosens_t18802011_z18801900_o4_h50_n20000000_t1000_b5.nc
```
where the `*_t1000_b5{.nc,-chains.rds}` files contain the thinned chains, and the `*-hd.rds` files contain diagnostic information. Checking the diagnostic output will tell you how many samples remain after burn-in removal and thinning.
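A quick way to look at one of the diagnostic files from the command line (a sketch only; I am assuming the `-hd.rds` object prints something readable with `str()`):

```
# Print the structure of one diagnostics file (base R only; adjust the file name as needed).
Rscript -e 'str(readRDS("output/brick/brick_mcmc_fgiss_TgissOgour_scauchy_t18802011_z18801900_o4_h50_n20000000_t1000_b5-hd.rds"))'
```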
Make sure you have rights to clone the serial-borg-moea repository. Then:
```
make borg
```

If compilation succeeds, you should have the following files among others:
```
pkgs/borg/build
|-- bin
|   |-- borg
|   |-- dtlz2_advanced
|   |-- dtlz2_ms
|   `-- dtlz2_serial
`-- lib
    |-- libborg.so
    `-- libborgms.so
```
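Before running the examples, you can also confirm that the MPI-enabled library was linked against the MPI stack you loaded above (a generic sanity check, not a required step):

```
# Show the shared libraries libborgms.so was linked against; look for OpenMPI entries.
ldd pkgs/borg/build/lib/libborgms.so
```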
Check that both serial and parallel versions work:

```
cd pkgs/borg/build/bin
./dtlz2_serial
# (make sure the last number shown is 0.746296991175737445268)
mpirun -np 10 ./dtlz2_ms
# (stop with Ctrl-C and check runtime_0.txt)
```

From the Anaconda environment, type:

```
pip install -r requirements.txt
```

Check that the Python bindings for Borg work:

```
python pkgs/borg4platypus/examples/simple_borg.py
```

The historical forcings data stop in 2011, so an automatic ARIMA model is used to estimate the missing forcings up to 2015 (a sketch of the idea follows the command below):
```
make forcings
```
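For illustration only, here is roughly what that extrapolation amounts to, using R's `forecast` package on a stand-in series (the actual `make forcings` implementation may differ; the data, dates, and package choice below are assumptions):

```
# Fit an automatic ARIMA to a yearly series ending in 2011 and forecast 4 more
# years (2012-2015). The data here are random placeholders, not real forcings.
Rscript -e '
  library(forecast)
  x <- ts(rnorm(132), start = 1880)   # stand-in for a forcing series 1880-2011
  fit <- auto.arima(x)
  print(forecast(fit, h = 4)$mean)    # point estimates for 2012-2015
'
```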
Create an output dir:

```
mkdir output/dicedps
```

Check that optimization works on a simple instance:
```
make opt-test
```

Then check your cluster setup:

```
make opt-mini
```

If everything works, proceed with the full-scale optimization:

```
make opt-full
```

This should give you the following runtime files:
```
output/dicedps/
├── u1w1000doeclim_mrbfXdX41_i1p400_nfe5000000_objv2_cnone_s1_seed0001_runtime.csv
├── u1w1000doeclim_mrbfXdX41_i1p400_nfe5000000_objv2_cnone_s2_seed0002_runtime.csv
├── u1w1000doeclim_mrbfXdX41_i1p400_nfe5000000_objv2_cnone_s3_seed0003_runtime.csv
├── u1w1000doeclim_mrbfXdX41_i1p400_nfe5000000_objv2_cnone_s4_seed0004_runtime.csv
├── u1w1000doeclim_mrbfXdX41_i1p400_nfe5000000_objv2_cnone_s5_seed0005_runtime.csv
├── u1w1000doeclim_mtime_i1p400_nfe5000000_objv2_cinertmax_s1_seed0001_runtime.csv
├── u1w1000doeclim_mtime_i1p400_nfe5000000_objv2_cinertmax_s2_seed0002_runtime.csv
├── u1w1000doeclim_mtime_i1p400_nfe5000000_objv2_cinertmax_s3_seed0003_runtime.csv
├── u1w1000doeclim_mtime_i1p400_nfe5000000_objv2_cinertmax_s4_seed0004_runtime.csv
└── u1w1000doeclim_mtime_i1p400_nfe5000000_objv2_cinertmax_s5_seed0005_runtime.csv
```

These files contain the Borg runtime iterations.
We’ll use PyPy to speed things up. Make sure the requirements are available:

```
pypy3 -m ensurepip
pypy3 -m pip install tqdm
```

Check Pareto merging with:
```
make par-test
```

Run the actual merge with:

```
make par-merged
```

This should give you the following merged Pareto files:

```
output/dicedps/
├── u1w1000doeclim_mtime_i1p400_nfe5000000_objv2_cinertmax_s0_seed0000_merged.csv
└── u1w1000doeclim_mrbfXdX41_i1p400_nfe5000000_objv2_cnone_s0_seed0000_merged.csv
```
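If you want a rough idea of how many solutions each merged Pareto set contains, counting lines is usually enough (a sketch; whether the files carry a header or comment lines is an assumption to check):

```
# Approximate number of merged Pareto solutions per file.
wc -l output/dicedps/*_merged.csv
```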
Install MOEAframework and other required tools.
```
make met-setup
```

Check that the setup works:

```
make met-test
```

This should produce a file `output/dicedps/u1w1000doeclim_mtime2_i1p400_nfe5000000_objv2_cnone_s1_seed0001_metrics.csv` with the following metrics table:

```
#NFE ElapsedTime SBX DE PCX SPX UNDX UM Improvements Restarts +hypervolume
250000 2148.925116 0.182977 0.000784468 0.81565 0.000196117 0.000196117 0.000196117 34853 0 0.5646850294
500000 4292.769322 0.0343127 0.000328875 0.96503 0.000109625 0.000109625 0.000109625 62819 0 0.5666289431
750000 6439.791670 0.0143852 9.04732e-05 0.985253 9.04732e-05 9.04732e-05 9.04732e-05 84237 0 0.5675188842
[...]
```
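To follow how the hypervolume evolves over a run, one option is to pull the first and last columns out of a metrics file (assuming it stays whitespace-separated, as in the excerpt above):

```
# Print NFE and hypervolume for each recorded snapshot (skips the header line).
awk '!/^#/ {print $1, $NF}' output/dicedps/u1w1000doeclim_mtime2_i1p400_nfe5000000_objv2_cnone_s1_seed0001_metrics.csv
```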
Run:

```
make met-all
```

to compute the metrics for all runtime files.

Check the rerun script with:

```
make rerun-test
```

Rerun the whole thing with:

```
make rerun_v3
```

Run the `fig_*.py` scripts under `pkgs/dicedps/dicedps/plot/`. Make sure to run the `*_data.py` scripts first (see the sketch below).
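A simple way to run them in the right order (a sketch only; I am assuming the scripts need no extra command-line arguments and can be run from their own directory):

```
# Run the data-preparation scripts first, then the figure scripts.
cd pkgs/dicedps/dicedps/plot
for f in *_data.py; do python "$f"; done
for f in fig_*.py; do python "$f"; done
```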
All the code in this repository, unless specified otherwise, is released under the GNU General Public License v3.