Extended MODFLOW User Guide
This guide describes how to obtain and run the extended version of MODFLOW 6, which provides the following advanced capabilities:
- Parallel Computing
- NetCDF4 I/O
The extended edition is a true extension in that it provides these extra features while still containing all functionality available in the standard executable. Any simulation input configuration that is valid for standard MODFLOW 6 is compatible with this extended version. Conversely, any simulation set up to run in parallel can be run with the standard executable.
The extended edition is a separate executable that, together with third-party dependencies, is used in place of the standard MODFLOW executable. It can be either downloaded or built from source, depending on the target OS and the available software environment. There are three major external dependencies that need to be available on the system (a quick availability check is sketched after the list):
- MPI library for communication between the processes
- PETSc library (https://petsc.org/release/) for the parallel iterative solution
- NetCDF library for array-oriented scientific data, including the NetCDF Fortran API
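A minimal way to confirm these are in place is sketched below. This is illustrative only and not part of the official recipe; the pkg-config module names and the presence of mpiexec on the PATH are assumptions that may vary between distributions.

```python
import shutil
import subprocess

# Minimal availability check: look for an MPI launcher on the PATH and
# query pkg-config for the PETSc and NetCDF-Fortran modules.
assert shutil.which("mpiexec") is not None, "no MPI launcher found on PATH"
for pkg in ["petsc", "netcdf-fortran"]:
    subprocess.run(["pkg-config", "--exists", pkg], check=True)
print("MPI, PETSc, and NetCDF-Fortran appear to be available")
```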
A distribution for Windows (win64ext.zip) is made available as part of the nightly build and can be downloaded from this page. Alternatively, if your system administrator allows it, you can install WSL on your Windows machine and follow the instructions for Linux below.
For Linux and macOS, you can build Extended MODFLOW yourself with the following recipe. It requires a modern Fortran compiler (check the MODFLOW 6 GitHub continuous integration workflows for supported compilers), a git client, and conda (or an alternative such as micromamba) installed.
First clone the GitHub repository for MODFLOW 6 to get the program source files
git clone https://github.com/MODFLOW-USGS/modflow6.git
Next, you will create a conda environment called mf6xtd. This mf6xtd environment will have all of the software needed to compile serial and extended versions of MODFLOW 6. Here it is assumed that you have conda installed on your system, but the instructions for alternatives such as micromamba or miniconda are equivalent.
Create an environment file that has the following contents and call it, e.g., mf6xtd_environment.yml
name: mf6xtd
channels:
- conda-forge
dependencies:
- pkg-config
- openmpi
- libnetcdf
- netcdf-fortran
- gcc
- gfortran
- petsc
- meson>=1.1.0
- ninja
With this file, the mf6xtd environment will be created by executing the following command
conda env create -f mf6xtd_environment.yml
To build the extended version of MODFLOW, activate the environment
conda activate mf6xtd
and run the following commands from a terminal
cd modflow6
meson setup builddir -Ddebug=false -Dextended=true --prefix=$(pwd) --libdir=bin
meson install -C builddir
meson test --verbose --no-rebuild -C builddir
If everything is working properly, the last command should report that all tests completed without errors.
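As an additional sanity check (a sketch, assuming the install prefix used above, so that the executable ends up in bin/ under the repository root), the freshly built binary should report its version:

```python
import subprocess

# run from the modflow6 repository root; "-v" prints the version and exits
subprocess.run(["./bin/mf6", "-v"], check=True)
```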
The installation from source is described in more detail in the PARALLEL.md file.
This section contains a full workflow example for using Parallel MODFLOW. It uses the FloPy package to pre- and post-process the data.
The following code snippet sets up a simple 1x1x10 strip model with constant head boundaries on either side, runs the simulation, and plots the resulting hydraulic head.
# import packages
import flopy
import numpy as np
import matplotlib.pyplot as plt
# set up the strip model
ws = "strip"
name = "strip"
sim = flopy.mf6.MFSimulation(sim_name=name, sim_ws=ws, exe_name='mf6')
tdis = flopy.mf6.ModflowTdis(sim)
ims = flopy.mf6.ModflowIms(sim, inner_dvclose=0.0001, outer_dvclose=0.001)
gwf = flopy.mf6.ModflowGwf(sim, modelname=name)
dis = flopy.mf6.ModflowGwfdis(gwf, nlay=1, nrow=1, ncol=10)
ic = flopy.mf6.ModflowGwfic(gwf, strt=0)
npf = flopy.mf6.ModflowGwfnpf(gwf, k=1.0)
chd = flopy.mf6.ModflowGwfchd(gwf, stress_period_data=[[(0, 0, 0), 1.],
[(0, 0, 9), 0.]])
head_file = name + '.hds'
oc = flopy.mf6.ModflowGwfoc(gwf,
head_filerecord=head_file,
saverecord=[('HEAD', 'LAST')],
printrecord=[('HEAD', 'LAST')])
# save to disk
sim.write_simulation(silent=True)
# run the model
sim.run_simulation(silent=True)
# plot the results
serial_head = gwf.output.head().get_data()
pmv = flopy.plot.PlotMapView(gwf)
_ = pmv.plot_array(serial_head)
Output: a map-view plot of the simulated head array.
Next, the model is split into two independent models connected by a GWF-GWF exchange. The node mapping file is stored so it can be used to reconstruct the head arrays afterwards.
# split the model in two and write to disk
splitter = flopy.mf6.utils.Mf6Splitter(sim)
mask = 5*[0] + 5*[1]
par_sim = splitter.split_model(mask)
par_sim.set_sim_path("split_strip")
par_sim.write_simulation(silent=True)
# save the node mapping for merging the data after the simulation
splitter.save_node_mapping("node_mapping.json")
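As an optional, purely illustrative check, the split simulation should now contain two sub-models whose names are derived from the base name strip:

```python
# the splitter appends an index to the original model name
print(par_sim.model_names)  # expected model names: strip_0 and strip_1
```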
Run the simulation in parallel and recombine the heads:
# run the split model in parallel on two cores
par_sim.run_simulation(processors=2, silent=True)
# recombine heads from models
gwfs = [par_sim.get_model(n) for n in par_sim.model_names]
model_heads = {i: m.output.head().get_data() for i, m in enumerate(gwfs)}
reconstructed_head = splitter.reconstruct_array(model_heads)
# plot the results and the difference
serial_head = gwf.output.head().get_data()
pmv = flopy.plot.PlotMapView(gwf)
_ = pmv.plot_array(reconstructed_head)
Output: a map-view plot of the reconstructed heads.
The plots look identical. To verify, calculate the maximum difference between the two simulation runs:
# calculate the maximum difference between parallel and serial
diff_max = np.amax(abs(reconstructed_head - serial_head))
print(f"Maximum absolute head difference with serial: {diff_max}")
Output:
Maximum absolute head difference with serial: 2.220446049250313e-16
The example above used the FloPy API to run the simulation in parallel on 2 CPU cores. Alternatively, the Extended MODFLOW executable can be called directly from the command line using the MPI launcher:
mpiexec -np 2 mf6 -p
Just as for the standard executable, this command should be issued in the working directory of the model, i.e., where the file mfsim.nam is located.
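The same launch can also be scripted from Python (a minimal sketch, assuming mpiexec and the extended mf6 executable are both available on the PATH):

```python
import subprocess

# issue the MPI launch from the directory that contains mfsim.nam
subprocess.run(
    ["mpiexec", "-np", "2", "mf6", "-p"], cwd="split_strip", check=True
)
```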
The extended build of MODFLOW 6 can optionally create model NetCDF output files. MODFLOW 6 supports two types of NetCDF output, referred to here as "structured" and "mesh". By default, MODFLOW 6 NetCDF outputs contain time series of the model's dependent variable, e.g., head.
MODFLOW 6 mesh outputs comply with UGRID 1.0 conventions and are based on the UGRID 3D layered mesh topology. A UGRID based NetCDF file explicitly describes the grid with a set of variables that gridded data can be associated with. MODFLOW 6 mesh outputs can be generated with a DIS or DISV based GWF, GWT, or GWE model.
MODFLOW 6 structured NetCDF outputs define x and y geometric coordinate variables and are based on the CF Metadata Conventions. MODFLOW 6 structured outputs can be generated with a DIS based GWF, GWT, or GWE model.
The split model name files from the above example can be updated to activate the creation of NetCDF outputs. Here we set one model to create a mesh output (UGRID) and the other a structured output, simply to illustrate these as distinct output types.
# update GWF model strip_0 to write mesh NetCDF output
par_sim.gwf[0].name_file.nc_mesh2d_filerecord = "strip_0.nc"
# update GWF model strip_1 to write structured NetCDF output
par_sim.gwf[1].name_file.nc_structured_filerecord = "strip_1.nc"
Running the simulation creates two model NetCDF outputs, strip_0.nc and strip_1.nc:
# write and run the simulation
par_sim.write_simulation(silent=True)
par_sim.run_simulation(processors=2, silent=True)
Open the mesh NetCDF dataset with the xugrid library and access the layered (layer 1) head variable data:
import os
import xugrid as xu
mesh_nc_fpth = os.path.join("split_strip", "strip_0.nc")
mesh_ds = xu.open_dataset(mesh_nc_fpth)
mesh_xds = mesh_ds.ugrid.to_dataset()
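# the layered head variable is exposed as "head_l1" (layer 1), with time as
# the leading dimension; it is used in the comparison below
head_l1 = mesh_xds["head_l1"]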
Open the structured NetCDF dataset with the xarray library and access the fully gridded head variable data:
import xarray as xa
struct_nc_fpth = os.path.join("split_strip", "strip_1.nc")
struct_xds = xa.open_dataset(struct_nc_fpth)
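# the structured head variable spans the full grid with dimensions
# (time, layer, row, col)
head = struct_xds["head"]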
Combine the heads from the single layer and row and compare them to the reconstructed heads:
col_heads = np.concatenate((mesh_xds["head_l1"][0].data, struct_xds["head"][0].data[0][0]))
assert np.allclose(reconstructed_head[0][0], col_heads) # True
Recent classes on MODFLOW 6 have included many of the new features that come with the extended edition of the software. The presentation material and the exercises are a good starting point for learning how to include the advanced functionality of the extended edition into your simulations.
- Nov 2024: 1-day advanced MODFLOW 6 class in Delft, Netherlands - NetCDF, Parallel GWF and GWT
- Sep 2024: 5-day class Advanced Modeling of Groundwater Flow (GW3099) in Boise, ID - NetCDF, Parallel GWF
- May 2024: 2-day MODFLOW 6 and FloPy class at the 2024 MODFLOW and More Conference in Princeton, NJ - NetCDF, Parallel GWF
- Nov 2023: 1-day parallel MODFLOW 6 class in Delft, Netherlands - Parallel GWF
- Jul 2023: 2-day parallel MODFLOW 6 class in Lakewood, CO - Parallel GWF
Basic examples scripted in FloPy, mostly consisting of synthetic models, are part of the autotest suite of MODFLOW 6 and can be a useful reference. The parallel test cases follow the file pattern test_par_*.py and the NetCDF test cases the pattern test_netcdf_*.py; both can be found in the autotest folder. A sketch of how to run a selection of these tests is given below.
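For example, to run only the parallel test cases from a local MODFLOW 6 checkout (a sketch; it assumes pytest and the suite's test dependencies, including the parallel build itself, are available in your environment):

```python
import pytest

# select tests by the test_par naming pattern used in the autotest folder
pytest.main(["-v", "-k", "test_par", "autotest"])
```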
Parallel MODFLOW has been designed as a generic component that is targeted to be used for all hydrologic models in the MODFLOW 6 simulator. It is nearly feature complete: every package and formulation in the serial simulation is available in a parallel setup. The few exceptions to this rule are documented here. The currently supported process models are GWF, GWT, and GWE.
- The parallelization is based on domain decomposition: the simulated domain can be split over multiple models and run in parallel. It is not possible to run multiple instances of a model type on the same spatial domain in parallel, as would be the case when simulating the transport of multiple species on the same domain, for example.
- The GNC package is not supported on GWF-GWF exchanges that connect two models running in parallel. A more convenient alternative is to set the XT3D option to improve the flow calculation at the model interface.
- The mover package in the GWT-GWT exchange (MVT) to move solute between models is not yet supported.
- The mover package in the GWE-GWE exchange (MVE) to move energy between models is not yet supported.
- ...
NetCDF I/O is an active development effort in MODFLOW 6 and is currently supported for DIS- and DISV-based GWF, GWT, and GWE models. Please report any issues you find; requests for enhancements will be considered.
- FloPy is not currently able to create MODFLOW 6 input files that support griddata variable NetCDF reads. Refer to the MODFLOW 6 Description of Input and Output (mf6io.pdf) for information related to updating FloPy-generated input files to support your NetCDF use case. Work is underway to bring FloPy into full alignment with MODFLOW 6 NetCDF I/O capabilities.
- NetCDF outputs of input griddata arrays are limited to package types supported by the Input Data Processor (IDP). See the Description of Input and Output (mf6io.pdf) for additional details.
- MODFLOW 6 NetCDF outputs are intended to optionally support visualization in some post-processing desktop tools. Limited testing has been done in QGIS, Panoply and ArcGIS. Other tools may not be supported.
- Angle of rotation is not yet taken into account in NetCDF outputs.
- ...