I'm in the process of adding support for openPMD output to our block-structured AMR code https://github.com/parthenon-hpc-lab/parthenon. In light of the open PR on the standard for mesh refinement (openPMD/openPMD-standard#252), I'm wondering what the current best practice is (also with regard to achieving good performance at scale).
In our case, each rank owns a variable number of meshblocks (though their size is fixed), with a (potentially variable) number of variables and at varying refinement levels (depending on the chosen ordering of blocks).
The most straightforward approach would be to create one mesh record per block and variable.
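For concreteness, here is a minimal sketch of that per-block approach against the openPMD-api C++ interface; the record naming scheme, the custom `refinementLevel` attribute, and the 16³ block extent are my own assumptions, not anything prescribed by the standard:

```cpp
#include <openPMD/openPMD.hpp>
#include <memory>
#include <string>
#include <vector>

// One scalar mesh record per (block, variable) combination, each carrying
// its own grid spacing and origin.
void writeBlock(openPMD::Iteration &it, std::string const &var, int blockId,
                int level, std::vector<double> const &origin, double dx,
                std::shared_ptr<double> data)
{
    auto mesh = it.meshes[var + "_block_" + std::to_string(blockId)];
    mesh.setGridSpacing(std::vector<double>{dx, dx, dx});
    mesh.setGridGlobalOffset(origin);
    mesh.setAttribute("refinementLevel", level); // custom, non-standard attribute

    auto mrc = mesh[openPMD::MeshRecordComponent::SCALAR];
    mrc.setPosition(std::vector<double>{0.5, 0.5, 0.5}); // cell-centered data
    mrc.resetDataset(openPMD::Dataset(openPMD::Datatype::DOUBLE, {16, 16, 16}));
    mrc.storeChunk(data, {0, 0, 0}, {16, 16, 16}); // whole block as one chunk
}
```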
Alternatively, I imagine pooling records by level (so that the coordinate information, i.e. the grid spacing `dx`, is shared) to increase the size of the output buffer.
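A sketch of that pooled variant, under the assumption that all blocks of one level go into a single dataset spanning the level's global index space (the `Block` helper struct, the names, and the 16³ block size are again made up; note also that such a dataset is only sparsely covered by actual blocks):

```cpp
#include <openPMD/openPMD.hpp>
#include <memory>
#include <string>
#include <vector>

// Hypothetical per-block bookkeeping: the block's offset within the
// level's global index space plus a pointer to its data.
struct Block
{
    openPMD::Offset offset;
    std::shared_ptr<double> data;
};

// One record per (variable, level); all blocks of a level share dx.
void writeLevel(openPMD::Iteration &it, std::string const &var, int level,
                double dx, openPMD::Extent const &levelExtent,
                std::vector<Block> const &myBlocks)
{
    auto mesh = it.meshes[var + "_lvl" + std::to_string(level)];
    mesh.setGridSpacing(std::vector<double>{dx, dx, dx});

    auto mrc = mesh[openPMD::MeshRecordComponent::SCALAR];
    // Declare the level-wide dataset on all ranks (dataset creation is
    // collective in the HDF5 backend), even on ranks owning no blocks here.
    mrc.resetDataset(openPMD::Dataset(openPMD::Datatype::DOUBLE, levelExtent));

    // Each rank then contributes only the chunks for the blocks it owns.
    for (auto const &b : myBlocks)
        mrc.storeChunk(b.data, b.offset, {16, 16, 16});
}
```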
Are there other approaches/recommendations?
And what's the impact on performance when we write one chunk per block (which at the API level would effectively be a serial "write", as each block/variable combination is unique)?
Are the actual writes on flush optimized/pooled/...?
For reference, our current HDF5 output wrote the data of all blocks in parallel for each variable, at the corresponding offsets.
The coordinate information was stored separately, so that this large output buffer didn't need to handle differing `dx`.
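Translated to openPMD-api, that layout would roughly correspond to one dataset per variable with all blocks stacked along a leading block-index dimension; a sketch, assuming each rank's blocks are contiguous in the global block ordering (names and the 16³ block size are placeholders):

```cpp
#include <openPMD/openPMD.hpp>
#include <cstdint>
#include <memory>
#include <string>

// All blocks of one variable stacked along a leading "block index"
// dimension; coordinate/level information would live in separate records,
// as in our current HDF5 layout.
void writeAllBlocks(openPMD::Iteration &it, std::string const &var,
                    std::uint64_t nBlocksGlobal, std::uint64_t myFirstBlock,
                    std::uint64_t myNumBlocks, std::shared_ptr<double> myData)
{
    auto mrc = it.meshes[var][openPMD::MeshRecordComponent::SCALAR];
    mrc.resetDataset(openPMD::Dataset(openPMD::Datatype::DOUBLE,
                                      {nBlocksGlobal, 16, 16, 16}));
    // One contiguous write per rank, analogous to the collective HDF5 write.
    mrc.storeChunk(myData, {myFirstBlock, 0, 0, 0},
                   {myNumBlocks, 16, 16, 16});
}
```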
This approach is currently not compatible with the openPMD standard for meshes with varying `dx`, as each record is tightly coupled to fixed coordinates, correct?
Thanks,
Philipp
Software Environment:
- version of openPMD-api: 0.15.2
- installed openPMD-api via: from source
Since I had discussed this with @ax3l a while ago regarding both Quokka and Parthenon, he might be able to chime in, particularly on what the analysis tools assume about AMR ;)