Visualizing Large AMR Datasets #19704
-
Howdy! I am trying to use VisIt to visualize data from an in-house block-based adaptive mesh refinement (AMR) CFD code in parallel. Each meshblock is a uniform Cartesian grid with a fixed number of cells (32^3 cells in 3D, 64^2 cells in 2D). The data has about 40 scalars and around a million meshblocks, usually resulting in 10-100 billion cells. My question is the following: has anyone successfully visualized large datasets from an AMR-based code in VisIt with millions of meshblocks/grids? If yes, can you please suggest the data format in which the output data is written and the VisIt plugin used to visualize such large datasets in parallel?
Replies: 3 comments
-
Let me just share my own experience (as a user) on more modestly sized datasets: the performance of VisIt seems to drop rather drastically as the number of mesh blocks increases. For example, writing a 1024^2 grid as 256^2 blocks of 2^2 cells each will be many times slower to load than writing it as a single 1024^2 block. This is an issue I am also struggling with when loading AMR data into VisIt, and the only somewhat satisfactory solution I have found is to reduce the number of mesh blocks by merging adjacent ones into larger blocks.
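To illustrate the merging idea above, here is a minimal sketch (in Python with NumPy) of stitching a 2x2 arrangement of adjacent uniform blocks into one larger block before writing. The function name and block layout are hypothetical, not from any particular code; real AMR merging would also need to check that neighbors share a refinement level.

```python
import numpy as np

def merge_blocks_2x2(blocks):
    """Merge a 2x2 nested list of equally sized 2-D arrays into one array.

    blocks[0][0] is the top-left block, blocks[1][1] the bottom-right.
    """
    top = np.hstack([blocks[0][0], blocks[0][1]])
    bottom = np.hstack([blocks[1][0], blocks[1][1]])
    return np.vstack([top, bottom])

# Four 2x2 blocks tiling a 4x4 region, each filled with its own block id:
b = [[np.full((2, 2), i * 2 + j) for j in range(2)] for i in range(2)]
merged = merge_blocks_2x2(b)
print(merged.shape)  # (4, 4)
```

Applied recursively, this turns four small blocks into one, cutting the block count the reader plugin has to manage by a factor of four per pass.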
-
Howdy! Thanks for your response. I figured out that if we use Chombo-based output, we are able to visualize large datasets with millions of meshblocks. We tried up to 10 million meshblocks with 20 scalar outputs in a Chombo HDF5 file, and VisIt (v3.3.3) is able to open it in parallel and render it.
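For anyone trying something similar, a schematic sketch of writing per-block AMR data into a single HDF5 file with h5py follows. Note this is NOT the actual Chombo layout (which prescribes specific group names, box descriptors, and attributes; consult the Chombo HDF5 documentation for those), only an illustration of the h5py calls involved; all group and attribute names here are made up.

```python
import h5py
import numpy as np

# Tiny stand-ins for the real scale (millions of 32^3 blocks).
nblocks, n = 4, 32

# Write one group per block, each holding its origin and a scalar field.
with h5py.File("amr_blocks.h5", "w") as f:
    f.attrs["num_blocks"] = nblocks
    for b in range(nblocks):
        grp = f.create_group(f"block_{b}")
        grp.attrs["origin"] = np.array([b * n, 0, 0])  # hypothetical metadata
        grp.create_dataset("density", data=np.random.rand(n, n, n))

# Read back the metadata and one block to verify the layout.
with h5py.File("amr_blocks.h5", "r") as f:
    print(int(f.attrs["num_blocks"]), f["block_0/density"].shape)
```

The practical point is that all blocks live in one file, so a parallel reader can partition the block list across ranks instead of opening millions of separate files.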
-
I'm glad that you were able to get things working and performing reasonably well. To your original question about which plugin to use, I would say that any plugin that supports parallel reading of data should be sufficient, and many of our plugins fall into that category. It sounds like you found an acceptable one to use. As far as performance when looking at large datasets, that can often come down to the number of nodes and processors you ask VisIt to use. You generally don't want more processors than there are problem domains, so I would recommend scaling up the node count while keeping the processor count constant. You can also turn on scalable rendering mode if it isn't already on, although I would expect it to be on by default for large datasets like you describe. If you're doing filled boundary plots, you can choose to simplify heavily mixed zones to improve performance. There are other tips and tricks for improving performance as well.