What GPU Specs matter? #2727
Replies: 3 comments
-
Probably not so helpful, but here are my thoughts:

I hope the 2025 version will be released within the next few weeks (progress is at 99%: https://github.com/alicevision/Meshroom/milestone/16). It will bring a lot of changes, including https://github.com/meshroomHub with new AI plugins, plus COLMAP and MicMac integration. In the past, tensor cores were not that relevant, but with the new AI plugins this may change.

There are no detailed benchmarks. I know a few papers that compared the computation speed of Meshroom with other software, but not CPU, GPU, and RAM utilisation during the individual steps or a comparison across different setups. In the current version (pre-2025 release), the GPU can be used in Feature Extraction and is heavily used in the DepthMap node.

Based on this post #806, running Meshroom on Linux may give up to a 20% performance boost vs running on Windows. I find this plausible, as I have had similar experiences with other software I run on Linux vs Windows, which usually comes down to Linux handling files faster. Since you are planning a render server, I assume you will be using Linux anyway.

For my version-to-version "benchmarks" alicevision/AliceVision#481 (comment) I use the Monstree dataset https://github.com/alicevision/dataset_monstree (mini6, to save time ;) ) on an i7 2.9 GHz, GTX 1070 8 GB, 32 GB RAM. My personal bottleneck is usually my CPU with 8 logical cores, and RAM depending on the number of images.

There are a handful of people running Meshroom on HPC/render farms, but only limited specs were shared and no performance stats. HPC: #1331

Most knowledge is outdated, including mine, since I don't have a workstation/HPC at hand.
-
Good questions! VRAM is the most important thing to look out for. Once the dataset fits into VRAM, the bandwidth becomes relevant. You mentioned GPUs with over 48 GB VRAM; when a GPU has a lot of VRAM it usually has enough CUDA cores, too. I personally have not seen more than 19 GB of VRAM allocated at 100% GPU usage, and that was with a brain-dead load test ...

Tensor cores, as natowi said, are currently not relevant. This may change with new AI/inference nodes in future releases.

At the danger of stating the obvious: consumer cards have display ports, which is a waste of power and running cost. Data center GPUs don't have display ports and are designed for 24/7 workloads. Also, the form factor of data center GPUs allows you to stack more of them into a server. Meshroom doesn't really make a difference here either way.

Meshroom does use multiple GPUs for a single job: it takes the number of available GPUs and starts a thread on each of them. If you start a second job, or container, the same happens, and the second job might crash when it cannot allocate enough memory. How many jobs you can run before running out of memory depends on your VRAM size and dataset.

Again at the danger of stating the obvious: you can orchestrate the GPU resources with k8s, run one Docker container per GPU, tell Meshroom (ngpu flag) to only use one GPU, make sure that the feature extraction node and the depth map node are not running at the same time, or just let the nodes crash and resume computation after the crash (the node architecture enables you to do so).

Good luck!
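As a hedged illustration of the one-process-per-GPU idea described above, here is a minimal Python sketch that pins each Meshroom CLI job to a single GPU via the standard CUDA_VISIBLE_DEVICES environment variable. The job paths, GPU indices, and the `meshroom_batch` invocation (command name and flags) are assumptions; check them against your installed Meshroom version and adapt them to your queue system.

```python
import os
import subprocess
from pathlib import Path

# Hypothetical job list: one image folder per reconstruction job.
JOBS = [Path("/data/jobs/scene_a"), Path("/data/jobs/scene_b")]

# GPU indices available on the host. In a real setup you would query
# nvidia-smi / NVML instead of hard-coding them.
GPU_IDS = [0, 1]


def launch(job_dir: Path, gpu_id: int) -> subprocess.Popen:
    """Start one Meshroom CLI job pinned to a single GPU.

    Pinning is done via CUDA_VISIBLE_DEVICES, so the process only sees
    that one device and cannot oversubscribe the others.
    """
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    # 'meshroom_batch' and its flags are assumptions; verify them against
    # the CLI of your installed Meshroom version.
    cmd = [
        "meshroom_batch",
        "--input", str(job_dir / "images"),
        "--output", str(job_dir / "output"),
    ]
    return subprocess.Popen(cmd, env=env)


# Naive scheduling: one job per GPU, then wait for all of them to finish.
procs = [launch(job, gpu) for job, gpu in zip(JOBS, GPU_IDS)]
for proc in procs:
    proc.wait()
```

In a dockerized setup the same idea applies per container: expose exactly one GPU to each container through the NVIDIA container runtime instead of setting CUDA_VISIBLE_DEVICES inside the host process.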
-
The next release will add support for various new methods via plugins, so you may want to consider the third-party tool prerequisites if you want to use them.
-
Hi everyone,
I'm currently planning to build a dedicated render server for Meshroom, which will run rendering jobs using Meshroom CLI in a dockerized queue-based system. I'm aiming for high-throughput photogrammetry processing (multiple jobs in parallel or larger single-scene reconstructions), and I'd like to make sure the GPUs I choose are well-suited for this use case.
I'm wondering what GPU specs matter most for Meshroom and AliceVision performance. Obviously, CUDA support is required, but I'm trying to dig deeper. Some specific questions I have:
- What GPU characteristics most impact Meshroom performance?
- Is there any advantage to using workstation cards like the RTX A6000, or would consumer GPUs like the RTX 5090/4090 perform just as well (or better) for Meshroom?
- Has anyone tested setups with multiple GPUs and job-level parallelization?
- Does anyone know of standardized Meshroom benchmarks or typical datasets people use to compare GPU performance?
I'd really appreciate any advice from people who’ve built similar setups or have hands-on experience benchmarking different cards. Also open to hearing if certain cards (e.g., RTX 5090 vs A6000 vs H100) behave better in real-world projects than synthetic benchmarks suggest.
Thanks in advance for your insights!