Hello,
I have 2 questions:
- I run out of CUDA memory every time I try to run this code on the Ficus dataset, even on A100s (80 GB). Do we need a different config for it?
- For the LLFF dataset, I tried the config from torch-ngp, but it doesn't give me consistent results. For example, on the Blender dataset every experiment trains for 100 epochs, but for LLFF training seems to run longer, and I couldn't find a single epoch count that works for all the LLFF scenes.
Could you please share the config/arguments you used for the Ficus and LLFF datasets?