
Generate Training data #6

Open
heiheishuang opened this issue Oct 30, 2023 · 3 comments

Comments

@heiheishuang

Hello, thank you very much for your code. I am very interested in your work and have tried using evaluate.py to generate nuScenes training data. However, I have noticed that it is rather slow (only one GPU is in use). Is there any solution?

I tried to modify the code to use multiple GPUs, but failed: all GPUs end up processing the same scene. How should I modify the code? Looking forward to your reply!

@heiheishuang
Author

Hello, I often get 'IndexError: list index out of range' when generating nuScenes training data. Usually the traceback looks like this:
[screenshot: Screenshot from 2023-10-31 14-18-01]
Is this normal?

It looks like this happens because 'scene_depth_results' is empty. This is the code that updates 'scene_depth_results':
[screenshot: Screenshot from 2023-10-31 14-21-35]
How should I modify this code?

@AronDiSc
Collaborator

AronDiSc commented Nov 3, 2023

Hi @heiheishuang , thank you for your interest in our work and sorry for the delayed response.

About the speed of generating the data: I used a single GPU to generate the data and it took a couple of days. If you have multiple GPUs available, you could modify the code so that you can specify which GPU should be used. Then you could divide all nuScenes scenes into n split files and n config files, where each pair would look as follows:

nuScenes_split_gpu0.txt:

scene-0000
scene-0001
...

dataset_generation_nuscenes_gpu0.yaml:

...
datasets:
  validation:
     split: [nuScenes_split_gpu0.txt]
...

You could then run the script multiple times simultaneously, once per available GPU (assuming you also have sufficient memory):

python evaluate.py --config <path_to_config>/dataset_generation_nuscenes_gpu0.yaml
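If it helps, building the per-GPU launch commands can be scripted; here is a minimal sketch under the assumptions described above (per-GPU config file names and pinning via CUDA_VISIBLE_DEVICES are conventions of this sketch, not features of the repo):

```python
# Sketch: build one evaluate.py command per GPU, each pinned to a single
# device via CUDA_VISIBLE_DEVICES and pointed at its own per-GPU config.
# Adapt the file names to your layout.

def make_commands(num_gpus):
    cmds = []
    for i in range(num_gpus):
        cfg = f"dataset_generation_nuscenes_gpu{i}.yaml"
        cmds.append(f"CUDA_VISIBLE_DEVICES={i} python evaluate.py --config {cfg}")
    return cmds

for cmd in make_commands(2):
    print(cmd)  # run these in parallel shells, e.g. append ' &' in bash
```

Each process then only sees its assigned device, so the per-GPU split files keep the scenes disjoint.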

About the error you've encountered: Sorry for the trouble this caused. For the specific scenes where this error occurs, the algorithm fails to produce any reasonable scene reconstruction (e.g. because the scene is static), and thus no estimations are accumulated. This in itself is not an issue, because we do not want to train on such sequences anyway. It can be fixed by simply changing L174 in eval_dataset.py to ...

if 'depth' in self.metrics and len(scene_depth_results) >= 1:

... i.e. we skip evaluation for scenes where we do not have any predictions. I've changed the code accordingly.
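The guard pattern described above can be sketched as follows (the function name and the aggregation by mean are illustrative assumptions mirroring the thread's variable names, not the repo's actual code):

```python
import numpy as np

def aggregate_depth_metrics(scene_depth_results, metrics=("depth",)):
    # Guard: only aggregate when at least one prediction was accumulated
    # for this scene; otherwise return None so the caller can skip it.
    if "depth" in metrics and len(scene_depth_results) >= 1:
        return np.mean(np.stack(scene_depth_results), axis=0)
    return None  # no predictions (e.g. static scene) -> skip evaluation

print(aggregate_depth_metrics([]))                      # None: scene skipped
print(aggregate_depth_metrics([np.array([1.0, 2.0]),
                               np.array([3.0, 4.0])]))  # [2. 3.]
```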

Note about NuScenes training: If you generate the data for NuScenes from scratch I suggest you remove L138 in NuScenesDataset.py. We actually had a bug in our training where the intrinsics we used for NuScenes were slightly off. Thus this line is there to get the same results as in the paper. You might be able to achieve better results by using the correct NuScenes intrinsics.

@VergettLee

Hello, I have modified the code to: if 'depth' in self.metrics and len(scene_depth_results) >= 1:
But a new error has appeared:
[screenshot: QQ image from 2024-03-09]
I'm training on the DDAD dataset and encountered an error similar to the one shown in the image. Initially, I suspected that scene 000199 in my dataset had been downloaded incorrectly, so, as shown in the image, I used scene 000198 as the last one, but I still encountered the error 'IndexError: index 3906 is out of bounds for dimension 0 with size 3900'. How should I modify the code?
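One generic way to narrow down an out-of-bounds error like the one above is to validate the indices before the gather, so that the offending index and the tensor size are reported together. This is a hedged diagnostic sketch, not a fix from the maintainers; safe_gather and its usage are purely illustrative:

```python
import numpy as np

def safe_gather(values, indices):
    # Check all indices against the first dimension before gathering, so a
    # mismatch (e.g. a frame list longer than the tensor it indexes) is
    # reported with both the bad indices and the actual size.
    indices = np.asarray(indices)
    bad = indices[(indices < 0) | (indices >= len(values))]
    if bad.size:
        raise IndexError(
            f"indices {bad.tolist()} out of bounds for dimension 0 "
            f"with size {len(values)}"
        )
    return values[indices]
```

Wrapping the failing lookup this way would at least show whether the index source (e.g. a per-scene frame list) disagrees with the length of the accumulated tensor.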
