
about random_seed in voting strategy #16

Open
Roywangj opened this issue Nov 23, 2021 · 4 comments

Comments

@Roywangj
Hi, I wrote a voting_eval Python program. It loads the trained model and tests M times, but the result is the same every time. I changed the random seed, but that didn't help. Would you please release your voting_eval code? Thanks!

@tiangexiang
Owner

Hi, the voting script we used was identical to the one used in RSCNN, so we didn't keep a copy of the script itself.
It is really weird that the voting results are the same every time. Can you please make sure:

  1. the averaged results (rather than the argmax index) actually change across different trials?
  2. the inputs are actually different for each vote? Inputs should be translated or jittered randomly at each vote.

Since we essentially rely on randomness to produce different augmentations of the input data, and eventually different results, it is preferable not to fix a random seed.
Please keep me informed about this weird behavior!
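The voting procedure described in the two points above (augment the input randomly per vote, average the per-class scores, then take the argmax of the *average*) can be sketched as follows. This is a minimal illustration, not the actual RSCNN/CurveNet script; the function names, the scale range, and the model interface are assumptions.

```python
import numpy as np

def random_scale(points, lo=2/3, hi=3/2):
    # Random anisotropic scaling, drawn fresh for each vote.
    # The [2/3, 3/2] range is illustrative, not CurveNet's exact setting.
    return points * np.random.uniform(lo, hi, size=(1, 3))

def vote_predict(model_fn, points, num_votes=10):
    # Average per-class scores over `num_votes` randomly augmented copies,
    # then argmax the *averaged* scores -- not a majority vote of argmaxes.
    # `model_fn` is a hypothetical callable: (N, 3) points -> (num_classes,) logits.
    logits_sum = None
    for _ in range(num_votes):
        augmented = random_scale(points)
        logits = model_fn(augmented)
        logits_sum = logits if logits_sum is None else logits_sum + logits
    return int((logits_sum / num_votes).argmax())
```

If the averaged scores never change across votes, the augmentation is not being applied (or a fixed seed is resetting the RNG before each vote).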

@Roywangj
Author

I used different seeds and took the max result. Actually, I think `farthest = torch.randint(0, N, (B,), dtype=torch.long).to(device) * 0` is wrong (in models/curvenet_utils.py). Could you please check the code?

@tiangexiang
Owner

This is because, in FPS, we only want to start from the first index node (the 0th node) to eliminate any randomness. If this causes the problem in voting, maybe you can try removing the trailing `* 0`.
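To illustrate the point being discussed: the `* 0` zeroes out the random draw, so farthest point sampling always starts from index 0 and becomes deterministic; dropping it restores a random start per call. A minimal single-cloud NumPy sketch of FPS (not CurveNet's batched PyTorch implementation) makes the difference concrete:

```python
import numpy as np

def farthest_point_sample(xyz, npoint, deterministic=True):
    # xyz: (N, 3) point cloud; returns indices of `npoint` FPS-sampled points.
    N = xyz.shape[0]
    centroids = np.zeros(npoint, dtype=np.int64)
    distance = np.full(N, np.inf)
    # `torch.randint(0, N, ...) * 0` in curvenet_utils.py always yields 0,
    # i.e. a fixed start at the 0th point. `deterministic=False` mimics
    # what removing the `* 0` would do: a random start each call.
    farthest = 0 if deterministic else np.random.randint(N)
    for i in range(npoint):
        centroids[i] = farthest
        # Track each point's distance to its nearest already-chosen centroid,
        # and pick the point farthest from all chosen centroids next.
        dist = ((xyz - xyz[farthest]) ** 2).sum(axis=1)
        distance = np.minimum(distance, dist)
        farthest = int(distance.argmax())
    return centroids
```

With `deterministic=True`, repeated calls on the same cloud return identical indices, which is one reason a voting script that relies only on FPS randomness would produce identical votes.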

@Roywangj
Author

In addition, I tried to translate or jitter the input; it does output different results, but the results are very bad (about 5%). I checked the RS-CNN code and found that they use the same `new_fps_idx` in both training and testing, while CurveNet doesn't use it in training, but I used it in testing. Would you please explain in detail how you set up the translate or jitter process?
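For reference, translate and jitter augmentations of this kind are commonly implemented as small clipped Gaussian per-point noise plus a small global shift (PointNet-style defaults; the magnitudes below are typical assumptions, not CurveNet's confirmed settings). Augmentation that is too aggressive at test time can collapse accuracy, so the magnitudes matter:

```python
import numpy as np

def jitter_points(points, sigma=0.01, clip=0.05):
    # Add clipped Gaussian noise independently to every coordinate.
    # sigma/clip values are common PointNet-style defaults (assumed here).
    noise = np.clip(sigma * np.random.randn(*points.shape), -clip, clip)
    return points + noise

def translate_points(points, shift_range=0.2):
    # Apply one random global translation to the whole cloud.
    shift = np.random.uniform(-shift_range, shift_range, size=(1, 3))
    return points + shift
```

If results drop to near-chance with augmentation enabled, a likely culprit is noise far larger than these magnitudes, or augmenting normals/features that should be left untouched.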
