Questions about repeatability metric for line segment detection #90

Open
LukeJia47 opened this issue Jan 9, 2024 · 8 comments

@LukeJia47

Hi! I modified the SOLD2 training code into a multi-GPU version. The repeatability of the model trained on the synthetic dataset is similar to, or even better than, that of your publicly released sold2_synthetic.tar. However, the repeatability of the model trained on the Wireframe dataset is much lower than that of your publicly released sold2_wireframe.tar. I am not sure where the issue lies. Could you please help me check whether it is a problem with my training approach?
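
My change follows the standard PyTorch DistributedDataParallel pattern, roughly along the lines of this sketch (not my actual SOLD2 modification; the model and dataset here are placeholders):

```python
# A minimal sketch of standard PyTorch DistributedDataParallel wrapping,
# roughly the kind of change described above; the model and dataset are
# placeholders, not the actual SOLD2 training code.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler

def wrap_for_ddp(model, dataset, batch_size, local_rank):
    """Prepare one process of a multi-GPU run (launched e.g. via torchrun)."""
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)
    model = DDP(model.cuda(local_rank), device_ids=[local_rank])
    # DistributedSampler shards the data across processes. Note that the
    # effective batch size becomes batch_size * world_size, which can
    # change the training dynamics compared to a single-GPU run.
    sampler = DistributedSampler(dataset, shuffle=True)
    loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler)
    return model, loader
```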

First, I trained the model with all loss weights fixed to 1. On the synthetic dataset, I trained for 30 epochs on two RTX 2080 Ti GPUs with a batch size of 16, keeping the other settings unchanged. The Rep-5 metric under structural distance, measured on the Wireframe dataset, was approximately 0.351; in comparison, the publicly released sold2_synthetic.tar achieved 0.300 on our machine.
For training the detector on the real Wireframe dataset, I used a batch size of 10 with the other settings unchanged. The model was fine-tuned for 200 epochs on two GPUs; an intermediate checkpoint around epoch 90 achieved the highest metric, approximately 0.508. However, your publicly released sold2_wireframe.tar achieved 0.587 on our machine.

Next, I trained with learnable loss weights. On the synthetic dataset, with the same settings as above, the resulting metric was approximately 0.315. On the real Wireframe dataset, it was 0.505.

Could you please help me identify where the issue lies in the training approach and suggest ways to improve the repeatability metric of the model?

@rpautrat
Member

Hi, in your training runs, did you train the full model (descriptors + detector), or the detector only? Training with the descriptors can slightly decrease the performance of the detector.

Furthermore, did you activate the optional candidate suppression (flag "use_candidate_suppression" in the export config) or not?

Finally, did you tune the probability threshold of the line heatmap? This should probably be adjusted for your newly trained model.
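
For a quick check, assuming the export config is a YAML file such as export_line_features.yaml, something like this sketch can locate the flag (the exact nesting of the key in the config tree is an assumption):

```python
# A hedged way to inspect the export settings; the config path and the
# position of the key in the YAML tree are assumptions.
import yaml

def find_key(node, key):
    """Recursively look up a key anywhere in a nested config dict."""
    if isinstance(node, dict):
        if key in node:
            return node[key]
        for value in node.values():
            hit = find_key(value, key)
            if hit is not None:
                return hit
    return None

with open("config/export_line_features.yaml") as f:
    cfg = yaml.safe_load(f)

print("use_candidate_suppression:", find_key(cfg, "use_candidate_suppression"))
```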

@LukeJia47
Author

Thank you for your reply. Yes, after training the detector with fixed loss weights, I further fine-tuned the full model (descriptors + detector) with a batch size of 4 for 20 epochs. The resulting metric is 0.515.

Yes, I activated candidate suppression (I kept the default setting for this option in the export config, which is True).

No, I haven't made any adjustments. Are you referring to modifying this parameter in "train_detector.yaml" or "export_line_features.yaml", and would changing it have an impact on training? Could you please guide me on how to modify this parameter?

@rpautrat
Member

Good, then you can indeed modify the parameter "prob_thresh" inside export_line_features.yaml to tune it to your model; no retraining is needed. Ideally, you should do this tuning on a validation set that is different from the test set.
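
For example, a minimal sketch of such a sweep, writing one config variant per candidate value and then exporting and evaluating each on the validation set (where exactly the key sits in the YAML tree is an assumption):

```python
# Tune the threshold without retraining: dump one config per candidate
# value, then run the export and evaluation with each. The key name comes
# from this thread (a later reply corrects it to "detect_thresh"), and
# its position in the config tree is an assumption.
import yaml

with open("config/export_line_features.yaml") as f:
    base_cfg = yaml.safe_load(f)

for thresh in (0.3, 0.4, 0.5, 0.6, 0.7):
    cfg = dict(base_cfg)
    cfg["prob_thresh"] = thresh  # assumed to be a top-level key
    with open(f"config/export_thresh_{thresh}.yaml", "w") as f:
        yaml.safe_dump(cfg, f)
```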

@LukeJia47
Author

LukeJia47 commented Jan 11, 2024

Thank you for your reply. Where is this parameter actually used? I noticed that it doesn't seem to be used when evaluating the repeatability metrics in the "sold2_dev_code" you shared earlier, nor in the detection performed by LineDetector in line_detector.py.

Oh, by the way, could you share some details about your experiments in Steps 4 and 5?

@rpautrat
Member

Sorry, I meant the parameter "detect_thresh", which is used in line_detection.py. You can tune it at test time when exporting the lines.

What exactly do you want to know about steps 4 and 5?

@LukeJia47
Author

Hi, I used different combinations of "detect_thresh" and "inlier_thresh" from the "sold2_dev_code" you shared to measure the metrics and selected the best result among them. Therefore, I believe the issue might be with the model itself.
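
The sweep was along these lines (a sketch only; export_lines and compute_rep5 are hypothetical stand-ins for the actual export and evaluation steps of sold2_dev_code, to be replaced with the real calls):

```python
# Grid search over the two thresholds, keeping the best Rep-5 score.
# export_lines and compute_rep5 are hypothetical placeholders for the
# real export and repeatability-evaluation steps of sold2_dev_code.
from itertools import product

def export_lines(detect_thresh, inlier_thresh):
    raise NotImplementedError("replace with the real line-export call")

def compute_rep5(lines):
    raise NotImplementedError("replace with the real Rep-5 evaluation")

detect_threshs = [0.3, 0.4, 0.5, 0.6]
inlier_threshs = [0.5, 0.7, 0.9]

best_score, best_params = -1.0, None
for dt, it in product(detect_threshs, inlier_threshs):
    rep5 = compute_rep5(export_lines(dt, it))
    if rep5 > best_score:
        best_score, best_params = rep5, (dt, it)

print(f"best Rep-5 {best_score:.3f} at detect_thresh={best_params[0]}, "
      f"inlier_thresh={best_params[1]}")
```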

I would like to understand the settings for learning rate and the number of epochs in Step 4 and Step 5. I am unsure if there is an issue with my training approach. Could you please share the details? Thank you.

@rpautrat
Member

We used a learning rate of 0.0005 and Step 5 was trained for 20 epochs (roughly 24h on a single GPU). But you can simply stop the training whenever the loss and metrics have converged. I don't have the numbers for Step 4 here, but I guess that it was roughly a similar number of epochs.
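
In code terms, a minimal sketch (the Adam optimizer is an assumption; only the learning rate and epoch count come from this reply):

```python
# Sketch reflecting the numbers above: learning rate 0.0005 for about
# 20 epochs. The optimizer type and the convergence check are
# assumptions; only the lr and epoch count are from this reply.
import torch

model = torch.nn.Linear(8, 8)  # placeholder for the SOLD2 model
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005)

for epoch in range(20):
    # ... run one training epoch here ...
    # Stop early once the loss and validation metrics have converged,
    # as suggested above.
    pass
```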

@LukeJia47
Author

Thank you very much for your response and assistance. I'll try training again.
