
training #48

Open
zw-92 opened this issue Aug 20, 2024 · 2 comments

zw-92 commented Aug 20, 2024

Hello, I want to do some fine-tuning based on your trained model and my own dataset. Since my data has no labels, I want to replace the coco20k dataset with my own dataset, or add some of my own data to it. Do you recommend "full training" or "training with existing weights"? Do I need to modify the training hyperparameters and strategies? Looking forward to your reply.

@guipotje (Collaborator)

Hello @zw-92, thank you for your interest in our work!

I believe you should test both approaches, but I recommend starting with fine-tuning just to see whether XFeat improves on your image distribution. This should be faster, use less data, and give you some experience training the model. Then you could do a from-scratch training, since the model is quite small and you don't need high-end hardware. Regarding hyperparameters, the defaults are a good starting point; you can then test variations based on the training dynamics.
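
A minimal sketch of the "training with existing weights" route, assuming the backbone class is `modules.model.XFeatModel`, the released checkpoint is a plain state_dict (e.g. `weights/xfeat.pt`), and that you reuse the repo's loss on your own unlabeled image pairs; the dataloader and loss call below are placeholders, not the repo's actual training script:

```python
# Fine-tuning sketch (assumptions: XFeatModel lives in modules/model.py and
# the released checkpoint is a state_dict at weights/xfeat.pt -- adjust to the repo).
import torch
from modules.model import XFeatModel  # assumed import path

device = "cuda" if torch.cuda.is_available() else "cpu"
net = XFeatModel().to(device)

# Start from the released weights instead of a random init.
net.load_state_dict(torch.load("weights/xfeat.pt", map_location=device))

# A learning rate below the from-scratch default is a common choice for fine-tuning.
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

net.train()
for batch in my_dataloader:                   # placeholder: your own image pairs/crops
    loss = compute_training_loss(net, batch)  # placeholder: reuse the repo's loss terms
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```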


zw-92 commented Aug 30, 2024

Thank you for your reply. I trained on MegaDepth and coco20k with the default parameters, but when tested on the same scene against the weights you provided, the average number of matched feature points differs by about 300 (roughly 1500 vs. 1200). Do you know why this might be? I also fine-tuned from the original weights with the default parameters, but the results were mediocre. After reducing the learning rate, the average number of matches began to approach the results of the original weights, occasionally even surpassing them by about 100 matches. However, I lack ground-truth data, which makes it difficult to assess the quality. Do you have any suggestions?
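
Without ground truth, one common proxy is the RANSAC inlier ratio on the same image pairs, compared across the two weight sets; a higher inlier ratio (rather than a higher raw match count) suggests more geometrically consistent matches. A minimal sketch, assuming you already have the matched keypoints (e.g. from the model's matching call) as Nx2 arrays:

```python
import cv2
import numpy as np

def ransac_inlier_stats(mkpts0, mkpts1, thr=1.0):
    """Fraction and count of matches consistent with a single epipolar geometry."""
    if len(mkpts0) < 8:
        return 0.0, 0
    pts0 = np.asarray(mkpts0, dtype=np.float32)
    pts1 = np.asarray(mkpts1, dtype=np.float32)
    _, mask = cv2.findFundamentalMat(pts0, pts1, cv2.FM_RANSAC, thr, 0.999)
    if mask is None:
        return 0.0, 0
    inliers = int(mask.sum())
    return inliers / len(pts0), inliers

# Run the same image pairs through both checkpoints and compare:
# ratio, inl = ransac_inlier_stats(mkpts0, mkpts1)
```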
