
descriptor loss #78

Open
Hjy0801 opened this issue Mar 15, 2023 · 8 comments

@Hjy0801

Hjy0801 commented Mar 15, 2023

Hello, when I trained the full pipeline, the descriptor loss did not decrease; it stayed at about 1 the whole time. Why is this? Can you help me? Thank you.

@rpautrat
Member

Hi, can you share the TensorBoard curves of the different losses? Are the other losses decreasing or not?
Also, on which dataset are you training?

@Hjy0801
Author

Hjy0801 commented Mar 15, 2023 via email

@rpautrat
Member

Did you change anything in the code of the main branch?
Did you try visualizing the exported dataset with this notebook to check that everything was fine?

@Hjy0801
Author

Hjy0801 commented Mar 15, 2023

I didn't change any code, and I checked the exported dataset. I ran the command from step 5 and used sold2_synthetic.tar as the pre-trained model.
[screenshot: training curves; descriptor_loss stays at 1]
This is the training process; the descriptor_loss is always 1.
[screenshot: visualization of the exported training dataset]
This is the training dataset.

@rpautrat
Member

The exported dataset looks good.

Did you modify the field 'return_type' in config/wireframe_dataset.yaml to 'paired_desc', as requested in the README?
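
If it helps, here is a minimal sanity-check sketch (not part of the repo; I am assuming 'return_type' sits at the top level of config/wireframe_dataset.yaml, adjust the lookup if your config nests it deeper):

```python
import yaml

# Minimal sanity-check sketch: confirm the dataset config requests paired
# descriptors before launching step 5. Assumes 'return_type' is a top-level
# key of the YAML file; adjust if it is nested differently in your setup.
with open("config/wireframe_dataset.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg.get("return_type"))  # expected: 'paired_desc' for descriptor training
```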

@Hjy0801
Author

Hjy0801 commented Mar 15, 2023

Yes, I did. When I use sold2_wireframe.tar as the pre-trained model, the descriptor_loss is very low and changes normally.
[screenshot: training curves with sold2_wireframe.tar as the pre-trained model]

@rpautrat
Member

In our original work, we first did step 4 of the README before proceeding to step 5. But as I understand it, you skipped step 4 (i.e. training the detector only on the Wireframe dataset), right? Maybe that is why the descriptor loss is stuck: you need to pre-train the network with step 4, then fine-tune it with the descriptor in step 5.

The value of 1 displayed in your descriptor loss is basically the margin used in the triplet loss. In my experience, when training a network with a triplet loss, it is common for the descriptor loss to stay stuck at the margin value (1 in your case) for a while. But if you keep training, the network eventually manages to improve and go below this margin value.
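
To illustrate why the loss sits exactly at the margin, here is a small hypothetical PyTorch sketch (not SOLD2's actual loss code): when the network outputs the same descriptor for every sample, the positive and negative distances are equal and the triplet loss reduces to the margin.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: a degenerate network maps every patch to the same
# unit descriptor, so d(anchor, positive) == d(anchor, negative) and the
# triplet loss max(d_pos - d_neg + margin, 0) evaluates to the margin.
margin = 1.0
desc = F.normalize(torch.ones(8, 128), dim=1)   # identical descriptors
anchor, positive, negative = desc, desc, desc

loss = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
print(loss.item())  # ~1.0 -> stuck at the margin

# As soon as d(anchor, negative) grows larger than d(anchor, positive),
# the loss drops below the margin, which is what you should see with
# longer training after the step-4 pre-training.
```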

So, as a conclusion, I would first pre-train the network with step 4, then try step 5 and train long enough to give the network time to go below the margin value.

@Hjy0801
Author

Hjy0801 commented Mar 15, 2023

Thanks for your explanation. I will do step 4 and then step 5. Have a good week!
