descriptor loss #78
Hi, can you share the TensorBoard curves of the different losses? Are the other losses decreasing or not? Also, on which dataset are you training?
Sorry, I didn't save the TensorBoard curves. I only trained for several hours, using sold2_synthetic.tar as the pre-trained model, and I trained on the Wireframe dataset (exported). The other losses decrease, but the descriptor loss does not.
The full training process takes too much time, so I didn't train it fully. I want to know whether this behavior is expected.
Did you change anything in the code of the main branch?
The exported dataset looks good. Did you modify the field 'return_type' in config/wireframe_dataset.yaml to 'paired_desc', as requested in the README?
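For reference, a minimal sketch of that check (assuming the standard PyYAML library; the exact nesting of keys inside SOLD2's config file may differ, so treat the key path here as illustrative):

```python
import yaml

# Load the wireframe dataset config used for descriptor training.
with open("config/wireframe_dataset.yaml", "r") as f:
    cfg = yaml.safe_load(f)

# For the descriptor training stage, 'return_type' should be 'paired_desc'
# so the dataset returns the paired data the descriptor loss needs.
# (Key path is assumed flat here; adjust if it is nested in your config.)
return_type = cfg.get("return_type")
print("return_type =", return_type)
assert return_type == "paired_desc", (
    "Set return_type to 'paired_desc' in config/wireframe_dataset.yaml"
)
```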
In our original work, we first did step 4 of the README before proceeding to step 5. But as I understand it, you skipped step 4 (i.e. training the detector only on the Wireframe dataset), right? Maybe that is why the descriptor loss is stuck: you need to pre-train the network with step 4, then fine-tune it with the descriptor in step 5. The value of 1 displayed in your descriptor loss is basically the margin used in the triplet loss. In my experience, when training a network with a triplet loss, it is common for the loss to stay stuck at the margin value (1 in your case) for a while. But if you keep training, the network eventually manages to improve and go below this margin. So, as a conclusion, I would first pre-train the network with step 4, then run step 5 and train long enough to give the network time to go below the margin value.
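To illustrate the point about the margin, here is a generic triplet-loss sketch in PyTorch (this is not SOLD2's actual descriptor loss code, just a toy example with a margin of 1): when the descriptors are still uninformative, the anchor-positive and anchor-negative distances are roughly equal, so the loss sits at the margin; once positives get closer to the anchor than negatives, the loss drops below it.

```python
import torch
import torch.nn.functional as F

margin = 1.0  # same value as the plateau discussed above
triplet = torch.nn.TripletMarginLoss(margin=margin)

# Uninformative descriptors: anchor, positive and negative are all random
# unit vectors, so d(a, p) ~= d(a, n) and the loss averages to ~margin.
a = F.normalize(torch.randn(512, 128), dim=1)
p = F.normalize(torch.randn(512, 128), dim=1)
n = F.normalize(torch.randn(512, 128), dim=1)
print(triplet(a, p, n).item())  # ~= 1.0, i.e. stuck at the margin

# Informative descriptors: positives are close to the anchor while
# negatives stay far, so the loss drops well below the margin.
p_good = F.normalize(a + 0.05 * torch.randn_like(a), dim=1)
print(triplet(a, p_good, n).item())  # close to 0.0
```

So a descriptor loss hovering around 1 just means the network has not yet learned to separate positives from negatives, which is expected early in training (or when the detector pre-training of step 4 was skipped).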
Thanks for your explanation. I will do step 4 and then step 5. Have a good week!
Hello, when I trained the full pipeline, the descriptor loss did not decrease; it stayed at about 1 the whole time. Why is this? Can you help me? Thank you.