Hello, I have the following questions and look forward to your answers.
1. Validation is run every 100 iterations and the model with the highest accuracy is selected to ensure reproducibility. Would reproducibility no longer be guaranteed if validation were run only every 2500 iterations? Have you done similar experiments? Thank you.
2. If early stopping is used instead of validating every 100 iterations, is the effect similar? Thank you.
3. For a custom dataset, is it best to split the target-domain data into three parts (training, validation, and testing), i.e., to modify the config as follows?
TRAIN_TARGET: ("cityscapes_foggy_train_cocostyle", "cityscapes_foggy_val_cocostyle"), TEST: ("cityscapes_foggy_test_cocostyle",)
Q1. Yes, the available results should be in the range of 42 to 44. The reported results are not the best ones from our experiments; for example, we have a 45 AP50 model for C2F with two-stage training. We validate more frequently only to ensure easy reproducibility across different experimental environments in an end-to-end manner.
Q2. Actually, it will be similar for AP50 but not for stricter metrics, e.g., AP. Training for more iterations mainly improves the overall performance of the object detector.
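To make the trade-off concrete, here is a minimal sketch (not the repo's actual training loop) of keeping the checkpoint with the best validation AP50 while validating every `val_period` iterations; `train_one_iter` and `evaluate_ap50` are hypothetical placeholders for the training step and the COCO-style evaluation:

```python
import torch

def train_with_best_checkpoint(model, val_loader, max_iter=60000, val_period=100):
    """Keep the checkpoint with the best validation AP50.

    `train_one_iter` and `evaluate_ap50` are hypothetical placeholders for
    this codebase's training step and COCO-style evaluation.
    """
    best_ap50 = 0.0
    for it in range(1, max_iter + 1):
        train_one_iter(model, it)                    # one optimizer step (placeholder)
        if it % val_period == 0:
            ap50 = evaluate_ap50(model, val_loader)  # AP50 on the validation split (placeholder)
            if ap50 > best_ap50:
                best_ap50 = ap50
                torch.save(model.state_dict(), "model_best.pth")
    return best_ap50
```

Validating every 100 iterations makes it much more likely that the best-performing checkpoint is actually captured, at the cost of extra evaluation time; with a 2500-iteration interval or early stopping you may simply miss the peak.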
Q3. For your own datasets, I think both settings are reasonable since some DAOD benchmarks have an extra test split.
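If you do want an extra test split for the target domain, here is a minimal sketch (my own assumption, not part of this repo) of carving a held-out test set out of a COCO-style annotation file; the paths and the resulting dataset names (e.g. "cityscapes_foggy_val_cocostyle", "cityscapes_foggy_test_cocostyle") would then need to be registered in the codebase's dataset catalog as usual:

```python
import json
import random

def split_coco(ann_file, val_out, test_out, test_ratio=0.5, seed=0):
    """Split one COCO-style annotation file into val and test subsets."""
    with open(ann_file) as f:
        coco = json.load(f)
    images = list(coco["images"])
    random.Random(seed).shuffle(images)
    n_test = int(len(images) * test_ratio)
    test_imgs, val_imgs = images[:n_test], images[n_test:]

    def dump(imgs, path):
        ids = {im["id"] for im in imgs}
        anns = [a for a in coco["annotations"] if a["image_id"] in ids]
        with open(path, "w") as f:
            json.dump({**coco, "images": imgs, "annotations": anns}, f)

    dump(val_imgs, val_out)
    dump(test_imgs, test_out)

# Example with hypothetical file names:
# split_coco("foggy_val.json", "foggy_val_split.json", "foggy_test_split.json")
```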
@wymanCV
Thank you very much for your reply. I would like to ask whether you have ever encountered the following when reproducing the results: the saved best AP50 is clearly about 4 points lower than the previously reported accuracy, and the accuracy at every 2500 iterations is also about 4 or 5 points lower than at the previous checkpoint. What could be going on here? Thank you. Or is this sometimes just normal when reproducing the code?
Hi, I haven't encountered such significant instability after training for enough iterations. But I think it is normal for adaptive FRCNN in the early training stage, as shown here.