
Hello, I have the following questions, looking forward to your answers. #36

Closed
sunshehai opened this issue Apr 9, 2023 · 3 comments

@sunshehai

Hello, I have the following questions, looking forward to your answers.
1. Validation was carried out every 100 iterations, and the model with the highest accuracy was selected to ensure reproducibility. Would reproducibility not be guaranteed if validation were run only every 2500 iterations? Have you done similar experiments? Thank you.
2. If early stopping were used instead of validating every 100 iterations, would the effect be similar? Thank you.
3. For one's own dataset, is it best to divide the target-domain data into three parts: training, validation, and testing? For example, modified as follows (see the config sketch below)?
`TRAIN_TARGET: ("cityscapes_foggy_train_cocostyle", "cityscapes_foggy_val_cocostyle")`, `TEST: ("cityscapes_foggy_test_cocostyle",)`
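
For reference, a minimal sketch of how such a split might be expressed in a yacs-style `DATASETS` config (as in maskrcnn-benchmark-based codebases). The exact keys and whether these dataset names are registered depend on the repository, so treat this as an assumption rather than the project's actual config:

```python
# Hypothetical sketch of a three-way target-domain split in a yacs-style config.
# The dataset names must already exist in the codebase's dataset catalog.
from yacs.config import CfgNode as CN

cfg = CN()
cfg.DATASETS = CN()

# Option A (as proposed above): use both the foggy train and val splits as
# unlabeled target training data and hold out a separate test split.
cfg.DATASETS.TRAIN_TARGET = ("cityscapes_foggy_train_cocostyle",
                             "cityscapes_foggy_val_cocostyle")
cfg.DATASETS.TEST = ("cityscapes_foggy_test_cocostyle",)

# Option B: reserve the val split for checkpoint selection during training and
# run the final evaluation on the held-out test split afterwards.
# cfg.DATASETS.TRAIN_TARGET = ("cityscapes_foggy_train_cocostyle",)
# cfg.DATASETS.TEST = ("cityscapes_foggy_val_cocostyle",)  # model selection
# ...then evaluate the chosen checkpoint on "cityscapes_foggy_test_cocostyle".
```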

@wymanCV
Contributor

wymanCV commented Apr 9, 2023

Hi, thanks for your interest!

Q1. Yes, the results you obtain should be in the range of 42 to 44. The reported results are not the best ones from our experiments; for example, we have a 45 AP50 model for C2F with two-stage training. We validate more frequently only to make end-to-end reproduction easy across different experimental environments.
Q2. Actually, the result will be similar for AP50 but not for stricter metrics, e.g., AP. Training for more iterations mainly improves the overall performance of the object detector.
Q3. For your own datasets, I think both settings are reasonable, since some DAOD benchmarks have an extra test split.
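
To illustrate the trade-off discussed in Q1/Q2, here is a minimal, repository-agnostic sketch of periodic validation with best-checkpoint selection and an optional early-stopping variant; the function and argument names are placeholders, not this codebase's API:

```python
import copy

def train_with_periodic_validation(model, train_step, evaluate_ap50,
                                   max_iter=40000, eval_period=100, patience=None):
    """Validate every `eval_period` iterations and keep the checkpoint with the
    best AP50. A small eval_period makes the best snapshot easier to recover
    across runs; setting `patience` enables early stopping, which tends to give
    a similar best AP50 but fewer total iterations (and may therefore hurt
    stricter metrics such as AP)."""
    best_ap50, best_state, since_improved = 0.0, None, 0
    for it in range(1, max_iter + 1):
        train_step(model, it)              # one optimization step (placeholder)
        if it % eval_period == 0:
            ap50 = evaluate_ap50(model)    # validation pass (placeholder)
            if ap50 > best_ap50:
                best_ap50 = ap50
                best_state = copy.deepcopy(model)  # keep the best checkpoint
                since_improved = 0
            else:
                since_improved += 1
                if patience is not None and since_improved >= patience:
                    break                  # optional early stopping
    return best_ap50, best_state
```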

@sunshehai sunshehai reopened this Apr 9, 2023
@sunshehai
Author

@wymanCV
Thank you very much for your reply. I would like to ask whether you have ever seen the following when reproducing the results: the best saved AP50 is clearly about 4 points lower than the previously reported accuracy, and the accuracy at each 2500-iteration checkpoint is also about 4 or 5 points lower than before. What could be going on here? Thank you. Perhaps this is just normal variation when reproducing code.

@wymanCV
Contributor

wymanCV commented Apr 9, 2023

Hi, I haven't encountered such significant instability after training for enough iterations, but I think it is normal for adaptive FRCNN in the early training stage, as shown here.
