
The results of validation in distill.py and evaluate.py are different #90

Open
Rickilous opened this issue Oct 17, 2024 · 0 comments

Rickilous commented Oct 17, 2024

There is a significant gap between the validation results reported by distill.py during training and the results obtained by running evaluate.py after training.
I am reproducing your code on the ScanNet dataset.

The following is the final output from running evaluate.py:
evaluating 49540568 points...
classes IoU

wall          : 0.737 (9711412/13169998)
floor         : 0.888 (8325308/9374070)
cabinet       : 0.458 (798335/1743003)
bed           : 0.697 (684456/981554)
chair         : 0.710 (2863996/4032050)
sofa          : 0.624 (605267/969955)
table         : 0.523 (1083707/2070202)
door          : 0.446 (909689/2041809)
window        : 0.516 (974451/1888683)
bookshelf     : 0.675 (975975/1445593)
picture       : 0.174 (44180/253470)
counter       : 0.404 (118225/292895)
desk          : 0.451 (404440/896387)
curtain       : 0.547 (445784/814689)
refrigerator  : 0.413 (108950/264016)
shower curtain: 0.000 (0/158208)
toilet        : 0.802 (99836/124456)
sink          : 0.497 (61659/124068)
bathtub       : 0.601 (58872/97972)
otherfurniture: 0.218 (491901/2252587)
Mean IoU 0.5191339645582693
Mean Acc 0.6294690727698209
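
For reference, the numbers in parentheses appear to be per-class intersection/union point counts, so the per-class IoU and the mean IoU can be re-derived from them directly. Below is a minimal sketch of that arithmetic; the interpretation of the parenthesized counts is my assumption, not something confirmed from the code.

# Minimal sketch: recompute per-class IoU and mIoU from the counts
# printed by evaluate.py, assuming each pair is (intersection, union).
counts = {
    "wall": (9711412, 13169998),
    "floor": (8325308, 9374070),
    "cabinet": (798335, 1743003),
    "bed": (684456, 981554),
    "chair": (2863996, 4032050),
    "sofa": (605267, 969955),
    "table": (1083707, 2070202),
    "door": (909689, 2041809),
    "window": (974451, 1888683),
    "bookshelf": (975975, 1445593),
    "picture": (44180, 253470),
    "counter": (118225, 292895),
    "desk": (404440, 896387),
    "curtain": (445784, 814689),
    "refrigerator": (108950, 264016),
    "shower curtain": (0, 158208),
    "toilet": (99836, 124456),
    "sink": (61659, 124068),
    "bathtub": (58872, 97972),
    "otherfurniture": (491901, 2252587),
}

ious = {name: inter / union for name, (inter, union) in counts.items()}
for name, iou in ious.items():
    print(f"{name:14s}: {iou:.3f}")

# mIoU is the unweighted mean over all 20 classes; classes with zero
# intersection (e.g. "shower curtain") still count toward the mean.
miou = sum(ious.values()) / len(ious)
print("Mean IoU", miou)  # ~0.519, matching the value above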

The following excerpt is from the training log of distill.py; the model at the 90th epoch is the best model.
[2024-10-14 01:10:04,873 distill.py line 465] Val result: mIoU/mAcc/allAcc 0.4345/0.5428/0.7595.
