Results using your code are different to the reported in the ICCV paper #68

@bobetocalo

Description

I have tried to obtain similar results to the ones reported in the HBB table:

Task2 - Horizontal Leaderboard

| Approach | mAP | PL | BD | BR | GTF | SV | LV | SH | TC | BC | ST | SBF | RA | HA | SP | HC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| R2CNN++ | 75.35 | 90.18 | 81.88 | 55.30 | 73.29 | 72.09 | 77.65 | 78.06 | 90.91 | 82.44 | 86.39 | 64.53 | 63.45 | 75.77 | 78.21 | 60.11 |
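
For what it's worth, the reported mAP is just the unweighted mean of the 15 per-class APs, so it is easy to sanity-check the table (and any numbers you compute yourself). A short script, with class abbreviations taken from the table:

```python
# Sanity check: mAP should equal the unweighted mean of the 15 per-class APs.
per_class_ap = {
    "PL": 90.18, "BD": 81.88, "BR": 55.30, "GTF": 73.29, "SV": 72.09,
    "LV": 77.65, "SH": 78.06, "TC": 90.91, "BC": 82.44, "ST": 86.39,
    "SBF": 64.53, "RA": 63.45, "HA": 75.77, "SP": 78.21, "HC": 60.11,
}
mean_ap = sum(per_class_ap.values()) / len(per_class_ap)
print(round(mean_ap, 2))  # 75.35, matching the reported mAP
```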

I am using the validation set instead of the test set because the test annotations have not been released yet. Could you provide your results on the validation set? The numbers I get are much worse than the ones you report in the table.

Am I missing something? I didn't change anything in your eval.py code, but the mAP results are really disappointing. I would like to know whether anyone has obtained results similar to the ones the authors report.
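
One thing that could account for part of a gap, independent of the val/test split: which AP variant the evaluation uses. DOTA's HBB task is usually scored with PASCAL VOC-style AP, and VOC has both an 11-point interpolated metric and an all-point one; the two give noticeably different numbers. I am assuming (not certain) that eval.py follows one of these. A minimal sketch of the all-point version (the function name `voc_ap` is mine):

```python
import numpy as np

def voc_ap(recall, precision):
    """All-point interpolated AP as in PASCAL VOC (post-2010 metric).

    recall/precision: arrays over ranked detections, recall non-decreasing.
    """
    # Pad so the curve starts at recall 0 and ends at recall 1.
    mrec = np.concatenate(([0.0], np.asarray(recall), [1.0]))
    mpre = np.concatenate(([0.0], np.asarray(precision), [0.0]))
    # Make precision monotonically non-increasing, sweeping right to left.
    for i in range(mpre.size - 1, 0, -1):
        mpre[i - 1] = max(mpre[i - 1], mpre[i])
    # Area under the step curve wherever recall changes.
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# A perfect detector (precision 1.0 up to recall 1.0) scores AP = 1.0.
print(voc_ap([0.5, 1.0], [1.0, 1.0]))  # 1.0
```

If eval.py instead uses the 11-point metric (the `use_07_metric` convention in VOC-style devkits), the per-class APs will differ from the all-point ones, so it is worth checking which variant produced the table above.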

Best,
Roberto Valle
