
YCBV results have discrepancy with the BOP 2019 leaderboard. #269

Open
eliphatfs opened this issue Dec 3, 2024 · 2 comments

Comments

@eliphatfs

I ran run_ycb_video.py on the BOP 2019 test set. Compared to the BOP benchmark leaderboard, the VSD recall is slightly higher, but the MSSD and MSPD recalls are much worse.

Reproduced results using the official checkpoints:

{
    "bop19_average_recall": 0.6737796103161129,
    "bop19_average_recall_mspd": 0.5392190152801358,
    "bop19_average_recall_mssd": 0.6055299539170507,
    "bop19_average_recall_vsd": 0.876589861751152,
    "bop19_average_time_per_image": 1
}

BOP 2019 leaderboard results:

{
    "bop19_average_recall": 0.8890799579594146,
    "bop19_average_recall_mspd": 0.8803783652680087,
    "bop19_average_recall_mssd": 0.9305360174630124,
    "bop19_average_recall_vsd": 0.8563254911472229,
    "bop19_average_time_per_image": 9.94199268076155
}

I found that for objects that are symmetric in geometry but not in texture, like the cans in the first scene, the reproduced model sometimes predicts a rotation with a large error (90 to 180 degrees), while the BOP 2019 submission result does not. Maybe related to #80.
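For reference, here is a minimal sketch (my own, not from this repo) of the geodesic rotation error I used to spot these failures; a 180° flip about a can's symmetry axis produces exactly the kind of error described above:

```python
import numpy as np

def rotation_error_deg(R_est: np.ndarray, R_gt: np.ndarray) -> float:
    # Geodesic distance between two rotation matrices, in degrees.
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# A 180-degree flip about the object's z (symmetry) axis:
R_gt = np.eye(3)
R_flip = np.diag([-1.0, -1.0, 1.0])
print(rotation_error_deg(R_flip, R_gt))  # 180.0
```

Note that MSSD/MSPD handle texture-asymmetric objects with the full (non-symmetric) error, which is why these flips hurt those metrics much more than VSD.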

I wonder whether this is caused by the missing diffusion augmentation in the released weights.

Also, what hyperparameters were used for the benchmark submission? On my 4090 it runs at about 1 s/object, but the BOP leaderboard reports about 9.9 s per image on an A100, so I suspect a discrepancy in hyperparameters.

Thank you in advance!

@OrestisVaggelis

Hello, how did you run the evaluation after acquiring the .yml file that is produced?

@eliphatfs
Author

I wrote this script to convert the output YAML to the BOP19 CSV format:

import yaml
import numpy
import calibur  # only needed if the coordinate-convention conversion below is enabled


def arr2str(x: numpy.ndarray) -> str:
    return ' '.join(map(str, x.reshape(-1).tolist()))


# Nested mapping: {video_id: {frame_id: {obj_id: 4x4 pose matrix}}}
x: dict[object, dict[object, dict]] = yaml.safe_load(open("ycbv_res.yml"))
with open("fp_ycbv-test.csv", "w") as fo:
    fo.write("scene_id,im_id,obj_id,score,R,t,time\n")
    for video_id, frames in x.items():
        for frame_id, objects in frames.items():
            for obj_id, raw_pose in objects.items():
                pose = numpy.array(raw_pose)
                # Uncomment to convert the pose from a GL to a CV camera convention:
                # pose = numpy.linalg.inv(calibur.convert_pose(numpy.linalg.inv(pose), calibur.CC.GL, calibur.CC.CV))
                r = pose[:3, :3]
                t = pose[:3, 3] * 1000  # BOP expects translation in millimetres
                fo.write(f"{int(video_id)},{int(frame_id)},{int(obj_id)},1.0,{arr2str(r)},{arr2str(t)},1.0\n")
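Before evaluating, I'd also add a quick sanity check on the produced CSV (my own sketch, not part of the repo): verify that each row parses and that R is a proper rotation matrix, which catches accidental GL/CV convention mix-ups like the commented-out conversion above:

```python
import csv
import numpy as np

def check_bop19_csv(path: str) -> int:
    # Parse each row of a BOP19 result CSV and verify that R is a proper
    # rotation matrix (orthonormal, det = 1) and t has three components.
    # Returns the number of rows checked.
    n = 0
    with open(path) as f:
        for row in csv.DictReader(f):
            R = np.array(row["R"].split(), dtype=float).reshape(3, 3)
            t = np.array(row["t"].split(), dtype=float)
            assert np.allclose(R.T @ R, np.eye(3), atol=1e-4), row
            assert np.isclose(np.linalg.det(R), 1.0, atol=1e-4), row
            assert t.shape == (3,)
            n += 1
    return n
```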
