
Mistake in eval_textvqa #72

Open
hedes1992 opened this issue Jun 2, 2024 · 3 comments

@hedes1992

When I ran the script `sh ./scripts/eval/textvqa.sh`, I found that the evaluation script seems to be wrong.

The model runs inference and saves a `question_id` for each image case in model_vqa_loader.py, and the results are then compared against the ground-truth JSON in eval_textvqa.py.

When I run this I get an error. I think the reason is: code A saves `(image_id, question)` as the key, but code B looks the key up as `(question_id, question)`. The keys built in code A and code B should be consistent.
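A minimal sketch of the mismatch on toy data (field names are illustrative, not the exact repo code):

```python
# Toy illustration of the key mismatch, not the exact repo code.

annotations = [
    {'image_id': 'img_001', 'question_id': 34602, 'question': 'What brand is shown?'},
]

# Code A builds the lookup keyed by image_id:
lookup = {
    (ann['image_id'], ann['question'].lower()): ann
    for ann in annotations
}

# Code B then looks entries up by question_id, so no key ever matches:
result = {'question_id': 34602, 'prompt': 'What brand is shown?'}
print((result['question_id'], result['prompt'].lower()) in lookup)  # False
```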

@jiajunlong
Contributor

Could you please send your error message?

@hedes1992
Author

> Could you please send your error message?

The original error was:

[screenshot of the error message]

After I changed the code to `annotations = {(annotation['question_id'], annotation['question'].lower()): annotation for annotation in annotations}`, the error disappeared.

By the way, I couldn't find the file `llava_textvqa_val_v051_ocr.jsonl` for the TextVQA eval, only `TextVQA_0.5.1_val.json`. I generated the former from the latter by removing the answers, roughly as sketched below.
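Here is roughly what I did; the input field names follow `TextVQA_0.5.1_val.json`, but the exact output schema the eval script expects (e.g. the OCR tokens that the `_ocr` suffix suggests are embedded in the prompt) may differ:

```python
import json

# Sketch: build a question-only JSONL from the official val JSON.
# Input field names follow TextVQA_0.5.1_val.json; the output schema
# (and any OCR tokens in the prompt) is an assumption, not the official file.
with open('TextVQA_0.5.1_val.json') as f:
    data = json.load(f)['data']

with open('llava_textvqa_val_v051_ocr.jsonl', 'w') as f:
    for item in data:
        record = {
            'question_id': item['question_id'],
            'image': item['image_id'] + '.jpg',  # assumed image filename pattern
            'text': item['question'],
        }
        f.write(json.dumps(record) + '\n')
```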

@jiajunlong
Contributor

You can find `llava_textvqa_val_v051_ocr.jsonl` in eval.zip (after extracting it). You can then re-run the evaluation with that file to see whether similar errors still occur.
