
[question/potential bug] Scaling during average precision calculation #1138

Open
TimoRST opened this issue Feb 4, 2025 · 1 comment

@TimoRST

TimoRST commented Feb 4, 2025

Hi there.
Recently I was rebuilding your metrics to evaluate models on custom data.
Whilst doing so, I stumbled upon the average precision calculation here: nuscenes-devkit/python-sdk/nuscenes/eval/detection/algo.py.

Subtracting min_precision and then dividing by (1.0 - min_precision) maps every precision value below 1.0 to something smaller than it was.
If you consider the case where all interpolated precisions are greater than min_precision, mean(precision) is not equal to mean(max(precision - min_precision, 0.0)) / (1.0 - min_precision).
Shouldn't the function just be mean(max(precision - min_precision, 0.0)) + min_precision, which does recover mean(precision) in that case?
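For illustration, a small numerical sketch of the difference (toy interpolated precision values, all above min_precision; min_precision = 0.1, which I believe is the nuScenes default):

```python
import numpy as np

min_precision = 0.1  # assumed nuScenes setting
precision = np.array([0.9, 0.8, 0.7, 0.6])  # toy interpolated precisions, all > min_precision

plain_mean = precision.mean()                                                       # 0.75
devkit_style = np.maximum(precision - min_precision, 0.0).mean() / (1.0 - min_precision)  # ~0.722
additive = np.maximum(precision - min_precision, 0.0).mean() + min_precision              # 0.75

print(plain_mean, devkit_style, additive)
```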

Thanks in advance!

@whyekit-motional
Collaborator

@TimoRST when calculating precision in the context of nuScenes, the evaluation protocol is only interested in operating points where the precision is > 10% (you can check out more details in Section 3.1 of https://arxiv.org/pdf/1903.11027).
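To make that concrete, here is a sketch of how calc_ap in algo.py realizes this (assuming precision is interpolated on 101 recall bins from 0.00 to 1.00; the exact bin slicing here is an assumption, not a verbatim copy of the devkit):

```python
import numpy as np

def calc_ap_sketch(precision: np.ndarray,
                   min_recall: float = 0.1,
                   min_precision: float = 0.1) -> float:
    """Sketch of nuScenes-style AP: drop recall bins below min_recall,
    clip precision below min_precision, then renormalize so the remaining
    operating region [min_precision, 1] maps onto [0, 1]."""
    prec = precision[round(100 * min_recall) + 1:]   # discard low-recall operating points
    prec = np.maximum(prec - min_precision, 0.0)     # discard low-precision operating points
    return float(prec.mean()) / (1.0 - min_precision)
```

With this normalization, a detector whose precision stays at 1.0 over the counted recall range still scores AP = 1.0, while anything at or below min_precision contributes 0.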
