Hi there.
Recently I was rebuilding your metrics to evaluate models on custom data.
Whilst doing so, I stumbled upon the average precision calculation here: nuscenes-devkit/python-sdk/nuscenes/eval/detection/algo.py.
Subtracting `min_precision` and then dividing by `(1.0 - min_precision)` scales down every precision value below 1.0: a raw precision of 1.0 still maps to 1.0, but anything lower ends up smaller than the raw value minus the offset.
If you consider the case where all interpolated precisions are greater than `min_precision` (so the clipping never fires), `mean(precision)` is not equal to `mean(max(precision - min_precision, 0.0)) / (1.0 - min_precision)`.
Shouldn't the function just be `mean(max(precision - min_precision, 0.0)) + min_precision`, which recovers the plain mean in that case?
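To make this concrete, here is a minimal NumPy sketch (toy precision values of my own, not taken from the devkit) showing that the current normalization changes the mean even when `min_precision` never clips anything, while the proposed variant recovers it:

```python
import numpy as np

min_precision = 0.1

# Hypothetical interpolated precision values, all above min_precision,
# so the max(..., 0.0) clipping never takes effect.
precision = np.array([0.9, 0.8, 0.7, 0.6])

clipped = np.maximum(precision - min_precision, 0.0)

plain_mean  = precision.mean()                        # 0.75
devkit_ap   = clipped.mean() / (1.0 - min_precision)  # ~0.7222 (what algo.py computes)
proposed_ap = clipped.mean() + min_precision          # 0.75 (the variant suggested above)

print(plain_mean, devkit_ap, proposed_ap)
```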
Thanks in advance!
@TimoRST when calculating precision in the context of nuScenes, the evaluation protocol is only interested in operating points where the precision is > 10% (you can check out more details in Section 3.1 of https://arxiv.org/pdf/1903.11027)
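If it helps, one way to read that design choice (my interpretation of the code, not an official statement): dividing by `(1.0 - min_precision)` maps the admissible precision range `[min_precision, 1.0]` linearly onto `[0.0, 1.0]`, so a detector sitting exactly at the 10% floor contributes zero, and only perfect precision keeps full credit. Adding `min_precision` back instead would grant every detector a free 0.1 baseline:

```python
min_precision = 0.1

def rescale(p: float) -> float:
    # Devkit-style normalization: maps [min_precision, 1.0] onto [0.0, 1.0].
    return max(p - min_precision, 0.0) / (1.0 - min_precision)

print(rescale(0.1))  # 0.0 -- precision at the 10% floor earns no credit
print(rescale(1.0))  # 1.0 -- perfect precision keeps full credit

# Under the proposed "+ min_precision" variant, p = 0.1 would still score 0.1:
print(max(0.1 - min_precision, 0.0) + min_precision)  # 0.1
```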