I was wondering if the metrics used are documented somewhere. For example, for VideoMME, the metric appears to be accuracy (which aligns with the official paper), but the name of the metric in the output (VIDEOMME_perception) is a bit confusing.
Thanks in advance!
Some of the commonly used, pre-defined metrics are registered in api/metrics.py. Other tasks, such as videomme, have their own logic for calculating scores. You can find the scoring logic in each task's own folder, usually in its utils.py. For example, see tasks/videomme/utils.py.
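For illustration, here is a minimal sketch of the two patterns described above: a shared metric registry (in the spirit of api/metrics.py) alongside a task-specific aggregator that scores per question category, which is why the output carries names like `VIDEOMME_perception` rather than a bare `accuracy`. The names `register_metric` and `videomme_aggregate_results`, and the exact result-dict fields, are assumptions for this sketch, not the repository's actual API.

```python
# Illustrative sketch only; function names and result fields are assumed,
# not taken from the repository's actual code.
from collections import defaultdict

METRIC_REGISTRY = {}

def register_metric(name):
    """Hypothetical stand-in for registering a shared metric (api/metrics.py)."""
    def decorator(fn):
        METRIC_REGISTRY[name] = fn
        return fn
    return decorator

@register_metric("accuracy")
def accuracy(results):
    # Plain accuracy over a list of {"correct": 0/1} records.
    return sum(r["correct"] for r in results) / len(results)

def videomme_aggregate_results(results):
    """Hypothetical per-task scorer in the spirit of tasks/videomme/utils.py:
    accuracy is computed per question category, so the report shows keys
    like VIDEOMME_perception instead of a single overall 'accuracy'."""
    by_category = defaultdict(list)
    for r in results:
        by_category[r["category"]].append(r["correct"])
    return {
        f"VIDEOMME_{cat}": sum(v) / len(v)
        for cat, v in by_category.items()
    }

if __name__ == "__main__":
    sample = [
        {"category": "perception", "correct": 1},
        {"category": "perception", "correct": 0},
        {"category": "reasoning", "correct": 1},
    ]
    print(videomme_aggregate_results(sample))
    # {'VIDEOMME_perception': 0.5, 'VIDEOMME_reasoning': 1.0}
```

So the metric itself is still accuracy, as in the official paper; the `VIDEOMME_` prefix and category suffix just identify which task and question category the score belongs to.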