Feat: Evaluation framework #7
Conversation
…24/trackers into feat/evaluation_framework
Fixes #5
Holy smokes! @rolson24 this is awesome! Huge thanks for the PR! 🔥 I’ve got a small favor to ask. After discussing with @soumik12345, we realized we’ll need to ask you to split this PR into smaller chunks. We don’t want to rush through it—we’d rather take our time to properly review everything, add documentation, and include tests. It's really hard with massive PRs like this. I suggest splitting it into three parts: datasets → metrics → eval framework. What do you think?
Sounds good! I won’t have time to work on this until this weekend, but it shouldn’t be too hard to break it down into multiple parts.
@rolson24 no worries! Honestly, @soumik12345 and I have our hands full right now, so we're totally fine waiting until the weekend. Just one thing—please don’t open all three at once. Let’s take them one at a time.
Description
This PR implements an evaluation framework for trackers. The framework lets users plug in any detector they want by writing a callback function that takes a frame and returns an sv.Detections object. It currently handles the MOT Challenge dataset format and implements the CLEAR metrics. Trackers not implemented in "Trackers" can also be evaluated: users may either provide a tracking callback function that takes a frame and returns an sv.Detections object (with tracker IDs), or simply pass in a "Trackers" tracker object.
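For context on the dataset side, the MOT Challenge ground-truth format mentioned above stores one annotation per line as comma-separated fields (frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility). A minimal parsing sketch follows; the `MotEntry` name is illustrative and not part of this PR's API:

```python
# Sketch: parse one line of a MOT Challenge gt.txt file.
# Field order per the MOT Challenge format:
# frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility.
from dataclasses import dataclass


@dataclass
class MotEntry:  # illustrative name, not the PR's actual API
    frame: int
    track_id: int
    bb_left: float
    bb_top: float
    bb_width: float
    bb_height: float
    conf: float


def parse_mot_line(line: str) -> MotEntry:
    fields = line.strip().split(",")
    return MotEntry(
        frame=int(fields[0]),
        track_id=int(fields[1]),
        bb_left=float(fields[2]),
        bb_top=float(fields[3]),
        bb_width=float(fields[4]),
        bb_height=float(fields[5]),
        conf=float(fields[6]),
    )


entry = parse_mot_line("1,3,794.27,247.59,71.245,174.88,1,1,0.8")
```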
Example usage:
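The original example did not survive scraping. The sketch below illustrates the detection-callback pattern the description outlines; `Detections` is a minimal stand-in for sv.Detections, and `count_detections` is a hypothetical stand-in for the framework's evaluation entry point, whose actual name may differ:

```python
# Sketch of the detection-callback pattern: the user writes a function
# that takes a frame and returns a Detections object.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Detections:  # minimal stand-in for sv.Detections
    xyxy: List[List[float]] = field(default_factory=list)
    confidence: List[float] = field(default_factory=list)


def detection_callback(frame) -> Detections:
    # In real use this would run a detector model on `frame`;
    # here we return a fixed box purely for illustration.
    return Detections(xyxy=[[10.0, 10.0, 50.0, 80.0]], confidence=[0.9])


def count_detections(frames, callback: Callable[[object], Detections]) -> int:
    # Toy "count" metric: total detections across all frames.
    return sum(len(callback(f).xyxy) for f in frames)


total = count_detections(frames=[None] * 5, callback=detection_callback)
```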
Example usage with tracker callback:
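This example was also lost in scraping. The sketch below shows the tracker-callback alternative the description mentions: instead of passing a "Trackers" tracker object, the user supplies a callback that takes a frame and returns detections carrying tracker IDs. All names here are illustrative, not the PR's actual API:

```python
# Sketch of the tracker-callback pattern: detection plus ID assignment
# inside a single user-supplied callback.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Detections:  # minimal stand-in for sv.Detections
    xyxy: List[List[float]] = field(default_factory=list)
    tracker_id: List[int] = field(default_factory=list)


class DummyTracker:
    """Toy tracker that assigns every box the same ID, standing in for
    a real tracker (e.g. a SORT-style tracker from the library)."""

    def update(self, detections: Detections) -> Detections:
        detections.tracker_id = [1] * len(detections.xyxy)
        return detections


tracker = DummyTracker()


def tracker_callback(frame) -> Detections:
    # Run detection, then tracking; return detections with tracker IDs.
    dets = Detections(xyxy=[[10.0, 10.0, 50.0, 80.0]])
    return tracker.update(dets)


out = tracker_callback(None)
```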
I have verified the accuracy of the CLEAR metrics and the count metric against TrackEval in this Colab notebook.
Docs