
Conversation

@rolson24 rolson24 commented Apr 15, 2025

Description

This PR implements an evaluation framework for trackers. The framework lets users plug in any detector they want by writing a callback function that takes in a frame and returns an sv.Detections object. It currently handles the MOT Challenge dataset format and implements the CLEAR metrics. Trackers that are not implemented in "Trackers" can also be evaluated: users either provide a tracking callback function that returns an sv.Detections object per frame, or simply pass a "Trackers" tracker object.
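For reference, the headline CLEAR metric is MOTA, which, as defined by the MOT Challenge benchmark, aggregates detection and association errors over all frames:

$$\text{MOTA} = 1 - \frac{\text{FN} + \text{FP} + \text{IDSW}}{\text{GT}}$$

where FN, FP, and IDSW are the total misses, false positives, and identity switches, and GT is the total number of ground-truth objects.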

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Example usage:

import json

import numpy as np
import supervision as sv
from rfdetr import RFDETRBase
from rfdetr.util.coco_classes import COCO_CLASSES
from trackers import SORTTracker

# MOTChallengeDataset is used below; the import path is assumed here.
from trackers.eval import MOTChallengeDataset, evaluate_tracker

model = RFDETRBase(device="mps")
tracker = SORTTracker()

mot_dataset_path = "path/to/MOT17/train"
mot_dataset = MOTChallengeDataset(dataset_path=mot_dataset_path)


def detection_callback(frame: np.ndarray, frame_info: str):
    detections = model.predict(frame, threshold=0.5)
    # With a different model, convert its output to sv.Detections here
    # (see the sketch after this example).
    return detections

results = evaluate_tracker(
    dataset=mot_dataset,
    detection_source=detection_callback,  # could also pass mot_dataset to use the public detections from MOTChallengeDataset
    tracker_source=tracker,
    metrics=["Count", "CLEAR"],  # specify desired metrics
    cache_tracks=True,  # example: cache the tracks
    cache_dir="./cached_tracks_sv",  # example cache directory
)

print("\n--- Evaluation Results ---")
print(json.dumps(results, indent=2))
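The comment in detection_callback above notes that models other than RF-DETR need their output converted to sv.Detections. A minimal sketch of that conversion, where my_detector is a hypothetical stand-in for any third-party model:

import numpy as np
import supervision as sv


def my_detector(frame: np.ndarray):
    # Hypothetical stand-in for a third-party detector; replace with a real inference call.
    boxes = np.zeros((0, 4), dtype=float)    # (N, 4) boxes as x1, y1, x2, y2
    scores = np.zeros((0,), dtype=float)     # (N,) confidence scores
    class_ids = np.zeros((0,), dtype=int)    # (N,) class indices
    return boxes, scores, class_ids


def detection_callback(frame: np.ndarray, frame_info: str) -> sv.Detections:
    boxes, scores, class_ids = my_detector(frame)
    # Pack the raw arrays into the sv.Detections container the framework expects.
    return sv.Detections(xyxy=boxes, confidence=scores, class_id=class_ids)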

Example usage with tracker callback:

import json
from typing import Dict, Union

import numpy as np
import supervision as sv
from boxmot import ByteTrack
from rfdetr import RFDETRBase
from rfdetr.util.coco_classes import COCO_CLASSES

# MOTChallengeDataset is used below; the import path is assumed here.
from trackers.eval import MOTChallengeDataset, evaluate_tracker

model = RFDETRBase(device="mps")

mot_dataset_path = "path/to/MOT17/train"
mot_dataset = MOTChallengeDataset(dataset_path=mot_dataset_path)


def detection_callback(frame: np.ndarray, frame_info: str):
    detections = model.predict(frame, threshold=0.5)
    # With a different model, convert its output to sv.Detections here.
    return detections

# Example of a callback for other trackers
tracker_instance = ByteTrack()

def tracker_callback(detections: sv.Detections, frame: np.ndarray, frame_info: Dict[str, Union[int, str]]) -> sv.Detections:
    global tracker_instance

    # Reset the tracker at the start of each sequence.
    if frame_info["frame_idx"] == 1:
        tracker_instance = ByteTrack()

    det_boxes = detections.xyxy
    det_scores = detections.confidence
    det_classes = detections.class_id if detections.class_id is not None else np.zeros(len(detections))

    # BoxMOT expects detections as an (N, 6) array: x1, y1, x2, y2, score, class.
    boxmot_detections = np.concatenate((det_boxes, det_scores[:, None], det_classes[:, None]), axis=1)

    tracks = tracker_instance.update(boxmot_detections, frame)

    if tracks.shape[0] == 0:
        return sv.Detections.empty()

    # BoxMOT returns tracks as x1, y1, x2, y2, track_id, conf, class, det_ind.
    return sv.Detections(
        xyxy=tracks[:, :4],
        confidence=tracks[:, 5],
        class_id=tracks[:, 6].astype(int),
        tracker_id=tracks[:, 4].astype(int),
    )

results = evaluate_tracker(
    dataset=mot_dataset,
    detection_source=detection_callback,  # could also pass mot_dataset to use the public detections from MOTChallengeDataset
    tracker_source=tracker_callback,
    metrics=["Count", "CLEAR"],  # specify desired metrics
    cache_tracks=True,  # example: cache the tracks
    cache_dir="./cached_tracks_sv",  # example cache directory
)

print("\n--- Evaluation Results ---")
print(json.dumps(results, indent=2))

I have verified the accuracy of the CLEAR metrics and the Count metric against TrackEval in this Colab notebook.

Docs

  • Docs updated? What were the changes:

@CLAassistant

CLAassistant commented Apr 15, 2025

CLA assistant check
All committers have signed the CLA.

pre-commit-ci bot and others added 27 commits April 15, 2025 17:43
@soumik12345
Contributor

Fixes #5

@rolson24 rolson24 marked this pull request as ready for review April 21, 2025 16:34
@SkalskiP
Collaborator

Holy smokes! @rolson24 this is awesome! Huge thanks for the PR! 🔥

I’ve got a small favor to ask. After discussing with @soumik12345, we realized we’ll need to ask you to split this PR into smaller chunks. We don’t want to rush through it—we’d rather take our time to properly review everything, add documentation, and include tests. It's really hard with massive PRs like this.

I suggest splitting it into three parts: datasets → metrics → eval framework. What do you think?

@rolson24
Author

rolson24 commented Apr 23, 2025 via email

@SkalskiP
Collaborator

@rolson24 no worries! Honestly, @soumik12345 and I have our hands full right now, so we're totally fine waiting until the weekend. Just one thing—please don’t open all three at once. Let’s take them one at a time.

@rolson24 rolson24 closed this Apr 25, 2025
@rolson24 rolson24 mentioned this pull request Apr 26, 2025