Replies: 5 comments
-
Hello @Ali-Fayzi, could you elaborate a bit more on why we should add this? Also, could you explain how it affects you? Because, as far as tracking goes, this is how we use it:

```python
import cv2
import numpy as np
import supervision as sv
from supervision.assets import VideoAssets, download_assets
from ultralytics import YOLO

model = YOLO("yolov8l.pt")
byte_tracker = sv.ByteTrack()

download_assets(VideoAssets.VEHICLES)

start_point = sv.Point(869, 1041)
end_point = sv.Point(2900, 1041)

bounding_box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()
line_zone_annotator = sv.LineZoneAnnotator(
    thickness=2,
    color=sv.Color.WHITE,
    text_thickness=2,
    text_color=sv.Color.BLACK,
    text_scale=2.5,
    text_offset=1.5,
    text_padding=10,
    display_in_count=False,
    display_out_count=True,
)
trace_annotator = sv.TraceAnnotator()
line_zone = sv.LineZone(start_point, end_point)


def callback(frame: np.ndarray, index: int) -> np.ndarray:
    results = model(frame)[0]
    detections = sv.Detections.from_ultralytics(results)
    detections = byte_tracker.update_with_detections(detections)
    labels = [f"#{tracker_id}" for tracker_id in detections.tracker_id]
    annotated_frame = bounding_box_annotator.annotate(
        scene=frame.copy(), detections=detections
    )
    annotated_frame = label_annotator.annotate(
        scene=annotated_frame, detections=detections, labels=labels
    )
    annotated_frame = trace_annotator.annotate(
        scene=annotated_frame, detections=detections
    )
    line_zone.trigger(detections)
    annotated_frame = line_zone_annotator.annotate(
        frame=annotated_frame, line_counter=line_zone
    )
    return annotated_frame


sv.process_video(
    source_path="vehicles.mp4", target_path="result.mp4", callback=callback
)
```

As you can see, we call `detections = byte_tracker.update_with_detections(detections)`: we update the detections with the tracker information, get detections back, and annotate the frame we already have in the callback. So I don't see why we would need to add a frame parameter. Since we already have the frame, what problem would that solve?
-
Hello @onuralpszr, thank you for your response. I am going to work on adding new tracking methods to supervision. For instance, StrongSORT requires feature extraction from the frame. Therefore, the frame would need to be passed in as a second parameter to `update_with_detections`.
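For illustration, here is a minimal sketch of the kind of per-detection feature extraction StrongSORT needs the frame for. The function name and the mean-colour "embedding" are stand-ins invented for this example; a real appearance-based tracker would run each crop through a re-identification network instead:

```python
import numpy as np


def extract_appearance_features(frame: np.ndarray, xyxy: np.ndarray) -> np.ndarray:
    """Toy stand-in for a re-ID model: one feature vector per bounding box.

    A real StrongSORT implementation would pass each crop through a CNN;
    here we use the crop's mean colour so the example stays self-contained.
    """
    features = []
    for x1, y1, x2, y2 in xyxy.astype(int):
        # Clip to the frame so out-of-bounds boxes do not crash the crop.
        crop = frame[max(y1, 0):max(y2, 0), max(x1, 0):max(x2, 0)]
        if crop.size == 0:
            features.append(np.zeros(frame.shape[-1]))
        else:
            features.append(crop.reshape(-1, crop.shape[-1]).mean(axis=0))
    return np.stack(features)
```

The point is that this step consumes raw pixels, which is exactly the data a motion-only tracker like ByteTrack never needs, so the current `update_with_detections(detections)` signature has no way to provide it.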
-
I understand. First of all, `update_with_detections` is a method that belongs to `ByteTrack`; there is another one in the smoother, but they are separate methods, and there is no generic `update_with_detections` interface. So when you add StrongSORT, you can define its method differently. Before writing my answer, I checked why you need the frame, and I noticed two things: first, to extract the frame height and width, and second, to use the frame's pixels for feature extraction.
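Since there is no shared base class for `update_with_detections` today, one option (purely a sketch, not an existing supervision API) is a structural interface where `frame` is optional, so motion-only trackers can ignore it while appearance-based trackers can require it:

```python
from typing import Optional, Protocol, runtime_checkable

import numpy as np


@runtime_checkable
class Tracker(Protocol):
    """Hypothetical shared tracker interface; supervision has no such class."""

    def update_with_detections(self, detections, frame: Optional[np.ndarray] = None):
        ...


class FrameFreeTracker:
    """Motion-only tracker sketch: accepts the frame but never reads it."""

    def update_with_detections(self, detections, frame: Optional[np.ndarray] = None):
        # A ByteTrack-style tracker only needs the boxes, not the pixels.
        return detections
```

With this shape, existing callers that never pass a frame keep working, while a StrongSORT-style tracker can raise or extract features when `frame` is provided.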
-
I am also converting this issue to a discussion.
-
In my new implementation, I removed the dependency on the "boxmot" library. You mentioned that you don't want to add PyTorch as a dependency, but in practice, installing the "ultralytics" library already requires installing PyTorch. Source: https://github.com/ultralytics/ultralytics?tab=readme-ov-file#documentation
Supervision + StrongSORT tracker notebook: https://colab.research.google.com/drive/1K1r4e2jYC8G6CgB2gD040n25XhLU0JG5
-
Search before asking
Description
To make it possible to create and add new tracking methods, please add a `frame` parameter to the `update_with_detections` function:

```python
def update_with_detections(detections, frame):
```
Use case
No response
Additional
No response
Are you willing to submit a PR?