This project performs real-time object detection and tracking using YOLOv11n and Deep SORT, either from a webcam or a video file. Detected objects are annotated with bounding boxes and persistent tracking IDs, and the results are saved as output videos.
The output video is saved to the `data/output` directory.
- ✅ Real-time object detection using Ultralytics YOLOv11n
- ✅ Object tracking using Deep SORT with appearance-based re-identification
- ✅ Supports webcam and video file inputs
- ✅ Automatically resizes for speed without losing tracking accuracy
```
.
├── main.ipynb        # Entry point for detection and tracking
├── tracker.py        # Deep SORT tracking class
├── yolo_detector.py  # YOLOv11n detection class
├── models/
│   └── yolo11n.pt    # YOLOv11n or custom-trained model
├── data/
│   ├── test/people.mp4  # Example input video
│   └── output/          # Output folder for results
└── README.md         # Project documentation
```
Clone the repository

```
git clone https://github.com/TargetTactician/Mushroom_Classification.git
cd Mushroom_Classification
```
Create a virtual environment (optional)

```
python -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate
```
Install dependencies

```
pip install -r requirements.txt
```
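The repository's `requirements.txt` is not shown here; for this YOLOv11n + Deep SORT stack, a typical dependency set might look like the following (the exact package list and versions are assumptions, not taken from the repository):

```
ultralytics          # YOLOv11 models and inference API
deep-sort-realtime   # Deep SORT multi-object tracker
opencv-python        # video capture, drawing, and saving output
```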
Run the notebook (a `.ipynb` file is opened with Jupyter, not executed directly with `python`):

```
jupyter notebook main.ipynb
```

Inside `main.ipynb`:
```
VIDEO_PATH = "data/test/people.mp4"  # Use a video file instead of the webcam
```

- YOLOv11n (nano) is used for speed. You can switch to `yolo11s.pt`, `yolo11m.pt`, etc., for better accuracy.
- The model supports the 80 COCO classes (person, bottle, cell phone, laptop, etc.).
To list all supported classes:

```
print(detector.model.names)
```

Use cases:

- Smart surveillance
- Traffic monitoring
- Industrial automation
- Retail analytics
How it works:

- Detect objects using YOLOv11n
- Extract bounding boxes and classes
- Feed them to Deep SORT for multi-object tracking
- Assign consistent track IDs across frames
- Visualize and save the output
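The ID-assignment idea in the steps above can be illustrated with a deliberately simplified sketch. Real Deep SORT combines a Kalman-filter motion model with appearance embeddings for re-identification; the greedy IoU matcher below (all names hypothetical, not from `tracker.py`) only shows how detections in a new frame get matched to existing tracks so their IDs persist:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def assign_ids(tracks, detections, next_id, iou_thresh=0.3):
    """Greedy matching: reuse a track ID when a detection overlaps an
    existing track above iou_thresh, otherwise start a new track."""
    assignments = {}   # detection index -> track ID
    used = set()
    for i, det in enumerate(detections):
        best_id, best_iou = None, iou_thresh
        for tid, box in tracks.items():
            if tid in used:
                continue
            score = iou(det, box)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is None:            # no overlapping track: new ID
            best_id, next_id = next_id, next_id + 1
        used.add(best_id)
        tracks[best_id] = det          # update the track's last known box
        assignments[i] = best_id
    return assignments, next_id

# Frame 1: two objects appear and receive IDs 1 and 2.
tracks, nid = {}, 1
a1, nid = assign_ids(tracks, [(0, 0, 10, 10), (50, 50, 60, 60)], nid)
# Frame 2: both move slightly; the same IDs are kept.
a2, nid = assign_ids(tracks, [(1, 1, 11, 11), (51, 51, 61, 61)], nid)
print(a1, a2)  # → {0: 1, 1: 2} {0: 1, 1: 2}
```

Deep SORT's appearance features let it keep IDs even through occlusions where a pure IoU matcher would lose them, which is why the project uses it instead of a matcher like this one.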
Author: Parthi