An autonomous drone control system using YOLOv8, MediaPipe, and the DJI Tello SDK. This project enables a Tello drone to detect faces, follow them in real time, and execute specific flight commands via hand gestures.
- Autonomous Face Tracking: Uses a YOLOv8-Face-Detection model for robust face tracking.
- Gesture Control: Command the drone using hand gestures (Takeoff, Flip, 360 Spin, Land, Photo/Video).
- Dynamic Device Selection: Automatically detects and uses NVIDIA GPU (CUDA) if available, otherwise falls back to CPU.
- Auto-Recording: Automatically saves photos and videos of the flight to the `captures/` directory.
- Real-time Visualization: Displays the live drone feed with FPS, face bounding boxes, and gesture status.
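The GPU/CPU selection described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the function name `select_device` and the `ImportError` fallback are assumptions, but the `torch.cuda.is_available()` check is the standard way Ultralytics-based projects pick a device.

```python
def select_device() -> str:
    """Return "cuda" when an NVIDIA GPU is visible to PyTorch, else "cpu"."""
    try:
        import torch  # installed as a dependency of ultralytics
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        # torch missing entirely: fall back to CPU inference
        pass
    return "cpu"

# The resulting string can be passed to a YOLO model,
# e.g. model.to(select_device()) or model.predict(..., device=select_device()).
print(f"Running inference on: {select_device()}")
```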
| Gesture | Action |
|---|---|
| ONE (Pointer) | Start Video Recording |
| TWO (Peace) | Take a Photo |
| THREE | Move Forward |
| L_SHAPE | 360-Degree Spin |
| ROCK | Drone Flip |
| PINKY | Land Drone |
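The project's exact gesture logic is not reproduced here, but a common MediaPipe-based approach classifies gestures by which fingers are extended: a finger counts as "up" when its tip landmark sits above its PIP joint in image coordinates (y grows downward). The landmark indices follow MediaPipe's standard 21-point hand layout; the pattern-to-gesture mapping below is an illustrative guess at how the table's gestures could be encoded.

```python
# Fingertip and PIP-joint indices from the 21-point MediaPipe Hands layout.
TIPS = {"thumb": 4, "index": 8, "middle": 12, "ring": 16, "pinky": 20}
PIPS = {"thumb": 3, "index": 6, "middle": 10, "ring": 14, "pinky": 18}

def fingers_up(landmarks):
    """landmarks: 21 (x, y) points in image coordinates.
    Returns the set of finger names whose tip is above its PIP joint."""
    return {name for name in TIPS
            if landmarks[TIPS[name]][1] < landmarks[PIPS[name]][1]}

# Hypothetical mapping from raised-finger patterns to the gestures above.
GESTURES = {
    frozenset({"index"}): "ONE",
    frozenset({"index", "middle"}): "TWO",
    frozenset({"index", "middle", "ring"}): "THREE",
    frozenset({"thumb", "index"}): "L_SHAPE",
    frozenset({"index", "pinky"}): "ROCK",
    frozenset({"pinky"}): "PINKY",
}

def classify(landmarks):
    return GESTURES.get(frozenset(fingers_up(landmarks)), "UNKNOWN")
```

In the real pipeline, the `(x, y)` points would come from `mediapipe`'s hand-landmark results rather than being constructed by hand.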
- Clone the repository:

  ```
  git clone https://github.com/YOUR_USERNAME/Tello-Autonomous-Face-Tracking.git
  cd Tello-Autonomous-Face-Tracking
  ```

- Install dependencies:

  ```
  pip install -r requirements.txt
  ```

- Connect to Tello Wi-Fi: Turn on your drone and connect your PC to the Tello-XXXXXX Wi-Fi network.

- Run the script:

  ```
  python Face_tracking_yolo.py
  ```
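Face-following of this kind is typically a proportional controller driven by the detected bounding box: the drone yaws to center the face horizontally, climbs or descends to center it vertically, and moves forward or back based on the box's area relative to a target. The function below is a sketch of that idea; the gains, deadband, and target area are illustrative assumptions, not values taken from `Face_tracking_yolo.py`.

```python
def track_face(frame_w, frame_h, box, target_area=0.08,
               k_yaw=0.4, k_ud=0.4, k_fb=300, deadband=0.05):
    """Compute (yaw, up_down, forward_back) velocities in the -100..100
    range expected by the Tello RC command, from a face box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2 / frame_w - 0.5      # horizontal offset, -0.5..0.5
    cy = (y1 + y2) / 2 / frame_h - 0.5      # vertical offset, -0.5..0.5
    area = (x2 - x1) * (y2 - y1) / (frame_w * frame_h)

    def clamp(v):
        return max(-100, min(100, int(v)))

    yaw = clamp(k_yaw * 200 * cx) if abs(cx) > deadband else 0
    up_down = clamp(-k_ud * 200 * cy) if abs(cy) > deadband else 0
    # Face too small -> move forward; too large -> back off.
    forward_back = clamp(k_fb * (target_area - area))
    return yaw, up_down, forward_back
```

With djitellopy, the result would be applied once per frame via `tello.send_rc_control(0, forward_back, up_down, yaw)`.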
- Python: Core logic.
- OpenCV: Image processing and visualization.
- Ultralytics YOLOv8: High-performance face detection.
- MediaPipe: Hand landmark detection and gesture recognition.
- DJI Tello SDK (djitellopy): Drone communication and control.
- Supervision: Utilities for detection handling and annotation.
Developed by Muhammet