The goal of this project is to develop the tools and processes necessary to provide timely and reliable robot-relative game piece detection to the robot controller via WPILib NetworkTables. This is achieved by using an OAK-D camera and running the FRC4607 Spatial AI service on a Raspberry Pi.
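For a sense of what the robot controller sees, here is a minimal sketch of robot-side consumption using the robotpy `ntcore` bindings; the table and topic names (`SpatialAI`, `closest_piece_xyz`) are illustrative assumptions, not the service's actual schema:

```python
import ntcore

# Robot-side sketch: read a camera-relative game piece position off NT4.
# Table/topic names below are hypothetical, not the service's real schema.
inst = ntcore.NetworkTableInstance.getDefault()
table = inst.getTable("SpatialAI")
sub = table.getDoubleArrayTopic("closest_piece_xyz").subscribe([])

def robot_periodic() -> None:
    xyz = sub.get()
    if len(xyz) == 3:
        x, y, z = xyz  # meters, relative to the camera
        # ...feed into targeting / auto-pickup logic...
```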
This system supports two primary modes: Development and Competition.
🔧 Development Mode
- The FRC4607 Spatial AI service connects to NT4 using `host-spatial-ai.local`, starts up on boot, and can be used to view spatial inferencing and recording via NT control
- Or SSH into the Raspberry Pi using `frc4607@frc4607-spatial-ai`, stop the FRC4607 Spatial AI service, and run `spatial-inference` to view the stream
- Or while SSH'd into the Raspberry Pi, run `recorder` to capture video and save it to the attached USB drive for later playback
🏁 Competition Mode
- The FRC4607 Spatial AI service will connect to NT4 using team number 4607, and inferencing starts during bootup
- Recording to the attached USB drive is triggered by start/stop signals received from the robot code
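A minimal sketch of how the service side could select between these two modes with `pyntcore`, assuming a `SPATIAL_AI_MODE` environment variable (referenced by the setup scripts below) and a hypothetical `record` topic for the start/stop signals:

```python
import os
import ntcore

# Service-side sketch; the env values ("comp"/"dev") and topic name are
# assumptions for illustration, not the service's actual configuration.
inst = ntcore.NetworkTableInstance.getDefault()
inst.startClient4("frc4607-spatial-ai")

if os.environ.get("SPATIAL_AI_MODE", "dev") == "comp":
    inst.setServerTeam(4607)                 # competition: resolve the roboRIO by team number
else:
    inst.setServer("host-spatial-ai.local")  # development: resolve the NT4 server by hostname

# Hypothetical boolean topic the robot code toggles to start/stop USB recording.
record_sub = inst.getTable("SpatialAI").getBooleanTopic("record").subscribe(False)

if record_sub.get():
    pass  # ...start or continue writing video to the USB drive...
```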
🐞 Debugging
- Use `replay` on a laptop to play recorded video files and view inferencing results
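`replay` ships with this repo; as a rough illustration of the playback loop, here is a minimal OpenCV sketch (the file path and window handling are assumptions, not the actual `replay` implementation):

```python
import cv2

# Hypothetical playback loop; the real `replay` tool also overlays the
# recorded inferencing results on each frame.
cap = cv2.VideoCapture("recordings/match_001.avi")  # path is an assumption
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("replay", frame)
    if cv2.waitKey(33) & 0xFF == ord("q"):  # ~30 fps; press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```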
- 🔧 Hardware
- 💻 Software
- 🍓 1. Setting Up the Raspberry Pi 4B
- 📸 2. Gathering the Training Images
- 🧹 3. Preparing the Training Images
- 🧠 4. Training the YOLO Model
- 🚀 5. Running the YOLO Model
This project uses the following hardware:
- OAK-D Lite camera:
  - B&W stereo + color camera in one compact device
  - Provides both object detection and 3D location (relative to the camera)
- Raspberry Pi:
  - Hosts the OAK-D Lite camera
  - Runs inference and publishes results to NetworkTables
  - Streams annotated images to CameraServer (see the sketch below)
  - Captures video streams to a connected USB drive
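The CameraServer piece can be as small as this sketch using `robotpy-cscore` (the stream name and resolution are assumptions):

```python
import numpy as np
from cscore import CameraServer

# Publish an MJPEG stream that dashboards can view; name/size are assumptions.
output = CameraServer.putVideo("SpatialAI", 640, 480)

annotated = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for an annotated frame
output.putFrame(annotated)  # call once per processed frame
```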
The following software packages and tools are required:
- 🔗 Python 3 – Core runtime environment on the Raspberry Pi
- 🔗 pyntcore – NetworkTables client library
- 🔗 robotpy-cscore – RobotPy bindings for cscore image processing library
- 🔗 DepthAI & SDK – Interface to the OAK-D camera
- 🔗 Luxonis YOLO Converter – Converts YOLOv5/v8 models to the `.blob` format (blobconverter)
📝 Use a custom Raspberry Pi 5 image with:
- Raspberry Pi OS Lite (64-bit, headless)
- SSH enabled
- Bluetooth, WiFi, and Audio HW disabled
- Several unused services disabled
- This project and its virtual environment set up
The following assumes the setup will be done from a Windows PC.
- Download and write Raspberry Pi OS Lite to a microSD card using Raspberry Pi Imager
  - In "Edit Settings":
    - Hostname: `frc4607-spatial-ai`
    - Username: `frc4607`
    - Password: `frc4607`
  - Under "Services":
    - ✅ Enable SSH and password authentication
- Clone this repo to your PC
- From a command prompt at the project's root, run `setup.bat` to set up the virtual environment
- From a PowerShell prompt on your PC, run the command below to completely set up the Raspberry Pi: `.\setup_pi.ps1 -User "yourname" -Email "[email protected]"`
- For development and debugging, install PowerShell convenience functions by running `.\setup_pi_commands.ps1`. This provides the following one-liners:

| Command | Description |
| --- | --- |
| `servicestatus` | Get FRC4607 Spatial AI service status |
| `servicestart` | Start Spatial AI service |
| `servicestop` | Stop Spatial AI service |
| `servicerestart` | Restart Spatial AI service |
| `viewlogs` | View Spatial AI service logs (last 1000) |
| `followlogs` | View Spatial AI service logs (realtime) |
| `deletelogs` | Delete Spatial AI service logs |
| `copyrecordings` | Copy USB recordings to local PC |
| `fixrecordings` | Fix USB recordings permissions (if needed) |
| `setcompmode` | Set Pi `SPATIAL_AI_MODE` env variable to comp mode |
| `setdevmode` | Set Pi `SPATIAL_AI_MODE` env variable to dev mode |
| `setreslow` | Set Pi `RESOLUTION` env variable to low |
| `setresmed` | Set Pi `RESOLUTION` env variable to med |
| `setreshigh` | Set Pi `RESOLUTION` env variable to high |
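On the Pi, the service can pick up these environment variables at startup; a minimal sketch in which the accepted strings and the resolution mapping are assumptions:

```python
import os

# Read the variables written by setcompmode/setdevmode and setres{low,med,high};
# the accepted values and the resolution mapping below are assumptions.
mode = os.environ.get("SPATIAL_AI_MODE", "dev")
width, height = {
    "low": (640, 400),
    "med": (1280, 800),
    "high": (1920, 1080),
}.get(os.environ.get("RESOLUTION", "med"), (1280, 800))

print(f"mode={mode} resolution={width}x{height}")
```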
🔗 Raspberry Pi Docs
🔗 Embedded Pi Setup Resource
- Use the robot-mounted OAK-D Lite to capture all training data
- Gather the core dataset at the BRIC field (done prior to regional play):
- Vary the lighting, backgrounds, and robot poses
- Supplement the dataset using curated screenshots from match video gathered throughout the competition season
🎯 Goal: Create a focused, high-quality dataset
Remember - “Don't try to boil the ocean.”
The team will use our Roboflow - FRC4607 Workspace to manage the preparation of the training images.
- Annotate – Annotating images consists of drawing bounding boxes and assigning class labels (see 2006-REBUILT Annotation)
- Format – The training images need to be exported in the YOLOv5 model format (see 2006-REBUILT Versions)

- Update the ZIP – Replace the relative paths with absolute ones in `data.yaml`. For example, replace the following:

  ```yaml
  train: ../train/images
  val: ../valid/images
  test: ../test/images
  ```

  with

  ```yaml
  train: /content/<zip_file_name>/train/images
  val: /content/<zip_file_name>/valid/images
  test: /content/<zip_file_name>/test/images
  ```
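The rewrite can also be scripted; a minimal sketch using PyYAML, keeping the `<zip_file_name>` placeholder from above:

```python
import yaml  # PyYAML

ZIP_NAME = "<zip_file_name>"  # replace with your actual export name

# Rewrite data.yaml so each split resolves inside Colab's /content directory,
# e.g. "../train/images" -> "/content/<zip_file_name>/train/images".
with open("data.yaml") as f:
    data = yaml.safe_load(f)

for split in ("train", "val", "test"):
    data[split] = f"/content/{ZIP_NAME}/{data[split].lstrip('./')}"

with open("data.yaml", "w") as f:
    yaml.safe_dump(data, f)
```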
🔗 Ultralytics Data Annotation Guide
We use YOLOv5 to train models on our dataset. As the dataset grows throughout the season, we continuously retrain. Training is done using Google Colab, with outputs saved to Google Drive. The GitHub auto-commit feature uses a GitHub token for uploads.
- Create a folder named `Google Colab` at the root of the Google Drive
- Add a file named `github_token.txt` inside that folder (the contents must be the GitHub token, which the notebook reads back as sketched below)
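In the notebook, the token can be read after mounting Drive; a minimal sketch using Colab's standard mount point (the actual notebook cell may differ):

```python
from google.colab import drive

# Mount Drive at Colab's standard mount point, then read the token
# created in the steps above (path segments match those steps).
drive.mount("/content/drive")
with open("/content/drive/MyDrive/Google Colab/github_token.txt") as f:
    token = f.read().strip()  # used by the auto-commit step to push to GitHub
```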
- Run cells and monitor progress actively
- Once training finishes, the `.pt` file is saved to the `models/` directory
- Convert the PyTorch model using the Luxonis Model Converter

- Download and extract the `results.zip` to the same directory as your `.pt` file

⚠️ Ignore the deprecation warning and do not use Luxonis HubAI for now; the DepthAI SDK v2 still depends on the older format.
🚧 Coming soon: Real-time inference with DepthAI SDK and NetworkTables messaging.
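As a preview of what that could look like, here is a minimal sketch combining the DepthAI SDK with `ntcore`; the model path, topic name, and packet field access are assumptions, not the final implementation:

```python
import ntcore
from depthai_sdk import OakCamera

# NT4 client (see the Competition/Development modes above).
inst = ntcore.NetworkTableInstance.getDefault()
inst.startClient4("frc4607-spatial-ai")
inst.setServerTeam(4607)
pub = inst.getTable("SpatialAI").getDoubleArrayTopic("closest_piece_xyz").publish()

def on_detections(packet) -> None:
    # Publish the first detection's camera-relative position in meters.
    # The packet fields accessed here follow the SDK's spatial detection
    # packets; treat them as an assumption.
    for det in packet.img_detections.detections:
        c = det.spatialCoordinates  # millimeters
        pub.set([c.x / 1000.0, c.y / 1000.0, c.z / 1000.0])
        break

with OakCamera() as oak:
    color = oak.create_camera("color")
    # The model .json comes from the converter's results.zip; path is an assumption.
    nn = oak.create_nn("models/best.json", color, spatial=True)
    oak.callback(nn, callback=on_detections)
    oak.start(blocking=True)
```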