This repository contains the official code for the paper "Egocentric Event-Based Vision for Ping Pong Ball Trajectory Prediction" (paper). The paper has been accepted for publication at the IEEE/CVF Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, 2025. ©IEEE
If you use any of this code, please cite the following publication:
```bibtex
@Article{Alberico25cvprw,
  author  = {Ivan Alberico and Marco Cannici and Giovanni Cioffi and Davide Scaramuzza},
  title   = {Egocentric Event-Based Vision for Ping Pong Ball Trajectory Prediction},
  journal = {Computer Vision and Pattern Recognition Workshop (CVPRW)},
  year    = {2025},
}
```
In this work, we present a real-time egocentric trajectory prediction system for table tennis using event cameras. Unlike standard cameras, which suffer from high latency and motion blur at fast ball speeds, event cameras provide higher temporal resolution, enabling more frequent state updates, greater robustness to outliers, and accurate trajectory predictions using just a short time window after the opponent’s impact. This is the first framework for egocentric table-tennis ball trajectory prediction using event cameras.
Follow these steps to set up the environment and run the project.
- Python 3.8 installed
- Git (optional, for cloning the project)
```bash
git clone https://github.com/uzh-rpg/event_based_ping_pong_ball_trajectory_prediction.git
cd event_based_ping_pong_ball_trajectory_prediction
```
Or download the ZIP and extract it.
```bash
python3.8 -m venv venv
```
- Linux/macOS:

  ```bash
  source venv/bin/activate
  ```

- Windows:

  ```bash
  venv\Scripts\activate
  ```
Make sure you're in the same directory as `requirements.txt`:

```bash
pip install -r requirements.txt
```
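As an optional sanity check (not part of the repository's own instructions), you can confirm that the activated environment really runs Python 3.8 before installing anything else:

```python
# Optional sanity check: confirm the active interpreter is Python 3.8.
import sys

assert sys.version_info[:2] == (3, 8), (
    f"Expected Python 3.8, got {sys.version.split()[0]}"
)
print("Environment OK:", sys.executable)
```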
Before running the pipeline, ensure your input data is organized as follows:
```
dataset_folder/
├── aria_recording.vrs
├── eye_gaze/
│   └── ... (eye gaze data files)
├── slam/
│   └── ... (SLAM/odometry files)
├── your_sequence_folder/
│   ├── config.yml
│   └── ... (other sequence files)
```
- The folder you specify for evaluation (e.g., `./data/game_sequence_test_1`) must be inside a dataset folder that also contains `aria_recording.vrs`, `eye_gaze/`, and `slam/`.
- This structure is required to load all necessary Aria information (a small validation sketch follows this list).
- Example: if you run

  ```bash
  python3.8 ./perception_pipeline_ball.py ./data/game_sequence_test_1
  ```

  then `./data/` should contain `aria_recording.vrs`, `eye_gaze/`, and `slam/`, and `game_sequence_test_1/` should contain `config.yml`.
- You can download the `dataset_folder` from: this link
- The `game_sequence_test_1` example can be unzipped from `data/game_sequence_test_1.zip`.
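Before launching the pipeline, you can optionally verify this layout with a small script. The sketch below is not part of the repository; it only checks the names listed in this README (`aria_recording.vrs`, `eye_gaze/`, `slam/`, `config.yml`), and the pipeline itself may require more:

```python
# Hypothetical helper: verify the dataset layout described above.
# Usage: python check_layout.py ./data/game_sequence_test_1
import sys
from pathlib import Path


def check_dataset_layout(sequence_dir: str) -> None:
    seq = Path(sequence_dir).resolve()
    dataset = seq.parent  # e.g., ./data/ for ./data/game_sequence_test_1

    problems = []
    if not (dataset / "aria_recording.vrs").is_file():
        problems.append("missing aria_recording.vrs in dataset folder")
    for sub in ("eye_gaze", "slam"):
        if not (dataset / sub).is_dir():
            problems.append(f"missing {sub}/ in dataset folder")
    if not (seq / "config.yml").is_file():
        problems.append("missing config.yml in sequence folder")

    if problems:
        sys.exit("Dataset layout check failed:\n  " + "\n  ".join(problems))
    print(f"Layout OK: {seq.name} inside {dataset}")


if __name__ == "__main__":
    check_dataset_layout(sys.argv[1])
```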
- The pipeline uses a pre-trained DCGM model for trajectory prediction.
- Before running, extract `trained_DCGM_model.zip` (provided separately) into your `/data` directory or another location of your choice.
- In your `config.yml` (inside the sequence folder), set the `path_to_DCGM_model` parameter under the `io` section to the extracted model path (see the sketch after this list).
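If you prefer to set this path programmatically rather than editing `config.yml` by hand, here is a minimal sketch, assuming PyYAML is installed and the `io` section exists as described above; both paths are placeholders for your actual locations:

```python
# Minimal sketch: point a sequence's config.yml at the extracted DCGM model.
import yaml

config_path = "./data/game_sequence_test_1/config.yml"  # example path
with open(config_path) as f:
    cfg = yaml.safe_load(f)

# Set io.path_to_DCGM_model to wherever trained_DCGM_model.zip was extracted.
cfg.setdefault("io", {})["path_to_DCGM_model"] = "./data/trained_DCGM_model"

with open(config_path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```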
For every sequence, there is a `config.yml` file inside the sequence folder. You can change parameters and settings for your experiments by editing this config file.
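To get a quick overview of the parameters a given sequence exposes, you can dump its config (again assuming PyYAML; the path is an example):

```python
# Print all parameters of a sequence's config.yml for quick inspection.
import pprint

import yaml

with open("./data/game_sequence_test_1/config.yml") as f:
    pprint.pprint(yaml.safe_load(f))
```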