Egocentric Event-Based Vision for Ping Pong Ball Trajectory Prediction

[Figure: pipeline example]

This repository contains the official code for the paper Egocentric Event-Based Vision for Ping Pong Ball Trajectory Prediction (paper), accepted for publication at the IEEE Computer Vision and Pattern Recognition Workshop (CVPRW), Nashville, 2025. ©IEEE

If you use any of this code, please cite the following publication:

@Article{Alberico25cvprw,
  author  = {Ivan Alberico and Marco Cannici and Giovanni Cioffi and Davide Scaramuzza},
  title   = {Egocentric Event-Based Vision for Ping Pong Ball Trajectory Prediction},
  journal = {Computer Vision and Pattern Recognition Workshop (CVPRW)},
  year    = {2025},
}

📝 Project Overview

In this work, we present a real-time egocentric trajectory prediction system for table tennis using event cameras. Unlike standard cameras, which suffer from high latency and motion blur at fast ball speeds, event cameras provide higher temporal resolution, enabling more frequent state updates, greater robustness to outliers, and accurate trajectory predictions using just a short time window after the opponent’s impact. This is the first framework for egocentric table-tennis ball trajectory prediction using event cameras.


🛠️ Setup Instructions

Follow these steps to set up the environment and run the project.

📦 Requirements

  • Python 3.8 installed
  • Git (optional, for cloning the project)

1. Clone the Repository

git clone https://github.com/uzh-rpg/event_based_ping_pong_ball_trajectory_prediction.git
cd event_based_ping_pong_ball_trajectory_prediction

Or download the ZIP and extract it.

2. Create a Virtual Environment

python3.8 -m venv venv

3. Activate the Virtual Environment

  • Linux/macOS:

    source venv/bin/activate
  • Windows:

    venv\Scripts\activate

4. Install Dependencies

Make sure you're in the same directory as requirements.txt:

pip install -r requirements.txt

5. Run the Project

Input Folder Structure

Before running the pipeline, ensure your input data is organized as follows:

dataset_folder/
├── aria_recording.vrs
├── eye_gaze/
│   └── ... (eye gaze data files)
├── slam/
│   └── ... (SLAM/odometry files)
└── your_sequence_folder/
    ├── config.yml
    └── ... (other sequence files)

  • The folder you specify for evaluation (e.g., ./data/game_sequence_test_1) must be inside a dataset folder that also contains:

    • aria_recording.vrs
    • eye_gaze/
    • slam/
  • This structure is required to load all the necessary Aria information (a quick layout check is sketched after this list).

  • Example:
    If you run

    python3.8 ./perception_pipeline_ball.py ./data/game_sequence_test_1

    then ./data/ should contain aria_recording.vrs, eye_gaze/, and slam/, and game_sequence_test_1/ should contain config.yml.

  • You can download the dataset_folder from: this link

  • The game_sequence_test_1 example can be unzipped from data/game_sequence_test_1.zip.
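As a sanity check before launching, a minimal sketch like the following can verify that the dataset folder has the expected layout. This is a hypothetical helper, not part of the repository; it only checks for the files and folders listed above:

#!/usr/bin/env python3
"""Quick sanity check for the dataset layout described above.
Hypothetical helper, not part of the repository."""
import sys
from pathlib import Path


def check_layout(sequence_dir: str) -> bool:
    seq = Path(sequence_dir)
    dataset = seq.parent  # e.g. ./data/ for ./data/game_sequence_test_1
    ok = True
    # The dataset folder must hold the Aria recording and its metadata.
    for required in ("aria_recording.vrs", "eye_gaze", "slam"):
        if not (dataset / required).exists():
            print(f"missing: {dataset / required}")
            ok = False
    # Each sequence folder needs its own config.yml.
    if not (seq / "config.yml").is_file():
        print(f"missing: {seq / 'config.yml'}")
        ok = False
    return ok


if __name__ == "__main__":
    sys.exit(0 if check_layout(sys.argv[1]) else 1)

Saved as, say, check_dataset_layout.py, it would be invoked as python3.8 check_dataset_layout.py ./data/game_sequence_test_1.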

Model Path Configuration

  • The pipeline uses a pre-trained DCGM model for trajectory prediction.
  • Before running, extract trained_DCGM_model.zip (provided separately) into your ./data directory or another location of your choice.
  • In your config.yml (inside the sequence folder), set the path_to_DCGM_model parameter under the io section to the extracted model path.
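For reference, the io section of config.yml would then contain an entry like the following (the model path shown is only an example; the other keys in the file may differ):

io:
  path_to_DCGM_model: ./data/trained_DCGM_model  # adjust to your extraction path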

Running the Pipeline

For every sequence, there is a config.yml file inside the sequence folder; edit this file to change parameters and settings for your experiments, then launch the pipeline as shown below.
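Start the pipeline by passing the sequence folder to the entry script, as in the earlier example:

python3.8 ./perception_pipeline_ball.py ./data/game_sequence_test_1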
