
Can $\large{\color{Orange}{\textbf{\textsf{Language}}}}$ Beat $\large{\color{MidnightBlue}{\textbf{\textsf{Numerical Regression}}}}$?
Language-Based Multimodal Trajectory Prediction

Inhwan Bae · Junoh Lee · Hae-Gon Jeon
CVPR 2024

Project Page · CVPR Paper · Source Code · Related Works



Traditional vs. Our language-based trajectory prediction, LMTraj.


This repository contains the code for the LMTrajectory framework.
TL;DR: Language model-based, Multimodal input, Multimodal output, Multi-task training approach for Zero-shot and Supervised human trajectory prediction.


💬 LMTrajectory Framework 🗨️

  • Prompt-Based Approach: Moving away from conventional numerical regression models, we reframe the task as prompt-based question answering (see the sketch after this list).
  • Social Reasoning: Beyond physics-based mathematical interaction modeling, our approach leverages language models to incorporate social reasoning.
  • Multi-Task Training: Supplementary tasks enhance the model's ability to grasp higher-level context through multi-task training.
  • Numerical Tokenizer: Our numerical tokenizer effectively separates text and numbers, enabling the model to learn correlations in sequential data.
  • SOTA Performance: Our holistic solution achieves state-of-the-art results on trajectory prediction benchmarks traditionally dominated by numerical regressors.
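To make the prompt-based reframing concrete, here is a minimal sketch of how an observed trajectory might be serialized into a question for a language model. The phrasing and helper function below are illustrative assumptions, not the repository's actual templates.

def trajectory_to_prompt(history, n_futures=20, n_steps=12):
    """Serialize an observed (x, y) history into a question for an LLM (illustrative only)."""
    coords = ", ".join(f"({x:.2f}, {y:.2f})" for x, y in history)
    return (f"A pedestrian has moved through the coordinates {coords} "
            f"over the last {len(history)} time steps. "
            f"Provide {n_futures} plausible continuations of {n_steps} "
            f"(x, y) coordinates each, one continuation per line.")

print(trajectory_to_prompt([(3.10, 4.25), (3.35, 4.60), (3.58, 4.97)]))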

❄️ Zero-Shot Evaluation ❄️

Setup

Environment
All models were tested on Ubuntu 20.04 with Python 3.10 and PyTorch 2.0.1 with CUDA 11.7. Dependencies include Python packages such as scipy, simdkalman and openai==0.28.0.

Dataset
Preprocessed ETH and UCY datasets are released in this repository. The train/validation/test splits are the same as those found in Social-GAN.

Sample
We provide our zero-shot prediction results in the release section. These results include all multimodal trajectories and are available for use in future zero-shot research.

Evaluate LMTraj-ZERO

Preliminary
To evaluate our LMTraj-ZERO model, you will need an OPENAI_API_KEY to access the OpenAI API. Create an API key following the instructions provided by OpenAI, then paste it into ./zero-shot/chatgpt_trajectory_predictor_v3.py at line 25.
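For reference, here is a minimal sketch of how a key is used with the pinned openai==0.28.0 package. The model name and prompt are placeholders; the real request logic lives in chatgpt_trajectory_predictor_v3.py.

import os
import openai  # pinned to openai==0.28.0, which exposes the legacy module-level API

openai.api_key = os.environ["OPENAI_API_KEY"]  # safer than hard-coding the key in the script

# Placeholder request; the repository's script builds the actual prompts.
response = openai.ChatCompletion.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Predict the next 12 (x, y) coordinates."}],
)
print(response["choices"][0]["message"]["content"])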

Prediction
We provide scripts to evaluate our LMTraj-ZERO model on all datasets simultaneously: ./zero-shot/chatgpt_sequential_v3.sh and ./zero-shot/chatgpt_multi_v3.sh. The former evaluates the model step by step; the latter uses a thread pool for faster inference.

# Choose one of the following scripts to evaluate our LMTraj-ZERO model.
./chatgpt_sequential_v3.sh -d <DATASET_ID> -m <LLM_MODEL_ID>
./chatgpt_multi_v3.sh -d <DATASET_ID> -m <LLM_MODEL_ID>

# Supported dataset id: 0 (ETH), 1 (HOTEL), 2 (UNIV), 3 (ZARA1), 4 (ZARA2)
# Supported llm model id: 0 (gpt-3.5-turbo-0301), 1 (gpt-4-0314), 2 (gpt-3.5-turbo-1106), 3 (gpt-4-1106-preview)

# Examples
cd zero-shot
./chatgpt_multi_v3.sh -d 0 -m 3
./chatgpt_multi_v3.sh -d 1 -m 3
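The thread-pool variant simply issues the independent API requests concurrently. A rough sketch of the idea with Python's concurrent.futures, assuming a hypothetical predict_scene helper that wraps one API call:

from concurrent.futures import ThreadPoolExecutor, as_completed

def predict_scene(scene_id):
    """Hypothetical wrapper around a single OpenAI API request for one scene."""
    return scene_id  # placeholder result

# Run many independent requests in parallel, as chatgpt_multi_v3.sh does.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = {pool.submit(predict_scene, sid): sid for sid in range(100)}
    for future in as_completed(futures):
        print(f"scene {futures[future]} done")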

If an error is encountered, your progress is saved. Rerunning the same script skips the parts that completed successfully and regenerates only the paths where issues occurred.

If you want to run the model with custom hyperparameters or with other models offered by OpenAI, use ./zero-shot/chatgpt_trajectory_predictor_v3.py directly instead of the script file.
Warning: A misclick could upgrade you to OpenAI Tier 5, as it did for me :(

Evaluation
As the final step, we provide code to evaluate the trajectories generated by LMTraj-ZERO. First, combine the predicted trajectories into a single JSON file.

python ./zero-shot/chatgpt-fragmented_dump_combiner.py --dataset <DATASET_ID> --model <LLM_MODEL_ID>

# Supported dataset id: 0 (ETH), 1 (HOTEL), 2 (UNIV), 3 (ZARA1), 4 (ZARA2)
# Supported llm model id: 0 (gpt-3.5-turbo-0301), 1 (gpt-4-0314), 2 (gpt-3.5-turbo-1106), 3 (gpt-4-1106-preview)

# Examples
python ./zero-shot/chatgpt-fragmented_dump_combiner.py --dataset 0 --model 3
python ./zero-shot/chatgpt-fragmented_dump_combiner.py --dataset 1 --model 3
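Conceptually, the combiner merges the per-run fragment files into one dump. A minimal sketch of that idea; the file layout below is a hypothetical example, and the real paths are handled by the script.

import glob
import json

combined = {}
for path in sorted(glob.glob("./output/fragment_*.json")):  # hypothetical layout
    with open(path) as f:
        combined.update(json.load(f))  # assumes scene-keyed dictionaries

with open("./output/combined.json", "w") as f:
    json.dump(combined, f)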

Next, evaluate the combined trajectories using ADE and FDE metrics.

python ./zero-shot/compute_ade_fde_from_dump.py --dataset <DATASET_ID> --model <LLM_MODEL_ID>

# Supported dataset id: 0 (ETH), 1 (HOTEL), 2 (UNIV), 3 (ZARA1), 4 (ZARA2)
# Supported llm model id: 0 (gpt-3.5-turbo-0301), 1 (gpt-4-0314), 2 (gpt-3.5-turbo-1106), 3 (gpt-4-1106-preview)

# Examples
python ./zero-shot/compute_ade_fde_from_dump.py --dataset 0 --model 3
python ./zero-shot/compute_ade_fde_from_dump.py --dataset 1 --model 3
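For reference, ADE is the mean L2 distance between prediction and ground truth over all forecast steps, and FDE is the L2 distance at the final step; for multimodal predictions, the best of the K samples is taken (minADE/minFDE). A minimal NumPy sketch, independent of the repository's own implementation:

import numpy as np

def ade_fde(pred, gt):
    """pred: (K, T, 2) multimodal samples, gt: (T, 2) ground truth.
    Returns best-of-K (minADE, minFDE)."""
    dist = np.linalg.norm(pred - gt[None], axis=-1)  # (K, T) per-step L2 errors
    ade = dist.mean(axis=1).min()                    # average over T, best of K
    fde = dist[:, -1].min()                          # final step, best of K
    return ade, fde

pred = np.random.rand(20, 12, 2)   # 20 samples, 12 future steps
gt = np.random.rand(12, 2)
print(ade_fde(pred, gt))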

Results

| LMTraj-ZERO | ETH | HOTEL | UNIV | ZARA1 | ZARA2 | AVG |
| --- | --- | --- | --- | --- | --- | --- |
| gpt-3.5-turbo-0301 | 1.0668 / 1.8241 | 0.4229 / 0.6538 | 0.5570 / 0.9836 | 0.4715 / 0.9073 | 0.3878 / 0.7056 | 0.5812 / 1.0149 |
| gpt-3.5-turbo-1106 | – | – | – | – | – | 0.4713 / 0.6297 |
| gpt-4-0314 | 0.7978 / 1.6446 | 0.2001 / 0.3658 | 0.3709 / 0.7675 | 0.3268 / 0.6638 | 0.2386 / 0.4998 | 0.3868 / 0.7883 |
| gpt-4-1106-preview | – | – | – | – | – | 0.1757 / 0.3279 |

Each cell reports ADE / FDE.

Evaluate Algorithmic Models

We provide four algorithmic models for comparison on the zero-shot trajectory prediction task, available in ./zero-shot/algorithmic_model_benchmark.py. The source code supports four extrapolation methods: stop, linear extrapolation, cubic extrapolation, and a Kalman filter.

python ./zero-shot/algorithmic_model_benchmark.py --model <MODEL_TYPE>

# Examples
python ./zero-shot/algorithmic_model_benchmark.py --model stop
python ./zero-shot/algorithmic_model_benchmark.py --model linear
python ./zero-shot/algorithmic_model_benchmark.py --model cubic
python ./zero-shot/algorithmic_model_benchmark.py --model kalman
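For intuition, the linear baseline is just constant-velocity extrapolation. A minimal NumPy sketch; the function name and shapes are illustrative, not the script's API:

import numpy as np

def linear_extrapolation(obs, pred_len=12):
    """Constant-velocity baseline: continue the last observed displacement.
    obs: (T_obs, 2) observed positions; returns (pred_len, 2) future positions."""
    velocity = obs[-1] - obs[-2]                 # last-step displacement
    steps = np.arange(1, pred_len + 1)[:, None]  # (pred_len, 1) step indices
    return obs[-1] + steps * velocity

obs = np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 0.4]])
print(linear_extrapolation(obs, pred_len=4))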

🔥 Supervised Training & Evaluation 🔥

Setup

Environment
All models were tested on Ubuntu 20.04 with Python 3.10 and PyTorch 2.0.1 with CUDA 11.7. Dependencies include Python packages such as transformers, accelerate, nltk and sentencepiece.

Dataset
Preprocessed ETH and UCY datasets are released in this repository. The train/validation/test splits are the same as those found in Social-GAN.

Preliminary

We provide preprocessed datasets, pretrained tokenizers, and models for training and evaluation. Download these files and extract them into the root folder of the project. This will allow you to skip preprocessing and evaluate our LMTraj-SUP model immediately.

Alternatively, we provide instructions for preprocessing the data and training the models yourself. Follow these steps:

Dataset Preprocessing
To maximize GPU utilization and reduce training time, we preprocess the training data. First, generate text descriptions of the dataset environment using the image captioning model located at ./model/imagemodel.py. This script automatically loads the pretrained model and saves the captions in the ./datasets/image/ folder.

python ./model/imagemodel.py
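For intuition, a captioning call with an off-the-shelf model from transformers looks roughly like the sketch below. The BLIP checkpoint and image path are assumptions for illustration; imagemodel.py loads its own pretrained model automatically.

from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Assumed off-the-shelf captioner; the repository's imagemodel.py may use a different model.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("./datasets/image/eth.png").convert("RGB")  # hypothetical path
inputs = processor(images=image, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(caption_ids[0], skip_special_tokens=True))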

Next, to preprocess all datasets simultaneously, run the ./script/preprocessor.sh script. This process takes about 2 hours and generates preprocessed JSON files in the ./datasets/preprocessed/ folder.

./script/preprocessor.sh

If you prefer to preprocess the datasets individually, use ./utils/preprocessor.py instead of the script.

python ./utils/preprocessor.py --dataset <DATASET_NAME> --phase <TRAINING_PHASE>

# Supported dataset name: eth, hotel, univ, zara1, zara2
# Supported training phase: train, val, test

# Examples
python ./utils/preprocessor.py --dataset eth --phase train
python ./utils/preprocessor.py --dataset hotel --phase val
python ./utils/preprocessor.py --dataset univ --phase test

Tokenizer Training
Next, train the tokenizer to optimize it for numerical data. You can train the tokenizer yourself using ./utils/tokenizer.py. This process requires a system with more than 2TB of RAM and takes approximately 12 hours per dataset.

python ./utils/tokenizer.py --dataset <DATASET_NAME> --model <TOKENIZER_MODEL> --metric <PIXEL_OR_METER>

# Supported dataset name: eth, hotel, univ, zara1, zara2
# Supported tokenizer model type: char, word, unigram, bpe
# Supported metric type: pixel, meter

# Examples
python ./utils/tokenizer.py --dataset eth --model bpe --metric pixel
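Under the hood, this amounts to standard subword training on serialized coordinates. A minimal sentencepiece sketch for the BPE case, with a hypothetical corpus path and vocabulary size; the real preparation is done by tokenizer.py:

import sentencepiece as spm

# Hypothetical corpus of serialized coordinates; tokenizer.py builds the real one.
spm.SentencePieceTrainer.train(
    input="./datasets/preprocessed/eth_train_corpus.txt",
    model_prefix="eth_bpe_pixel",
    model_type="bpe",
    vocab_size=1000,
)

sp = spm.SentencePieceProcessor(model_file="eth_bpe_pixel.model")
print(sp.encode("(3.10, 4.25) (3.35, 4.60)", out_type=str))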

Train LMTraj-SUP

To train our LMTrajectory model, you will use ./trainval.py. We leverage the accelerate library to maximize training efficiency. First, configure your system by running accelerate config in the shell. You can find detailed instructions in the Accelerate documentation.

To train the model, use the following command:

accelerate launch trainval.py \
    --cfg ./config/config-pixel.json \
    --dataset eth \
    --tag LMTraj-SUP-eth

If you want to train the LMTraj-SUP model on both the ETH and UCY datasets simultaneously, we provide a bash script:

./script/trainval_all.sh

The training process uses 8x NVIDIA RTX 4090 GPUs at 100% utilization and takes approximately 2 to 4 hours. After training, select the best weight file from the checkpoint epochs.

Evaluate LMTraj-SUP

Finally, to evaluate our LMTrajectory model, use ./trainval.py again with the --test flag. You can conduct both stochastic and deterministic trajectory predictions using a single pretrained weight file.

For stochastic trajectory prediction, use:

accelerate launch trainval.py \
    --cfg ./config/config-pixel.json \
    --dataset eth \
    --tag LMTraj-SUP-eth \
    --test

For deterministic trajectory prediction, use:

accelerate launch trainval.py \
    --cfg ./config/config-pixel-deterministic.json \
    --dataset eth \
    --tag LMTraj-SUP-eth \
    --test

To evaluate our LMTraj-SUP model on both the ETH and UCY datasets simultaneously, we provide the following bash scripts for simplified execution:

./script/eval_all.sh
./script/eval_all_deterministic.sh

Results

| LMTraj-SUP | ETH | HOTEL | UNIV | ZARA1 | ZARA2 | AVG |
| --- | --- | --- | --- | --- | --- | --- |
| Deterministic w/ image | 0.6549 / 1.0377 | 0.2640 / 0.4583 | 0.5715 / 1.1579 | 0.5119 / 1.0066 | 0.3802 / 0.7408 | 0.4765 / 0.8803 |
| Deterministic w/o image | 0.6724 / 1.2388 | 0.2498 / 0.4331 | 0.5723 / 1.1612 | 0.5090 / 1.0018 | 0.3827 / 0.7471 | 0.4772 / 0.9164 |
| Stochastic w/ image | 0.4087 / 0.5011 | 0.1200 / 0.1558 | 0.2178 / 0.3440 | 0.1992 / 0.3183 | 0.1748 / 0.2720 | 0.2241 / 0.3182 |
| Stochastic w/o image | 0.4106 / 0.6188 | 0.1212 / 0.1595 | 0.2188 / 0.3465 | 0.2018 / 0.3225 | 0.1756 / 0.2760 | 0.2256 / 0.3447 |

Each cell reports ADE / FDE.

📖 Citation

If you find this code useful for your research, please cite our trajectory prediction papers :)

💬 LMTrajectory (CVPR'24) 🗨️ | 1️⃣ SingularTrajectory (CVPR'24) 1️⃣ | 🌌 EigenTrajectory (ICCV'23) 🌌 | 🚩 Graph‑TERN (AAAI'23) 🚩 | 🧑‍🤝‍🧑 GP‑Graph (ECCV'22) 🧑‍🤝‍🧑 | 🎲 NPSN (CVPR'22) 🎲 | 🧶 DMRGCN (AAAI'21) 🧶

@inproceedings{bae2024lmtrajectory,
  title={Can Language Beat Numerical Regression? Language-Based Multimodal Trajectory Prediction},
  author={Bae, Inhwan and Lee, Junoh and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
More Information (Click to expand)
@inproceedings{bae2024singulartrajectory,
  title={SingularTrajectory: Universal Trajectory Predictor Using Diffusion Model},
  author={Bae, Inhwan and Park, Young-Jae and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}

@inproceedings{bae2023eigentrajectory,
  title={EigenTrajectory: Low-Rank Descriptors for Multi-Modal Trajectory Forecasting},
  author={Bae, Inhwan and Oh, Jean and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}

@article{bae2023graphtern,
  title={A Set of Control Points Conditioned Pedestrian Trajectory Prediction},
  author={Bae, Inhwan and Jeon, Hae-Gon},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2023}
}

@inproceedings{bae2022gpgraph,
  title={Learning Pedestrian Group Representations for Multi-modal Trajectory Prediction},
  author={Bae, Inhwan and Park, Jin-Hwi and Jeon, Hae-Gon},
  booktitle={Proceedings of the European Conference on Computer Vision},
  year={2022}
}

@inproceedings{bae2022npsn,
  title={Non-Probability Sampling Network for Stochastic Human Trajectory Prediction},
  author={Bae, Inhwan and Park, Jin-Hwi and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}

@article{bae2021dmrgcn,
  title={Disentangled Multi-Relational Graph Convolutional Network for Pedestrian Trajectory Prediction},
  author={Bae, Inhwan and Jeon, Hae-Gon},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2021}
}