Gaurvi Goyal1 Pham Cong Thuong1 Arren Glover1 Masayoshi Mizuno2 Chiara Bartolozzi1
1Istituto Italiano di Tecnologia 2Sony Interactive Entertainment Inc.
This repository is the official implementation of the paper "GraphEnet: Event-driven Human Pose Estimation with a Graph Neural Network", which was presented at the 2nd Workshop on Neuromorphic Vision: Advantages and Applications of Event Cameras (NeVi2025), International Conference on Computer Vision (ICCV).
To use this setup:
- Copy the data into `<<dataDirectory>>/raw`.
- Open `main.py` and change the `data_path` variable to `<<dataDirectory>>`.
- Run `main.py`.
To install PyG (PyTorch Geometric), follow the official installation instructions.
For the setup, please install the following libraries:
```
torch pytorch-lightning torch_geometric matplotlib opencv_python hpe_core albumentations torchvision tensorboard torch-cluster
```
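For convenience, the list above can also be kept in a `requirements.txt` file (package names copied from the list; no versions are pinned in the source, so add version constraints as needed for your environment):

```
torch
pytorch-lightning
torch_geometric
torch-cluster
matplotlib
opencv_python
albumentations
torchvision
tensorboard
hpe_core
```

It can then be installed in one step with `pip install -r requirements.txt`.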
- Please download the eH3.6M and DHP19 datasets and organise the downloaded files as follows:
```
/data
├── datasets
│   ├── eH36M
│   │   ├── EV2          # Contains ground-truth data
│   │   ├── ledge
│   │   │   ├── raw      # Contains validation data
│   ├── dhp19
│   │   ├── EV2          # Contains ground-truth data
│   │   ├── ledge
│   │   │   ├── raw      # Contains validation data
```
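The layout above can be created up front with a short shell sketch. `DATA_ROOT` is an assumption here (shown relative for portability); point it at the `/data` root from the tree above before copying the downloaded files in:

```shell
# Create the expected dataset layout: EV2 holds ground-truth data,
# ledge/raw holds the validation data to be copied in.
# DATA_ROOT is an assumption -- adjust it to your own data location.
DATA_ROOT=./data
for ds in eH36M dhp19; do
    mkdir -p "$DATA_ROOT/datasets/$ds/EV2" "$DATA_ROOT/datasets/$ds/ledge/raw"
done
```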
- To train the GraphEnet model on eH3.6M or DHP19, run the following command (shown here for DHP19):
```
python3 main.py --node_loss_weight 0.001 --epoch 50 --data_path /data/datasets/dhp19/ledge/ --exp two_weights --dataset dhp19
```
- To evaluate a single checkpoint, run the following command, using `--ckpt_path` to specify the checkpoint to be evaluated:
```
python3 predict.py --data_path /data/datasets/xxxx/ledge/ --ckpt_path path/to/checkpoints/model
```
Two models are available, one per dataset, in the `ckpts` folder of the repository. For example:
```
python3 predict.py --data_path /data/datasets/dhp19/ledge/ --ckpt_path ckpts/single_weight_dhp19.ckpts --arch single_weight --dataset dhp19 --visualise pose
```