This code runs experiments for real-time action detection in motion capture data, implemented with LSTMs. It reproduces the experiments presented in the following paper:
Carrara, F., Elias, P., Sedmidubsky, J., & Zezula, P. (2019).
LSTM-based real-time action detection and prediction in human motion streams.
Multimedia Tools and Applications, 78(19), 27309-27331.
Experiments are conducted on the HDM05 dataset. NOTE: a few sequences of the HDM05 dataset are partially missing labels; the videos above show two sequences of this kind. The prediction of our model is on top, while the (wrong) ground truth is on the bottom.
- Download the preprocessed data archive and extract it in the repo root folder: `hdm05-mocap-data.tar.gz` (~1GB; the original HDM05 dataset is available here).
- Run `parse_HDM05_data.sh` to generate the data splits (both data-preparation steps are sketched after this list).
- See `train_classify.py` and `train_segment.py` if you want to train single models for classification or segmentation, respectively. To train all the segmentation models of the paper in batch, or to check out some example invocations, see `train_segmentation_models.sh` (a sketch follows this list).
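A minimal sketch of the data-preparation steps above, assuming the archive has already been downloaded into the repo root and that `parse_HDM05_data.sh` takes no arguments:

```bash
# Extract the preprocessed HDM05 data (~1GB) in the repo root.
tar -xzf hdm05-mocap-data.tar.gz

# Generate the data splits (use `bash parse_HDM05_data.sh` if the script is not executable).
./parse_HDM05_data.sh
```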
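Likewise, a hedged sketch of the training entry points. The exact command-line flags are defined by the scripts themselves, so the `--help` calls below assume standard argparse-style interfaces:

```bash
# Inspect the options of the single-model training scripts
# (assumes argparse-style CLIs; check the scripts for the actual flags).
python train_classify.py --help
python train_segment.py --help

# Train all the segmentation models of the paper in batch.
bash train_segmentation_models.sh
```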