Project Code for AML 2022 @ Politecnico di Torino

Starting code for the course project of Advanced Machine Learning in Multimodal Egocentric Vision.

Team members

Before running any command below, make sure the configuration files match the setup you want to run; overrides are passed on the command line as `key=value` pairs, as sketched below.
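The `key=value` syntax in the commands suggests dotlist-style overrides merged into a YAML config. Below is a minimal sketch of that mechanism using OmegaConf; the repo's own argument parsing may differ.

```python
# Minimal sketch of merging `key=value` command-line overrides into a YAML
# config with OmegaConf; an assumption, the repo's own parsing may differ.
import sys

from omegaconf import OmegaConf


def load_config(default_path="configs/default.yaml"):
    cli = OmegaConf.from_dotlist(sys.argv[1:])        # e.g. dataset.shift=D1-D1
    base = OmegaConf.load(cli.pop("config", default_path))
    return OmegaConf.merge(base, cli)


if __name__ == "__main__":
    cfg = load_config()
    print(OmegaConf.to_yaml(cfg))
```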

Feature Extraction

1. Extract EK-RGB features

```bash
python save_feat.py config=configs/I3D_save_feat.yaml dataset.shift=D1-D1 name=save_feat_I3D_EK
```
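save_feat.py dumps clip-level features to disk for the classifiers trained later. A minimal sketch of loading such a dump follows; the file path and the `features` / `features_RGB` keys are assumptions about the saved format, not the repo's guaranteed layout.

```python
# Minimal sketch: load features dumped by save_feat.py for downstream training.
# The pickle path and dictionary keys are assumptions about the saved format.
import pickle

import torch


def load_saved_features(path):
    with open(path, "rb") as f:
        saved = pickle.load(f)
    # Stack one feature tensor per clip: (num_clips, ..., feat_dim).
    return torch.stack(
        [torch.as_tensor(item["features_RGB"]) for item in saved["features"]]
    )


if __name__ == "__main__":
    feats = load_saved_features("saved_features/save_feat_I3D_EK_D1-D1.pkl")
    print(feats.shape)
```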

2. Resampling EMG for LSTM

```bash
python EMG/EMG_preprocessing.py
```
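This step brings every EMG segment to a fixed number of timesteps so the LSTM sees equal-length sequences. A minimal sketch of that kind of resampling, with assumed channel count and target length:

```python
# Minimal sketch of fixed-length EMG resampling (shapes and lengths assumed).
import numpy as np
from scipy.signal import resample


def resample_emg(segment, target_len=100):
    """Resample a (timesteps, channels) EMG segment to target_len timesteps."""
    return resample(segment, target_len, axis=0)


if __name__ == "__main__":
    raw = np.random.randn(1578, 8)      # e.g. 8 forearm EMG channels
    print(resample_emg(raw).shape)      # (100, 8)
```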

3. Extract ActionSense RGB+EMG features

```bash
python save_feat_action-net.py config=configs/I3D_save_feat.yaml dataset.shift=D1-S04 name=save_feat_I3D_AS
```
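For ActionSense, RGB clips and EMG readings have to be paired by timestamp before features are extracted. A minimal sketch of selecting the EMG samples that fall inside an action's time window (field names and sampling rate are assumptions):

```python
# Minimal sketch of pairing an action's EMG readings with its RGB clip by
# timestamp; field names and sampling details are assumptions.
import numpy as np


def emg_window(emg_timestamps, emg_values, start_t, stop_t):
    """Return the EMG samples whose timestamps fall inside [start_t, stop_t]."""
    mask = (emg_timestamps >= start_t) & (emg_timestamps <= stop_t)
    return emg_values[mask]


if __name__ == "__main__":
    ts = np.linspace(0.0, 10.0, 2000)    # fake 200 Hz timestamps
    vals = np.random.randn(2000, 8)      # fake 8-channel EMG stream
    print(emg_window(ts, vals, start_t=2.5, stop_t=4.0).shape)
```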

Training

1. Fully Connected Classifier

```bash
python train_classifier.py name=classifierD1 dataset.shift=D1-D1
```

Uses the Classifier2 model set in configs/default.yaml.
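A minimal sketch of what a fully connected classifier over the extracted features could look like; layer sizes and the number of classes are assumptions, not the repo's Classifier2.

```python
# Minimal sketch of a fully connected classifier over 1024-d clip features
# (hidden size and class count are assumptions, not the repo's Classifier2).
import torch
from torch import nn


class FCClassifier(nn.Module):
    def __init__(self, feat_dim=1024, hidden=512, num_classes=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        # x: (batch, num_clips, feat_dim) -> average clips, then classify.
        return self.net(x.mean(dim=1))


if __name__ == "__main__":
    print(FCClassifier()(torch.randn(4, 5, 1024)).shape)  # (4, 8)
```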

2. TRN Classifier

```bash
python train_TRN.py name=classifierD1 dataset.shift=D1-D1
```

Uses the TRNClassifier model set in configs/default.yaml.
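A minimal sketch of a single-scale Temporal Relation Network head over per-frame features, in the spirit of TRN; frame count, hidden size, and class count are assumptions, not the repo's TRNClassifier.

```python
# Minimal sketch of a single-scale TRN-style head over per-frame features
# (frame count, sizes and class count are assumptions).
import itertools

import torch
from torch import nn


class TRNHead(nn.Module):
    def __init__(self, feat_dim=1024, num_frames=5, hidden=256, num_classes=8):
        super().__init__()
        # Every 2-frame combination defines one temporal relation.
        self.pairs = list(itertools.combinations(range(num_frames), 2))
        self.g = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        # x: (batch, num_frames, feat_dim); sum the relation scores over pairs.
        rels = [self.g(torch.cat([x[:, i], x[:, j]], dim=1)) for i, j in self.pairs]
        return torch.stack(rels, dim=0).sum(dim=0)


if __name__ == "__main__":
    print(TRNHead()(torch.randn(4, 5, 1024)).shape)  # (4, 8)
```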

3. EMG-LSTM Classifier

```bash
python EMG/EMG_train.py
```
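A minimal sketch of an LSTM classifier over the resampled EMG segments; channel count, hidden size, and class count are assumptions.

```python
# Minimal sketch of an LSTM classifier for fixed-length EMG segments
# (channel count, hidden size and class count are assumptions).
import torch
from torch import nn


class EMGLSTM(nn.Module):
    def __init__(self, channels=8, hidden=128, num_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):
        # x: (batch, timesteps, channels); classify from the last hidden state.
        _, (h_n, _) = self.lstm(x)
        return self.fc(h_n[-1])


if __name__ == "__main__":
    print(EMGLSTM()(torch.randn(4, 100, 8)).shape)  # (4, 8)
```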

4. EMG-CNN Classifier

```bash
python EMG_CNN.py
```
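A minimal sketch of a 1D convolutional classifier over EMG segments; the layer configuration and class count are assumptions.

```python
# Minimal sketch of a 1D CNN classifier for EMG segments
# (kernel sizes, channel counts and class count are assumptions).
import torch
from torch import nn


class EMGCNN(nn.Module):
    def __init__(self, channels=8, num_classes=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):
        # x: (batch, timesteps, channels) -> (batch, channels, timesteps) for Conv1d.
        return self.fc(self.conv(x.transpose(1, 2)).squeeze(-1))


if __name__ == "__main__":
    print(EMGCNN()(torch.randn(4, 100, 8)).shape)  # (4, 8)
```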

5. Multimodal Classifier

```bash
python train_multimodal.py name=classifierS04 dataset.shift=S04-S04
```

Uses the configuration in configs/multi_modalities.yaml.
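A minimal sketch of late fusion over per-modality feature vectors (RGB and EMG); feature dimensions and class count are assumptions, not the repo's multimodal model.

```python
# Minimal sketch of late fusion of RGB and EMG feature vectors
# (feature dimensions, hidden size and class count are assumptions).
import torch
from torch import nn


class LateFusionClassifier(nn.Module):
    def __init__(self, rgb_dim=1024, emg_dim=128, hidden=256, num_classes=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(rgb_dim + emg_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, rgb_feat, emg_feat):
        # Concatenate the two modality embeddings and classify the fused vector.
        return self.net(torch.cat([rgb_feat, emg_feat], dim=1))


if __name__ == "__main__":
    print(LateFusionClassifier()(torch.randn(4, 1024), torch.randn(4, 128)).shape)
```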
