😊 init
- `mkdir best_model predicts`
- `mkdir ./data/ED/comet`, then download the COMET checkpoint into the `comet` directory.
- Download GloVe vectors with `wget http://nlp.stanford.edu/data/glove.6B.zip` and put them under `./vectors`.

After these steps, `best_model/`, `predicts/`, `data/ED/comet/`, and `vectors/` should all exist.
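
Putting the init steps together, a minimal setup script might look like the following (the COMET checkpoint URL is not given above, so only its directory is created; unzipping the GloVe archive is an assumption):

```bash
# Output directories for model checkpoints and generated predictions
mkdir -p best_model predicts

# Directory for the COMET checkpoint; place the downloaded checkpoint here
mkdir -p ./data/ED/comet

# Download GloVe 6B vectors and unpack them into ./vectors
mkdir -p ./vectors
wget http://nlp.stanford.edu/data/glove.6B.zip -P ./vectors
unzip ./vectors/glove.6B.zip -d ./vectors
```
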
🔥 code params explanation
| params | description |
|---|---|
| model | which model to run; one of `trans`, `mult`, `empdg`, `mime`, `moel`, `kemp`, `cem`, `emf` |
| code_check | `store_true` flag; runs a fast check that the program is runnable on your machine |
| devices | value assigned to `os.environ['CUDA_VISIBLE_DEVICES']` |
| mode | one of `train_only`, `train_and_test`, `test_only`; runs part of or the whole pipeline |
| max_epoch | maximum number of epochs to train the model |
| emotion_emb_type | one of `origin`, `coarse`, `contrastive`; selects the emotion embedding variant (see the paper for details) |
| batch_size | batch size for train, valid, and test |
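
For example, a quick sanity-check run combining these flags might look like this (the exact values are only an illustration):

```bash
# Fast runnability check: trans model on GPU 0 with origin emotion embeddings;
# --code_check is a store_true flag, so it takes no value
python train.py --model trans --mode train_and_test --emotion_emb_type origin \
    --batch_size 32 --max_epoch 128 --devices 0 --code_check
```
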
🐶 run experiment
- trans training: `nohup python train.py --model trans --mode train_and_test --batch_size 32 --max_epoch 128 --devices 0 > trans.log &`
- mult training: `nohup python train.py --model mult --mode train_and_test --batch_size 32 --max_epoch 128 --devices 1 > mult.log &`
- empdg training: `nohup python train.py --model empdg --mode train_and_test --batch_size 32 --max_epoch 128 --devices 2 > empdg.log &`
- mime training: `nohup python train.py --model mime --mode train_and_test --batch_size 32 --max_epoch 128 --devices 3 > mime.log &`
- moel training: `nohup python train.py --model moel --mode train_and_test --batch_size 32 --max_epoch 128 --devices 4 > moel.log &`
- cem training: `nohup python train.py --model cem --mode train_and_test --batch_size 32 --max_epoch 128 --devices 5 > cem.log &`
- kemp training: `nohup python train.py --model kemp --mode train_and_test --batch_size 32 --max_epoch 128 --devices 6 > kemp.log &`
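
Equivalently, a short loop can launch all seven runs with one GPU per model (a sketch that assumes the same model-to-GPU assignment as above):

```bash
# Launch each model in the background on its own GPU, logging to <model>.log
models=(trans mult empdg mime moel cem kemp)
for i in "${!models[@]}"; do
    m="${models[$i]}"
    nohup python train.py --model "$m" --mode train_and_test \
        --batch_size 32 --max_epoch 128 --devices "$i" > "$m.log" 2>&1 &
done
```
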
🔎 todo list
- run all models
- add more command-line params for finer control
- run on other datasets

