
EMOTION2VEC

Official PyTorch code for extracting features and training downstream models with
emotion2vec: Self-Supervised Pre-Training for Speech Emotion Representation

emotion2vec logo (generated by DALL·E 3)


Guides

emotion2vec is the first universal speech emotion representation model. Through self-supervised pre-training, emotion2vec can extract emotion representations across different tasks, languages, and scenarios.

Model

The paper is coming soon.

Performance

Performance on IEMOCAP

emotion2vec achieves SOTA with only linear layers on the mainstream IEMOCAP dataset.

Performance on other languages

emotion2vec outperforms state-of-the-art SSL models on multiple languages (Mandarin, French, German, Italian, etc.). Refer to the paper for more details.

Performance on other speech emotion tasks

Refer to the paper for more details.

Extract features

Download extracted features

We provide the extracted features of the popular emotion dataset IEMOCAP. The features are extracted from the last layer of emotion2vec and stored in .npy format; the frame rate of the extracted features is 50 Hz. The utterance-level features are computed by averaging the frame-level features.

All wav files are extracted from the original dataset to support diverse downstream tasks. If you want to train with the standard 5531 utterances for 4-class emotion classification, please refer to iemocap_downstream.
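
For reference, here is a minimal sketch of loading one of the provided frame-level .npy files and mean-pooling it into an utterance-level vector (the file name below is a placeholder, not a file from the release):

import numpy as np

# Placeholder path: substitute one of the downloaded IEMOCAP feature files.
frame_feats = np.load("utterance_example.npy")  # shape: (num_frames, feature_dim), 50 frames per second

# Utterance-level feature: average the frame-level features over the time axis.
utt_feat = frame_feats.mean(axis=0)
print(frame_feats.shape, utt_feat.shape)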

Extract features from your dataset

The minimum environment requirements are python>=3.8 and torch>=1.13. Our testing environment is python=3.8 and torch=2.0.1.

  1. Clone the repos and install fairseq:
pip install fairseq
git clone https://github.com/ddlBoJack/emotion2vec.git
  2. Download the emotion2vec checkpoint from:
  3. Modify and run scripts/extract_features.sh (a rough sketch of what such a script does follows below).
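
For orientation, the sketch below shows the kind of pipeline scripts/extract_features.sh drives: load the pre-trained checkpoint, read a 16 kHz waveform, and pull frame-level features from the model. It is not the repository's actual script; the checkpoint file name, the fairseq loading call, and the extract_features interface are assumptions, so follow the real script for the exact API.

import torch
import torchaudio
from fairseq import checkpoint_utils

# Assumed file names: replace with the downloaded checkpoint and your own audio.
CKPT = "emotion2vec_base.pt"   # hypothetical checkpoint path
WAV = "example.wav"            # 16 kHz mono speech

# Load the pre-trained model through fairseq (assumed fairseq-style checkpoint).
models, cfg, task = checkpoint_utils.load_model_ensemble_and_task([CKPT])
model = models[0].eval()

# Read the waveform and resample to 16 kHz if necessary.
wav, sr = torchaudio.load(WAV)
if sr != 16000:
    wav = torchaudio.functional.resample(wav, sr, 16000)

with torch.no_grad():
    # Assumed data2vec-style call; the repository's model may expose a different
    # method or return a dict instead of a tensor.
    feats = model.extract_features(wav, padding_mask=None, mask=False)

# feats holds frame-level features (50 Hz); average over the time axis for an
# utterance-level representation, as in the provided IEMOCAP features.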

Training your downstream model

We provide training scripts for the IEMOCAP dataset in iemocap_downstream. You can modify the scripts to train your downstream model on other datasets.
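
As a rough illustration of the linear-probe setup reported above (this is not the code in iemocap_downstream; the feature dimension, label set, and random data are assumptions for the sketch), a downstream classifier on frozen utterance-level emotion2vec features can be a single linear layer:

import torch
import torch.nn as nn

# Assumptions: 768-dim utterance-level features and the 4 IEMOCAP classes.
FEAT_DIM, NUM_CLASSES = 768, 4

# Hypothetical data: replace with utterance-level features loaded from the .npy
# files and the matching emotion labels.
x = torch.randn(32, FEAT_DIM)
y = torch.randint(0, NUM_CLASSES, (32,))

probe = nn.Linear(FEAT_DIM, NUM_CLASSES)            # a single linear layer on frozen features
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(probe(x), y)
    loss.backward()
    opt.step()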
