
Audio Spectrogram Transformer with LoRA

- Introduction
- Getting Started
- Citation
- Contact

Introduction

This repository hosts an implementation of the Audio Spectrogram Transformer (AST) based on the official implementation released by the authors of the paper "Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers" -- PETL_AST. The paper introduces Parameter-Efficient Transfer Learning (PETL) methods tailored to the Audio Spectrogram Transformer architecture. In this repository, particular emphasis is placed on the LoRA (Low-Rank Adaptation) adapter, which demonstrated the best results.

After all, what is LoRA?

Low-Rank Adaptation (LoRA) is one of the most widely used PETL methods. It provides an approach to fine-tuning a model for downstream tasks and enables transfer learning with far fewer trainable parameters and computational resources.

Context

- Neural networks use dense layers with weight matrices for computation.
- These weight matrices are typically "full-rank" (they use all of their dimensions).

Solution

- Pre-trained models have a low "intrinsic dimension", meaning they may not need full-rank weight updates to adapt to a new task.
- The pre-trained parameters of the original model (W) are frozen; these weights are not modified during fine-tuning.
- A new set of parameters, W_A and W_B (low-rank weight matrices), is added to the network, with dimensions d×r and r×d (r is the low-rank dimension, d is the original dimension). A minimal sketch of this idea follows the list.
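
To make the bullets above concrete, here is a minimal LoRA linear layer written in PyTorch. It is an illustrative sketch of the general technique, not the exact module used in this repository; the class name `LoRALinear` and the `rank`/`alpha` defaults are chosen for the example.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Minimal LoRA wrapper around a frozen linear layer (illustrative sketch)."""

    def __init__(self, base_layer: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_layer
        # Freeze the pre-trained weights W: they are never updated.
        for p in self.base.parameters():
            p.requires_grad = False

        d_in, d_out = base_layer.in_features, base_layer.out_features
        # Low-rank factors: W_A is d_in x r, W_B is r x d_out.
        self.W_A = nn.Parameter(torch.randn(d_in, rank) * 0.01)
        self.W_B = nn.Parameter(torch.zeros(rank, d_out))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus low-rank update: x W + (x W_A W_B) * scaling.
        return self.base(x) + (x @ self.W_A @ self.W_B) * self.scaling


# Example: wrap a 768-dimensional projection (768 is the usual AST hidden size).
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # only W_A and W_B are trainable: 2 * 768 * 8 parameters
```

With r much smaller than d, each adapted layer trains only 2·d·r parameters instead of d², which is where the parameter savings come from.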

Module AST

Getting Started

Step 1: Clone or download this repository and set it as the working directory, then create a virtual environment and install the dependencies.

```bash
cd ast-lora/
python3 -m venv venvast
source venvast/bin/activate
pip install -r requirements.txt
```

Step 2: Running an experiment

We just need to set a few parameters in train.yaml (a sketch of how they might be read is shown after the list):

- `lr_LoRA`
- `weight_decay`
- `final_output`
- `patch_size`
- `hidden_size`
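
As a rough illustration, the snippet below reads these values with PyYAML. The key names come from the list above, but the actual layout of train.yaml in this repository may nest them differently, and the inline comments are only guesses from the parameter names.

```python
import yaml

# Read the experiment configuration (flat layout assumed for illustration).
with open("train.yaml") as f:
    cfg = yaml.safe_load(f)

lr_lora = cfg["lr_LoRA"]            # learning rate used for the LoRA parameters
weight_decay = cfg["weight_decay"]  # optimizer weight decay
final_output = cfg["final_output"]  # how the final prediction is produced
patch_size = cfg["patch_size"]      # spectrogram patch size used by AST
hidden_size = cfg["hidden_size"]    # transformer embedding dimension

print(lr_lora, weight_decay, final_output, patch_size, hidden_size)
```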

In main.sh you just need to set the data paths (train, validation, and test) and a few parameters.

```bash
bash main.sh
```

Citation

Citing the original paper:

```bibtex
@misc{cappellazzo2024efficient,
      title={Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters},
      author={Umberto Cappellazzo and Daniele Falavigna and Alessio Brutti},
      year={2024},
      eprint={2402.00828},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}
```

Contact

If you have a question, or just want to share how you have used this, send me an email at [email protected]
