
[ICML 2022] This is the PyTorch implementation of "Rethinking Attention-Model Explainability through Faithfulness Violation Test" (https://arxiv.org/abs/2201.12114).

BierOne/Attention-Faithfulness


[ICML-2022] Rethinking Attention-Model Explainability through Faithfulness Violation Test

Please see the subfolders for detailed running steps.
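For context, the faithfulness violation test asks whether an explanation weight's polarity agrees with the feature's measured impact on the model's prediction. Below is a minimal, illustrative sketch of that idea only; it is not the repository's code, and the function name, toy weights, and impact values are all invented for this example:

```python
def violation_rate(weights, impacts):
    """Fraction of examples whose top-weighted feature, when masked,
    *increases* the model's confidence (impact < 0), i.e. the
    explanation's polarity disagrees with the feature's actual effect.
    NOTE: illustrative sketch only, not the paper's exact metric."""
    violations = 0
    for w_row, d_row in zip(weights, impacts):
        top = max(range(len(w_row)), key=w_row.__getitem__)  # argmax weight
        if d_row[top] < 0:
            violations += 1
    return violations / len(weights)

# Toy data: 2 examples, 3 features each (weights could be attention scores).
weights = [[0.7, 0.2, 0.1],   # feature 0 is top-weighted
           [0.1, 0.8, 0.1]]   # feature 1 is top-weighted
# impacts[i][j] = p(full input) - p(input with feature j masked);
# a negative value means masking that feature *raised* the prediction.
impacts = [[ 0.3, 0.0, -0.1],    # consistent: masking the top feature hurts
           [-0.2, -0.4, 0.1]]    # violation: masking the top feature helps
print(violation_rate(weights, impacts))  # 0.5
```

Here the second example is a violation: the most highly weighted feature actually suppresses the prediction, so the explanation's positive weight misrepresents its effect.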

Citation

If you make use of our work, please cite our paper:

@InProceedings{pmlr-v162-liu22i,
  title     = {Rethinking Attention-Model Explainability through Faithfulness Violation Test},
  author    = {Liu, Yibing and Li, Haoliang and Guo, Yangyang and Kong, Chenqi and Li, Jing and Wang, Shiqi},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {13807--13824},
  year      = {2022},
  publisher = {PMLR},
}

Credits

This work builds on the implementations of the ICCV 2021 work Generic Attention-Model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers and the ACL 2020 work Towards Transparent and Explainable Attention Models. Many thanks to the authors for generously sharing their code.
