# attention

Implements several attention mechanisms in PyTorch based on the Q, K, V formulation from the paper "Attention Is All You Need".

While studying, I could not find a template-style attention implementation that also covered some derived attention variants, so I implemented an attention template based on the Q, K, V formulation proposed in Google's paper "Attention Is All You Need", and I plan to add more attention variants as I learn them.
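The core building block from the paper can be sketched as scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V. Below is a minimal PyTorch sketch; the class name and the optional `mask` argument are my own illustrative choices, not necessarily how this repository structures its code:

```python
import math

import torch
import torch.nn as nn


class ScaledDotProductAttention(nn.Module):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""

    def forward(self, q, k, v, mask=None):
        # q, k, v: (batch, seq_len, d_k); mask (optional): broadcastable to
        # (batch, seq_len, seq_len), with 0 marking positions to ignore.
        d_k = q.size(-1)
        # Similarity scores, scaled by sqrt(d_k) to keep softmax gradients stable.
        scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        # Weighted sum of values; weights returned for inspection.
        return torch.matmul(weights, v), weights


if __name__ == "__main__":
    attn = ScaledDotProductAttention()
    q = torch.randn(2, 5, 64)
    k = torch.randn(2, 5, 64)
    v = torch.randn(2, 5, 64)
    out, w = attn(q, k, v)
    print(out.shape)  # output keeps the query shape: (2, 5, 64)
```

Derived variants (multi-head, masked/causal, cross-attention) all reuse this template, differing only in how Q, K, V are projected and what mask is supplied.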