
Use of relative attention #27

Open
florianleopold opened this issue Mar 1, 2023 · 1 comment
Labels: question (Further information is requested)

Comments

@florianleopold

Hey all!

While looking through the VPT code I noticed the use of "relative attention logits" in the Self-Attention layers, as seen here:
https://github.com/openai/Video-Pre-Training/blob/main/lib/xf.py#L342

My hypothesis is that this R-stream acts as a learnable, data-dependent bias on the attention logits, which also shows up in the attention function and in the b_nd matrix (see the sketch below).
I was also wondering about the use of nbasis = 10 as its per-head dimensionality; I read it as a form of bottleneck, but I am not sure how different values of nbasis would affect the network.
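To make my reading concrete, here is a minimal PyTorch sketch of what I think the mechanism amounts to. The names (q_to_r, rel_emb, etc.) and the details are my own guesses for illustration, not the actual xf.py implementation:

```python
# Rough sketch of my understanding, NOT the actual xf.py code.
# Assumption: the relative bias is a low-rank, query-dependent term
# added to the content logits, with nbasis as the bottleneck width.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelAttnSketch(nn.Module):
    def __init__(self, d_model, n_heads, max_rel_dist, nbasis=10):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.nbasis = nbasis
        self.qkv = nn.Linear(d_model, 3 * d_model)
        # "R-stream": project each query into a small nbasis-dim space per head
        self.q_to_r = nn.Linear(d_model, n_heads * nbasis)
        # one learned nbasis-dim vector per relative offset (the bottleneck)
        self.rel_emb = nn.Parameter(torch.randn(max_rel_dist, nbasis) * 0.02)

    def forward(self, x):
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, T, self.n_heads, self.d_head).transpose(1, 2)  # (B, H, T, d)
        k = k.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(B, T, self.n_heads, self.d_head).transpose(1, 2)

        # content logits
        logits = q @ k.transpose(-2, -1) / self.d_head ** 0.5  # (B, H, T, T)

        # data-dependent relative bias: query projected through the nbasis
        # bottleneck, dotted with a learned embedding per relative offset
        q_r = self.q_to_r(x).view(B, T, self.n_heads, self.nbasis).transpose(1, 2)  # (B, H, T, nbasis)
        rel = torch.arange(T).view(T, 1) - torch.arange(T).view(1, T)  # rel[i, j] = i - j
        rel = rel.clamp(0, self.rel_emb.shape[0] - 1)  # clip offsets; a causal mask would be applied in the real model
        bias = torch.einsum('bhqn,qkn->bhqk', q_r, self.rel_emb[rel])  # (B, H, T, T)

        attn = F.softmax(logits + bias, dim=-1)
        out = attn @ v  # (B, H, T, d)
        return out.transpose(1, 2).reshape(B, T, D)
```

If this reading is right, nbasis would control the rank of the bias matrix, which might explain why a small value like 10 is enough, but again this is only my guess.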

I would really appreciate any further insights, corrections and references to other resources regarding this.

@Miffyli
Collaborator

Miffyli commented Mar 6, 2023

Hey. Good catches! Unfortunately, I do not have insight into why they went with this exact approach. Your best bet is to email the paper's corresponding author(s), or to ask this question on other forums where someone might know why you would do this.

Miffyli added the question label on Mar 6, 2023