With the decoupling of encoders and decoders, we have added a Linear encoder, which simply embeds the inputs and passes the embeddings along. We should also add a SelfAttention encoder, which encodes the embeddings with a self-attention layer (and no positional encoding).
This contextualizes the embeddings by representing each one as a weighted combination of itself and all other embeddings.
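A minimal sketch of what such an encoder could look like, assuming a PyTorch-based implementation. The class name, constructor arguments, and the use of `nn.MultiheadAttention` are illustrative assumptions here, not the project's actual API:

```python
import torch
import torch.nn as nn


class SelfAttentionEncoder(nn.Module):
    """Hypothetical SelfAttention encoder: embeds token ids and
    contextualizes them with a single self-attention layer. No positional
    encoding is applied, so the encoder is permutation-equivariant over
    the input sequence."""

    def __init__(self, vocab_size: int, embedding_size: int, num_heads: int = 1):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_size)
        self.attention = nn.MultiheadAttention(
            embed_dim=embedding_size, num_heads=num_heads, batch_first=True
        )

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: [batch_size, sequence_length] of token ids
        embedded = self.embedding(inputs)
        # Self-attention over the embeddings: each output vector is a
        # weighted combination of the value projections of all embeddings,
        # with weights given by the attention scores.
        contextualized, _ = self.attention(embedded, embedded, embedded)
        return contextualized


# Example usage
encoder = SelfAttentionEncoder(vocab_size=100, embedding_size=32)
tokens = torch.randint(0, 100, (4, 10))  # batch of 4 sequences of length 10
print(encoder(tokens).shape)  # torch.Size([4, 10, 32])
```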