
Additional positional embeddings #296


Description

@bonham79

@Adamits I'm happy to implement this, but I want to check with you whether it's worthwhile, since my domain is speech:

What are your thoughts on adding new positional embeddings to the Transformer models (particularly RoPE)? IIRC we're using the standard sinusoidal (cosine) ones, but they're a bit old-fashioned nowadays. Do you know of any arguments for or against?
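For context, here is a minimal sketch of what RoPE does, assuming PyTorch and the interleaved even/odd pairing of the head dimension from Su et al. 2021; the function name and tensor shapes are illustrative, not from this repo:

```python
import torch


def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Applies rotary positional embeddings (RoPE) to a query or key tensor.

    Args:
        x: tensor of shape (batch, seq_len, num_heads, head_dim);
           head_dim must be even.

    Returns:
        Tensor of the same shape, with each even/odd pair of features
        rotated by a position-dependent angle.
    """
    _, seq_len, _, head_dim = x.shape
    # Per-pair rotation frequencies, as in the sinusoidal embedding.
    inv_freq = 1.0 / (
        base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim)
    )
    # Angle for each (position, frequency) pair: (seq_len, head_dim // 2).
    angles = torch.outer(torch.arange(seq_len, dtype=torch.float32), inv_freq)
    cos = angles.cos()[None, :, None, :]  # broadcast over batch and heads
    sin = angles.sin()[None, :, None, :]
    # Split features into even/odd halves and rotate each 2D pair.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)  # re-interleave back to (..., head_dim)
```

The key design difference from the current scheme: instead of adding sinusoidal embeddings to the input once, this rotation would be applied to queries and keys inside each attention layer, so attention scores depend on relative rather than absolute position.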


Labels: enhancement (New feature or request)
