Implementation of RoBERTa on top of Transformers and Flux #15

Open
tejasvaidhyadev opened this issue Aug 26, 2020 · 0 comments

@tejasvaidhyadev (Member)

  • This implementation is the same as Transformers.Bert with a tiny tweak to the embeddings.
  • RoBERTa has the same architecture as BERT, but uses a byte-level BPE tokenizer (implemented in BPE.jl, the same tokenizer as GPT-2) and a different pre-training scheme.
  • RoBERTa doesn't have token_type_ids, so you don't need to indicate which token belongs to which segment; just separate your segments with the separation token (</s>). A minimal sketch of the resulting model is given after this list.

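A minimal sketch in Flux of what the embedding tweak could look like, under the assumptions above: the BERT encoder stack from Transformers.jl is reused unchanged, the vocabulary comes from the byte-level BPE in BPE.jl, and the segment (token_type) embedding is simply dropped. The names `RoBERTaEmbedding` and `RoBERTa` below are illustrative placeholders, not a proposed package API:

```julia
# Illustrative sketch only: reuse the BERT encoder from Transformers.jl,
# tokenize with the byte-level BPE from BPE.jl, and drop the segment
# (token_type) embedding. All names here are placeholders, not a final API.
using Flux

struct RoBERTaEmbedding{T, P}
    tok::T   # token embedding over the byte-level BPE vocabulary
    pos::P   # learned position embedding
    # note: no segment/token_type embedding, unlike Transformers.Bert
end

Flux.@functor RoBERTaEmbedding

function (e::RoBERTaEmbedding)(ids)
    x = e.tok(ids)        # token embeddings: hidden × seq_len (× batch)
    return x .+ e.pos(x)  # add position embeddings of matching length
end

struct RoBERTa{E, B}
    embed::E    # RoBERTaEmbedding above
    encoder::B  # the unchanged BERT transformer stack from Transformers.jl
end

Flux.@functor RoBERTa

(m::RoBERTa)(ids) = m.encoder(m.embed(ids))

# Since there are no token_type_ids, sentence pairs are formatted with the
# separator token only, e.g. "<s> first segment </s></s> second segment </s>".
```

Keeping the encoder component untouched should mean that pretrained RoBERTa checkpoints only need their embedding and encoder weights mapped into the two fields above.
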
We could also implement CamemBERT (the French version of BERT) as a wrapper around RoBERTa.

@aviks transferred this issue from JuliaText/TextAnalysis.jl on Nov 2, 2020