- For correction and update requests, contact: 김도은([email protected]), 정환석([email protected]), 진명훈([email protected])
- Intermediate class schedule: Google Sheet
- 01: Google's Neural Machine Translation
- Video, Presentation
- Keywords: NMT, Seq2seq, Wordpiece Model, Parallelism
- Team: 진명훈 (presenter), 송지현, 지우석
- 02: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- Paper, Video, Slides, Demo Code (tokenizer & MLM)
- Keywords: BERT, Transformer, MLM, NSP, Pre-training, Fine-tuning
- Team: 김유빈, 이승미, 이정섭 - presented together :)
- 03: Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
- 04: A Broad but Shallow Overview of Task-oriented Dialogue Systems
- 05: Cross-lingual Language Model Pretraining
- 06: Semi-Supervised Sequence Modeling With Cross-View Training
- 07: What You Can Cram Into a Single $&!#* Vector: Probing Sentence Embeddings For Linguistic Properties
- 08: Improving Language Understanding by Generative Pre-Training & Language Models are Unsupervised Multitask Learners
- 09: A Persona-Based Neural Conversation Model
- 10: ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
- 11: Neural document summarization by jointly learning to score and select sentences
- 12: Adversarial Examples for Evaluating Reading Comprehension Systems
- Paper, Video, [Slides](https://github.com/jiphyeonjeon/season2/blob/main/intermediate/presentations/집현전_중급_12조_Adversarial%20Examples.pdf)
- Keywords: Adversarial Sentences, Adversarial Examples, Adversarial Evaluation, SQuAD, BiDAF, Match-LSTM
- Team: 박병학 (presenter), 김주찬, 이창호
- 14: Unsupervised Machine Translation Using Monolingual Corpora Only
- 15: RoBERTa: A Robustly Optimized BERT Pretraining Approach
- 16: A Survey on Deep Learning for Named Entity Recognition
- 18: Graph Attention Networks (GAT)
- 19: Universal Sentence Encoder
- 20: Text Summarization with Pretrained Encoders