How to define and fine-tune our own task? #19552
-
Hello everybody, I am new to using GluonNLP for sentence-pair classification and I would like to ask for your help. I initially found this very simple tutorial for sentence pair classification: https://nlp.gluon.ai/examples/sentence_embedding/bert.html. However, for simplicity, this tutorial omits important steps such as the warmup learning-rate schedule and validation on the dev dataset. Would it be possible to explain how to use our own dataset to fine-tune the BERT model with GluonNLP? I was going over the finetune_classifier.py file, and it includes the MRPC, QQP, QNLI, RTE, STS-B, CoLA, MNLI, WNLI, and SST task names for fine-tuning. How can I create and fine-tune my own task using the dataset that I have created? Thank you in advance!
-
You can refer to the scripts at https://github.com/dmlc/gluon-nlp/tree/v0.10.x/scripts for MXNet 1.x and https://github.com/dmlc/gluon-nlp/tree/master/scripts for the MXNet 2 development version (master branch).
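For reference, the task definitions that finetune_classifier.py selects by name (MRPCTask, QQPTask, and so on) live alongside the script in the v0.10.x branch, and a custom task generally needs to expose the same pieces: the class labels, an evaluation metric, whether the inputs are sentence pairs, and a `get_dataset` method. Below is a minimal sketch for the MXNet 1.x / GluonNLP 0.10.x scripts; the class name `MyPairTask`, the file paths, and the exact attribute names are placeholders to align with the task classes in your checkout, not a drop-in implementation.

```python
import mxnet as mx
from gluonnlp.data import TSVDataset

class MyPairTask:
    """Sentence-pair classification on a custom TSV dataset.

    Assumed file layout (placeholder): one header row, then
    sentence_a <TAB> sentence_b <TAB> label per line.
    """
    is_pair = True                   # two input sentences per example
    class_labels = ['0', '1']        # label set of the custom dataset
    metrics = mx.metric.Accuracy()   # metric reported on the dev set

    def get_dataset(self, segment='train'):
        # e.g. /path/to/data/train.tsv or /path/to/data/dev.tsv (placeholders)
        return TSVDataset('/path/to/data/%s.tsv' % segment,
                          field_indices=[0, 1, 2],   # sentence_a, sentence_b, label
                          num_discard_samples=1)     # skip the header row
```

After defining the class, register it under a new name in the task registry that finetune_classifier.py consults (in the versions I have looked at this is a dict mapping task names to task objects) and select it through the script's task-name argument; the script then applies the warmup learning-rate schedule and dev-set validation that the tutorial leaves out.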