Nice work!
I often start out with much more unlabelled than labelled data. Is it possible to do masked-language-model fine-tuning (without the classification head) on the full set of data first, before adding the classifier?
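To make that concrete, this is roughly what I have in mind; a minimal sketch using the plain Hugging Face `Trainer`, where the checkpoint name, corpus, and hyperparameters are just placeholders:

```python
# Sketch: adapt a base model on unlabelled text with the MLM objective,
# then load the adapted weights for classification fine-tuning later.
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "bert-base-uncased"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

unlabelled_texts = ["..."]  # the full unlabelled corpus goes here
dataset = Dataset.from_dict({"text": unlabelled_texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# The collator randomly masks tokens each batch, i.e. the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-adapted", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()

# Save the adapted weights, then initialise a classification model from
# "mlm-adapted" and fine-tune on the labelled subset as usual.
model.save_pretrained("mlm-adapted")
tokenizer.save_pretrained("mlm-adapted")
```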
If not, would a second-best approach be to do it iteratively, i.e. train on the small amount of labelled data, predict labels for the unlabelled data, fine-tune on the combined labels and predictions, and then re-train on just the labelled data?
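This is roughly the loop I mean; a toy sketch where TF-IDF features and logistic regression stand in for the real model, and the 0.9 confidence threshold is an arbitrary illustration:

```python
# Toy self-training loop: train on labelled data, pseudo-label the
# unlabelled data, fine-tune on both, then re-train on labelled data only.
import numpy as np
import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def self_train(labelled_texts, labels, unlabelled_texts, confidence=0.9):
    vec = TfidfVectorizer()
    X_lab = vec.fit_transform(labelled_texts)
    X_unl = vec.transform(unlabelled_texts)

    # 1. Train on the small labelled set.
    clf = LogisticRegression(max_iter=1000, warm_start=True).fit(X_lab, labels)

    # 2. Predict pseudo-labels for the unlabelled set, keeping confident ones.
    probs = clf.predict_proba(X_unl)
    keep = probs.max(axis=1) >= confidence
    pseudo = clf.classes_[probs.argmax(axis=1)][keep]

    # 3. Fine-tune on the real labels plus the confident pseudo-labels.
    clf.fit(sp.vstack([X_lab, X_unl[keep]]), np.concatenate([labels, pseudo]))

    # 4. Re-train on the trusted labelled data only; warm_start=True means
    #    this continues from the pseudo-label-trained weights rather than
    #    starting from scratch (analogous to continued fine-tuning).
    clf.fit(X_lab, labels)
    return clf
```

With a transformer classifier, step 4 would just be another fine-tuning pass starting from the weights produced in step 3.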