Should we use data augmentations / mixup when fine-tuning on pseudo-labels? I think augmentations should be significantly less aggressive during pseudo-labeling. But what should that mean in practice: a different set of augs, weaker augs, no augs, or something else entirely?
For generating pseudo-labels, I think we should run the model without augs so we get an accurate confidence value. For training on them, I think we should always apply data augs to anything we train on.
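A minimal sketch of that split: score examples with a forward pass on un-augmented inputs, keep only the confident ones, and then let those flow through the normal augmented training pipeline. The function name and the 0.9 threshold here are illustrative assumptions, not anything fixed in this thread.

```python
import numpy as np

def generate_pseudo_labels(probs, threshold=0.9):
    """Select confidently pseudo-labeled examples.

    probs: (N, C) array of class probabilities from a forward pass on
    clean (un-augmented) inputs, so confidence isn't distorted by augs.
    Returns (indices, labels) for examples whose max probability
    clears the threshold; everything else is discarded this round.
    """
    confidence = probs.max(axis=1)   # per-example max softmax probability
    labels = probs.argmax(axis=1)    # hard pseudo-label = argmax class
    keep = confidence >= threshold   # only trust confident predictions
    return np.flatnonzero(keep), labels[keep]

# Example: three unlabeled examples, two classes.
probs = np.array([[0.95, 0.05],   # confident -> kept, label 0
                  [0.60, 0.40],   # uncertain -> dropped
                  [0.10, 0.90]])  # confident -> kept, label 1
idx, labels = generate_pseudo_labels(probs)
```

The kept `(idx, labels)` pairs would then be mixed into the training set, where the usual augmentations (and mixup, if used) apply as for any other labeled data.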