Hi all
It seems that for sequence tagging tasks like WikiANN, the metrics are computed on truncated sequences (up to the max sequence length). A consequence is that, for the exact same model, the metrics change as `max_seq_len` changes, in a way that may not be indicative of model quality (e.g., setting `max_seq_len` to 256 can produce different results for the identical model).
One potential fix would be for test evaluation to always use the maximum sequence length supported by the model (e.g., 512 for mBERT / XLM-RoBERTa); for documents longer than that, predictions for all remaining tokens could be treated as "O" (or a windowed prediction mechanism could be used, though that might be too involved).