We need to run benchmarking on the BEIR MSMARCO dataset to better understand how the models perform on retrieval tasks.
We can use the `test` split available on the Hugging Face hub (see the loading sketch below):
- QRels
- Corpus
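
For reference, a minimal sketch of pulling these from the Hub with the `datasets` library. The dataset/config names (`BeIR/msmarco`, `BeIR/msmarco-qrels`) and column names are my assumption based on the public BeIR organization on the Hub and should be double-checked:

```python
# Hedged sketch: load the BEIR MSMARCO corpus, queries, and test qrels.
# Dataset names ("BeIR/msmarco", "BeIR/msmarco-qrels") and columns
# ("_id", "text", "query-id", "corpus-id", "score") are assumptions
# based on the public BeIR datasets, not confirmed by this issue.
from datasets import load_dataset

corpus = load_dataset("BeIR/msmarco", "corpus", split="corpus")
queries = load_dataset("BeIR/msmarco", "queries", split="queries")
qrels = load_dataset("BeIR/msmarco-qrels", split="test")

# Each qrels row maps a query id to a judged document and its relevance score.
print(qrels[0])  # e.g. {"query-id": ..., "corpus-id": ..., "score": ...}
```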
Proposed metrics should treat non-judged documents as non-relevant.
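
For illustration, a minimal pure-Python sketch of nDCG@k under that convention: any document missing from the qrels gets relevance 0. The function and the toy data are hypothetical examples, not from this issue:

```python
# Hypothetical sketch of nDCG@10 where any document absent from the
# qrels is scored as non-relevant (relevance 0).
import math

def ndcg_at_k(ranked_doc_ids, qrels_for_query, k=10):
    """nDCG@k for one query; docs absent from qrels count as relevance 0."""
    gains = [qrels_for_query.get(doc_id, 0) for doc_id in ranked_doc_ids[:k]]
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains))
    ideal = sorted(qrels_for_query.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Example: "d2" is judged relevant (2), "d9" is non-judged -> treated as 0.
qrels = {"q1": {"d2": 2, "d5": 1}}
run = {"q1": ["d2", "d9", "d5"]}  # ranked retrieval results for q1
print(f"nDCG@10 for q1: {ndcg_at_k(run['q1'], qrels['q1']):.4f}")
```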
The `InformationRetrievalEvaluator` from SentenceTransformers can be helpful for this.
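
A hedged sketch of how it could be wired up. The dict shapes (qid → text, doc id → text, qid → set of relevant doc ids) follow the evaluator's documented inputs; the model name and toy data are just examples, not from this issue:

```python
# Sketch: evaluate a retrieval model with SentenceTransformers'
# InformationRetrievalEvaluator. Model name and toy data are examples.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("msmarco-distilbert-base-v4")  # example model

queries = {"q1": "what is the capital of france"}           # qid -> query text
corpus = {"d1": "Paris is the capital of France.",          # doc id -> doc text
          "d2": "Berlin is the capital of Germany."}
relevant_docs = {"q1": {"d1"}}                              # qid -> relevant ids

evaluator = InformationRetrievalEvaluator(
    queries, corpus, relevant_docs,
    ndcg_at_k=[10], mrr_at_k=[10],
    name="beir-msmarco-test",
)
results = evaluator(model)  # embeds queries/corpus, retrieves, and scores
```

For the full BEIR test split, the three dicts would be built from the `corpus`, `queries`, and `qrels` datasets loaded above.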