Problems with mteb_meta for german evaluation #847
Due to recent updates #826 and #806: for the WARNING message, if you're using a Python script then your code should look like this:

```python
import mteb
from sentence_transformers import SentenceTransformer

# Define the sentence-transformers model name
model_name = "average_word_embeddings_komninos"
# or directly from huggingface:
# model_name = "sentence-transformers/all-MiniLM-L6-v2"

model = SentenceTransformer(model_name)
tasks = mteb.get_tasks(tasks=["Banking77Classification"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder=f"results/{model_name}")
```
As for the other two messages: it's likely a bug on our side; we'll check. Thank you for reporting!
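On the results themselves: the run in the snippet above writes one JSON file per task under `results/{model_name}`. As a minimal sketch of collecting the main scores from such files (the file layout and key names here are assumptions about the new results format, not the exact mteb schema, so adapt the lookups to your version):

```python
import json
from pathlib import Path

def collect_main_scores(results_dir: str) -> dict:
    """Collect a {task_name: main_score} mapping from per-task JSON files.

    Assumes each file looks roughly like
    {"task_name": ..., "scores": {"test": [{"main_score": ...}]}};
    the real mteb schema may differ, so adjust the keys as needed.
    """
    scores = {}
    for path in sorted(Path(results_dir).glob("*.json")):
        data = json.loads(path.read_text())
        # take the first score entry of the first split, if present
        for entries in data.get("scores", {}).values():
            if entries:
                scores[data.get("task_name", path.stem)] = entries[0].get("main_score")
                break
    return scores
```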
@imenelydiaker I believe it is due to the new results format introduced in #759. mteb_meta.py will need to be rewritten for the new format. We should probably make it a CLI with a test (otherwise it is impossible to know if it breaks).
@achibb, we have now updated the CLI as well as the benchmark lists. I believe the new CLI should suit your purpose.
Thank you very much! I will test it in the next few days and give feedback.
I was also wondering: can I compute something for the German benchmark for other models, like mdeberta, and somehow add it to the leaderboard?
Yes, you can evaluate any model and submit the results to this repo via a PR so they can be added to the leaderboard (check the guide on opening a PR on HF here).
Hi everyone, I am having trouble generating the mteb_meta for German by just running the script.
I am currently trying to format the results, but it does not seem to work straight away with `mteb_meta.py`; any idea? I just get a blank metadata file:
```yaml
tags:
- mteb
model-index:
- name: gbert-large
  results:
```
It gives this for every dataset. Do I need to modify something in the code?
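For comparison, a filled-in model card header would list one entry per evaluated dataset under `results:`. Here is a minimal sketch that renders such a `model-index` YAML block from a task-to-score mapping; the exact field names and task types are illustrative assumptions, not the exact output of mteb's metadata script:

```python
def render_model_index(model_name: str, scores: dict) -> str:
    """Render a minimal HF model-card YAML header with an mteb model-index.

    `scores` maps task name -> main score. The fields below mirror the
    usual model-index layout but are an illustrative sketch only.
    """
    lines = ["tags:", "- mteb", "model-index:", f"- name: {model_name}", "  results:"]
    for task, score in sorted(scores.items()):
        lines += [
            "  - task:",
            "      type: Classification",  # assumed task type
            "    dataset:",
            f"      name: MTEB {task}",
            f"      type: {task}",
            "    metrics:",
            "    - type: main_score",
            f"      value: {score}",
        ]
    return "\n".join(lines)
```

A non-empty mapping produces one dataset/metrics entry per task, which is what a working metadata script should emit instead of the blank block above.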