Running generate_until requests: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 404/404 [04:54<00:00, 1.37it/s]
[rank1]: File "<frozen runpy>", line 198, in _run_module_as_main
[rank1]: File "<frozen runpy>", line 88, in _run_code
[rank1]: File "/home/hidelord/lm-evaluation-harness/lm_eval/__main__.py", line 461, in <module>
[rank1]: cli_evaluate()
[rank1]: File "/home/hidelord/lm-evaluation-harness/lm_eval/__main__.py", line 382, in cli_evaluate
[rank1]: results = evaluator.simple_evaluate(
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/hidelord/lm-evaluation-harness/lm_eval/utils.py", line 397, in _wrapper
[rank1]: return fn(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/hidelord/lm-evaluation-harness/lm_eval/evaluator.py", line 303, in simple_evaluate
[rank1]: results = evaluate(
[rank1]: ^^^^^^^^^
[rank1]: File "/home/hidelord/lm-evaluation-harness/lm_eval/utils.py", line 397, in _wrapper
[rank1]: return fn(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/hidelord/lm-evaluation-harness/lm_eval/evaluator.py", line 522, in evaluate
[rank1]: task.apply_filters()
[rank1]: File "/home/hidelord/lm-evaluation-harness/lm_eval/api/task.py", line 1135, in apply_filters
[rank1]: f.apply(self._instances)
[rank1]: File "/home/hidelord/lm-evaluation-harness/lm_eval/api/filter.py", line 51, in apply
[rank1]: resps = f().apply(resps, docs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/hidelord/lm-evaluation-harness/lm_eval/filters/extraction.py", line 48, in apply
[rank1]: filtered_resps = list(map(lambda x: filter_set(x), resps))
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/hidelord/lm-evaluation-harness/lm_eval/filters/extraction.py", line 48, in <lambda>
[rank1]: filtered_resps = list(map(lambda x: filter_set(x), resps))
[rank1]: ^^^^^^^^^^^^^
[rank1]: File "/home/hidelord/lm-evaluation-harness/lm_eval/filters/extraction.py", line 40, in filter_set
[rank1]: match = [m for m in match if m][0]
[rank1]: ~~~~~~~~~~~~~~~~~~~~~~~^^^
[rank1]: IndexError: list index out of range
I tried running some CoT zero-shot evaluations, but they both failed with the same IndexError. Am I doing something wrong?
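For context on where this blows up: the traceback ends in `lm_eval/filters/extraction.py`, where `match = [m for m in match if m][0]` indexes the first regex match without checking that any match exists, so a response the answer-extraction regex cannot parse raises IndexError. A minimal sketch of a guarded version is below; the pattern and the `"[invalid]"` fallback string are illustrative assumptions, not necessarily the harness defaults.

```python
import re

def filter_response(resp, pattern=r"(-?\d+)", fallback="[invalid]"):
    """Extract the first regex match from a model response.

    Unlike indexing [0] directly (which raises IndexError on an empty
    list, as in the traceback above), this returns a fallback marker
    when the pattern finds nothing. Pattern and fallback are
    illustrative, not the harness's actual defaults.
    """
    matches = [m for m in re.findall(pattern, resp) if m]
    return matches[0] if matches else fallback

print(filter_response("The answer is 42."))  # extracts "42"
print(filter_response("No digits here."))    # falls back to "[invalid]"
```

A fallback like this lets the evaluation finish (the unparseable response is simply scored as wrong) instead of crashing the whole run on one malformed CoT output.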
Command for mmlu_flan_cot_zeroshot
Error
Command for bbh_cot_zeroshot
Error