The output of mixtral-8x7b/evaluate-accuracy.py:
xx.xx% pass@1 {'typescript': x, 'ruby': x, 'python': x, 'javascript': x, 'php': x, 'cpp': x} {'typescript': x, 'ruby': x, 'python': x, 'javascript': x, 'php': x, 'cpp': x}
Results { 'rouge1': np.float64(x), 'rouge2': np.float64(x), 'rougeL': np.float64(x), 'rougeLsum': np.float64(x), 'gsm8k': x, 'mbxp': x, 'gen_len': np.int64(x), 'gen_num': x, 'gen_tok_len': x, 'tokens_per_sample': x }
contains np.float64(...) text wrapped around the floating-point value for some fields, and np.int64 text wrapped around the long value for others.
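The wrapper text comes from numpy scalar repr: printing a dict uses repr() on its values, and on NumPy >= 2.0 the repr of a numpy scalar includes the type wrapper even though str() does not. A minimal sketch (the field name and value here are illustrative):

```python
import numpy as np

# Illustrative value standing in for one of the script's metrics.
rouge1 = np.float64(45.4911)

# str() of the bare scalar is a plain number...
print(str(rouge1))         # 45.4911

# ...but printing a dict goes through repr(), which on NumPy 2.x
# embeds the type wrapper in the output text.
print({"rouge1": rouge1})  # {'rouge1': np.float64(45.4911)} on NumPy 2.x
```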
This fails the regexes we have defined:
inference/tools/submission/submission_checker.py
Lines 533 to 537 in 9e2c9f6
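I haven't copied the exact patterns inline, but the failure mode can be reproduced with a hypothetical regex in the same spirit, one that expects a bare numeric literal after the field name:

```python
import re

# Hypothetical pattern in the spirit of the submission checker's
# accuracy regexes (not the actual pattern from the file).
pattern = re.compile(r"'rouge1':\s*([\d.]+)")

clean = "Results {'rouge1': 45.4911}"
wrapped = "Results {'rouge1': np.float64(45.4911)}"

print(pattern.search(clean).group(1))  # 45.4911
print(pattern.search(wrapped))         # None -- "np.float64(" breaks the match
```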
I'm not entirely sure why this behaves differently from our llama script:
inference/language/llama2-70b/evaluate-accuracy.py
Line 95 in 29edbb0
Verified Fix:
float(round(...))
inference/language/mixtral-8x7b/evaluate-accuracy.py
Line 180 in 29edbb0
int(np.sum(...))
inference/language/mixtral-8x7b/evaluate-accuracy.py
Line 215 in 29edbb0
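A minimal sketch of what the fix does (the input arrays are illustrative): cast numpy scalars to plain Python types before they reach the printed results dict, matching the float(round(...)) and int(np.sum(...)) changes above.

```python
import numpy as np

# Illustrative inputs standing in for the script's per-sample data.
scores = np.array([45.5, 23.3])
token_counts = np.array([512, 512])

# Plain Python float/int repr without the np.* wrapper on any NumPy version.
rouge1 = float(round(np.mean(scores), 4))
gen_len = int(np.sum(token_counts))

print({"rouge1": rouge1, "gen_len": gen_len})
# {'rouge1': 34.4, 'gen_len': 1024}
```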
fix: remove np type around output values
d1bfc0b
fixes mlcommons#1763
fix: remove np type around output values (#1764)
4a77003
fixes #1763