phi3.5 genai-converted model outputs garbage results with input lengths around 3000 and 8000 #954
Comments
Thanks for reporting this. Does it matter which prompt you use, or does any long prompt produce this output?
It seems related only to the length of the prompt. I have several prompts around length 3000 that show this issue, e.g. 3824, 3613, and 3918. I also have some samples of length 4000 and 5000 that are correct.
Thank you. Can you share the prompts that produce garbage, the ~3000-length and the ~8000-length ones, so that we can repro?
Sorry, I can't provide the prompts because they are customer data.
No problem. I did reproduce garbage output for a prompt length of 3402. We are investigating.
We are investigating a fix for this issue.
Any updates? Thanks.
Sorry, I haven't made much progress on this until now; I will prioritize it this week. Thanks.
Hi @ajindal1 ,
Hi @yufang67, we are adding the fix from huggingface/transformers#33129 into GenAI; it should resolve the issue.
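For context, the Phi-3 family uses LongRoPE scaling: rotary frequencies are rescaled by per-dimension `short_factor`/`long_factor` arrays, and which array applies depends on whether the sequence length exceeds the original context window (`original_max_position_embeddings`, 4096 for Phi-3.5-mini). Below is a minimal sketch of that selection logic only; it illustrates the mechanism, not the actual patch in huggingface/transformers#33129, and the function name and defaults are hypothetical.

```python
def longrope_inv_freq(seq_len, head_dim, short_factor, long_factor,
                      base=10000.0, original_max_pos=4096):
    """Illustrative LongRoPE frequency selection (hypothetical helper)."""
    # Use the long-context rescale factors once the sequence grows past
    # the original context window, otherwise the short-context ones.
    factors = long_factor if seq_len > original_max_pos else short_factor
    # Standard RoPE inverse frequencies, divided by the per-dim factor.
    return [
        1.0 / (factors[i] * base ** (2.0 * i / head_dim))
        for i in range(head_dim // 2)
    ]
```

A bug in when this switch is evaluated would fit the symptoms in this thread: the failing prompts (3402-3918 tokens) all sit just below the 4096 boundary, so generation starts under `short_factor` and crosses into `long_factor` territory mid-decode.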
Great, thanks. How can I use this latest fix? Or will there be a new release soon?
Hi @ajindal1 ,
Sorry, the fix is not yet available. We are working on it, and it will be part of the next release (0.6.0); alternatively, you can build from source on the main branch once the fix is merged. I will update once it is complete.
Describe the bug
Currently, I use onnxgenai==0.4.0 with a converted phi_3_5_mini_instruct model (fp16, CUDA) and run inference with onnxgenai on an A100 80G.
I observed that for some input lengths around 3000 (and around 8000), the generated result runs up to the fixed max_length and is full of "\n".
For example, with max_length fixed at 12K: for an input of length 3424, the output has length 8576 and is filled with the following:
n0.\n.\n0.\n.\n.\n0.\n\n\n\n\n\n2.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.\n.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.\n2.\n\n\n\n\n2.\n2.\n\n\n.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.\n\n\n\n\n\n\n.\n\n\n\n\n\n\n\n\n\n\n.\n\n\n\n\n\n.\n0.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n2.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n0.
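A minimal repro sketch of the setup described above, assuming the onnxruntime-genai 0.4.x Python API (the model path and prompt are placeholders):

```python
import onnxruntime_genai as og

prompt = "..."  # placeholder: a ~3000-4000 token prompt triggers the bug

model = og.Model("phi-3.5-mini-instruct-cuda-fp16")  # path to the converted model
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=12288)  # the fixed 12K max_length from above
params.input_ids = tokenizer.encode(prompt)

# Token-by-token generation loop, per the 0.4.x examples
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```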
I compared with the transformers API and did not get this kind of result with the same model.
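A sketch of the equivalent check via the transformers API, assuming the stock microsoft/Phi-3.5-mini-instruct checkpoint and greedy decoding:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3.5-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

prompt = "..."  # placeholder: same ~3400-token prompt used with onnxgenai
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_length=12288, do_sample=False)

# Decode only the newly generated tokens for comparison
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```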
Any clue about this issue? (I have seen that a similar issue exists for vLLM/transformers: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/discussions/85, vllm-project/vllm#8254.)
Thanks