Replies: 1 comment
-
Add @tianleiwu. @liehtman, are you using an official build or one built from master? Could you provide more information?
-
Hey guys. I'm trying to convert a pretrained GPT-2 model to ONNX format. The conversion goes fine via the script onnxruntime_tools/transformers/convert_to_onnx.py. If I use -p fp32 during conversion, inference works. But if I change it to fp16, the model starts to output nonsense, as if I were using the wrong tokenizer. My GPU is a Tesla V100 16GB. What might be the problem?
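The symptom described above (sensible fp32 output, garbage fp16 output) is consistent with numeric overflow: float16 tops out around 65504, so large intermediate activations that fit comfortably in float32 can overflow to inf in float16 and then poison downstream layers as inf/nan. This is a minimal numpy sketch of the failure mode, not the actual GPT-2 graph; the values are illustrative assumptions:

```python
import numpy as np

# float16 has a far smaller range than float32: its max is ~65504.
# An intermediate value like 1e5 is fine in fp32 but overflows in fp16.
x32 = np.array([1e5, 2.0, 3.0], dtype=np.float32)
x16 = x32.astype(np.float16)

print(np.isinf(x16).any())  # the 1e5 entry has become inf in fp16

def softmax(v):
    # standard max-subtraction trick for numerical stability
    v = v - v.max()
    e = np.exp(v)
    return e / e.sum()

# fp32: well-defined probabilities.
print(softmax(x32))
# fp16: max() is inf, so inf - inf = nan, and the nan propagates
# through exp() and the sum, turning the whole output into nan --
# exactly the kind of corruption that reads as "nonsense" tokens.
print(softmax(x16))
```

If this is the cause, keeping overflow-prone ops (e.g. the final softmax/logits) in fp32 while converting the rest to fp16, or using a mixed-precision conversion mode, is the usual workaround.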