Hi! I have been working with ORT's quantization functionality, and it has been super useful. I would like to try it out on models larger than 2 GB, but I have seen the warning …
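In case it helps anyone landing here, below is a minimal sketch (not from this thread) of how quantization of a model larger than 2 GB is commonly invoked with ONNX Runtime's Python quantization API. The file names are placeholders, and the assumption is that passing `use_external_data_format=True` stores the weights outside the protobuf so the 2 GB serialization limit is avoided.

```python
# Minimal sketch: dynamic quantization of a large ONNX model with external data.
# "model.onnx" and "model.quant.onnx" are hypothetical paths, not files from this thread.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="model.onnx",          # hypothetical path to the float32 model
    model_output="model.quant.onnx",   # hypothetical path for the quantized model
    weight_type=QuantType.QUInt8,      # quantize weights to uint8
    use_external_data_format=True,     # keep tensors in external files for models over 2 GB
)
```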