
Model remains float32 type after quantization #26

Open
Blinkblade opened this issue May 17, 2023 · 1 comment
Comments

@Blinkblade

Hello, thank you for providing the open-source code. I ran into a problem while trying to reproduce your results.
I set up the environment according to your instructions and ran the code. However, after quantization my model's parameters are still float32 (with weight_bit = 8 and activation_bit = 8). I am not sure where the problem is and would appreciate your help.
Thank you again for providing the code, and I look forward to your reply.

@lzd19981105

This is probably because the method is simulated (fake) quantization: weights and activations are quantized and immediately dequantized, so the values are snapped to a low-bit grid, but the actual computation is still carried out in float32. The tensor dtype therefore never changes.
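A minimal NumPy sketch of this idea (not the repository's actual code) shows why the dtype stays float32 even though the weights are "8-bit quantized": the values are rounded to at most 2^8 grid points and then mapped straight back to floats.

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Simulated (fake) quantization: round to a num_bits integer grid,
    then immediately dequantize back to float32. Only the values change;
    the dtype remains float32 throughout."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)          # asymmetric range
    zero_point = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)  # integer grid
    return ((q - zero_point) * scale).astype(np.float32)       # back to float

w = np.random.randn(4, 4).astype(np.float32)
w_q = fake_quantize(w, num_bits=8)
print(w_q.dtype)  # float32 -- the values are quantized, the dtype is not
```

True low-precision storage or integer arithmetic would require an explicit cast (e.g. to int8) and integer kernels; simulated quantization is typically used to evaluate accuracy under quantization noise while keeping ordinary float32 execution.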
