In QAT (Quantization-Aware Training), why does the model fail to converge if a pre-trained model is not loaded? #2970
huangqiu15444 asked this question in Q&A
Hello @huangqiu15444, the best practice is to initialize the model with pre-trained weights, because:

- QAT is designed as a fine-tuning stage, not a from-scratch training method. The fake-quantization ops inject quantization noise into every forward pass, and that noise makes optimization from a random initialization far less stable than ordinary float training.
- Gradients flow through the fake-quantization ops via the straight-through estimator, which only approximates the true gradient. That approximation works well for small corrective updates around an already-good solution, but it is too coarse to drive training from scratch.
- The observers that learn the quantization parameters (scales and zero-points) calibrate on the activation statistics produced by the current weights. With random weights those statistics are meaningless and keep shifting, so the quantization ranges never settle and training tends to diverge.

Starting from a converged float model, QAT only has to nudge the weights to compensate for quantization error, which is a much easier optimization problem.
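As a concrete illustration of that workflow, here is a minimal eager-mode PyTorch sketch using `torch.ao.quantization`. The `SmallNet` model and the `"pretrained.pt"` checkpoint path are placeholders, not taken from this discussion; the point is the ordering: load the float pre-trained weights first, then insert fake-quantization and fine-tune.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy model used only to illustrate the QAT workflow (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()      # entry into the quantized region
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(16 * 32 * 32, 10)
        self.dequant = torch.ao.quantization.DeQuantStub()  # back to float

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return self.dequant(x)

model = SmallNet()

# 1) Load converged float weights BEFORE inserting fake-quantization.
#    "pretrained.pt" is a placeholder path.
model.load_state_dict(torch.load("pretrained.pt"))
model.train()

# 2) Attach a QAT config and insert fake-quant + observer modules.
model.qconfig = torch.ao.quantization.get_default_qat_qconfig("fbgemm")
torch.ao.quantization.prepare_qat(model, inplace=True)

# 3) Fine-tune briefly at a low learning rate (one dummy step shown).
opt = torch.optim.SGD(model.parameters(), lr=1e-4)
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()

# 4) Convert the fine-tuned model to a real int8 model for deployment.
model.eval()
model_int8 = torch.ao.quantization.convert(model)
```

In practice the fine-tuning loop runs for a few epochs on the original training data, typically at a fraction (often around a tenth) of the original learning rate, since the weights only need small adjustments to absorb the quantization error.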