
No compression_loss needed for QAT? #3024

Answered by alexsu52
EmonLu asked this question in Q&A

Hello @EmonLu,

First, I would like to note that the create_compressed_model(...) API will be deprecated in the next release; nncf.quantize is the recommended API for QAT.

Why is there no compression_loss in the QAT docs? How does quantization update parameters like the scale factor and zero point?

QAT uses the model's ordinary training loss and does not require an additional compression loss.
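This point can be illustrated with a minimal NumPy sketch (an assumption-laden toy, not NNCF code): a fake-quantization step with a straight-through estimator lets the ordinary task loss alone drive parameter updates through the quantizer, with no separate compression loss.

```python
import numpy as np

def fake_quant(x, scale, num_bits=8):
    """Quantize-dequantize ("fake quantization"), as inserted during QAT.

    round() has zero gradient almost everywhere, so training relies on the
    straight-through estimator (STE): the backward pass treats the quantizer
    as the identity within the clipping range.
    """
    qmax = 2 ** (num_bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

# Toy regression y = 3x; a single weight is trained *through* the quantizer
# using only the ordinary MSE task loss -- no extra compression loss.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x
w, scale, lr = 1.0, 0.5, 0.01
for _ in range(300):
    wq = fake_quant(w, scale)
    err = wq * x - y
    # STE backward: d(wq)/d(w) is taken as 1, so the task-loss gradient
    # flows straight to the latent full-precision weight. Quantizer
    # parameters such as scale and zero-point can be trained the same way,
    # from gradients of the same task loss.
    grad_w = np.mean(2.0 * err * x)
    w -= lr * grad_w

print(fake_quant(w, scale))  # the quantized weight converges onto the grid near 3.0
```

The key observation is that nothing in the loop references a quantization-specific loss term: the quantizer only reshapes the forward pass, and the usual training objective supplies all gradients.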

Is PTQ indispensable for QAT?

PTQ is not strictly required, but it provides a good initialization for QAT and leads to better final accuracy.
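A toy sketch (plain NumPy, not NNCF's actual implementation) of why PTQ-style calibration is a good starting point: a quantizer scale estimated from calibration statistics yields far lower quantization error than an arbitrary, uncalibrated one, so QAT begins closer to a good solution.

```python
import numpy as np

def minmax_scale(calib, num_bits=8):
    # Symmetric per-tensor scale from calibration statistics -- the kind of
    # range estimation PTQ performs before QAT fine-tuning starts.
    qmax = 2 ** (num_bits - 1) - 1
    return np.max(np.abs(calib)) / qmax

def quant_error(x, scale, num_bits=8):
    # Mean squared quantize-dequantize error for a given scale.
    qmax = 2 ** (num_bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return np.mean((q * scale - x) ** 2)

rng = np.random.default_rng(0)
calib = rng.normal(size=1000)   # stand-in for calibration activations
good = minmax_scale(calib)      # data-driven scale, as PTQ would produce
bad = 1.0                       # arbitrary scale with no calibration
assert quant_error(calib, good) < quant_error(calib, bad)
```

Starting QAT from calibrated ranges means the fine-tuning stage only has to recover a small accuracy gap instead of relearning the quantization grid from scratch.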

Are there any tips for QAT to achieve better performance? e.g., using PTInitialDataloader for initialization.

PTInitialDataloader will be deprecated in the next release as well. Please take a look at Jupyt…

Answer selected by EmonLu