Hello,
I am using AIMET for QAT, but when I call fold_all_batch_norms I see a large accuracy loss between the model before and after the fold.
I also tried ./Examples/torch/quantization/qat.ipynb; fold_all_batch_norms produced differences there as well, but the results were acceptable.
Both models are built from Conv, BatchNorm, and ReLU layers with residual connections. Is there a guide describing how the operators are matched for folding? I suspect that the Conv/BN pairing is going wrong somewhere and causing the problem. A minimal comparison of the model before and after folding is sketched below.
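The snippet below is a rough sketch of how I measure the drift introduced by folding, assuming the fold_all_batch_norms API from aimet_torch.batch_norm_fold and using a torchvision ResNet-18 as a stand-in for my model (both are hypothetical substitutes, not my actual setup):

```python
import copy
import torch
from torchvision.models import resnet18
from aimet_torch.batch_norm_fold import fold_all_batch_norms

# Stand-in model with Conv + BatchNorm + ReLU and residual connections;
# eval mode so BatchNorm uses its running statistics
model = resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Keep an unfolded copy so the numerical drift of folding can be measured
original = copy.deepcopy(model)

# Fold BatchNorm layers into the preceding Conv layers; the returned list
# shows which (Conv, BN) pairs AIMET actually matched
folded_pairs = fold_all_batch_norms(model, input_shapes=(1, 3, 224, 224))
print(f"Folded {len(folded_pairs)} Conv/BN pairs")

# Compare outputs before and after folding; in eval mode the difference
# should be close to floating-point noise if the pairing is correct
with torch.no_grad():
    out_before = original(dummy_input)
    out_after = model(dummy_input)
    print("max abs diff:", (out_before - out_after).abs().max().item())
```

In my case this difference is much larger than I expected, which is why I suspect the operator matching.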