In a Federated Learning setting I want to use quantization techniques to improve communication efficiency. I was able to quantize a YOLO model to UInt8, but I didn't find how to dequantize the YOLO model back to float32. Is this possible with ONNX, and if so, how? Thanks in advance.

Replies: 2 comments
-
This was not implemented, mostly because it is impossible to restore the original weights once they have been quantized. Any particular reason you want to do this?
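To illustrate why an exact restore is impossible, here is a minimal NumPy sketch (not from the thread; all values are illustrative) of affine uint8 quantization and dequantization. Rounding during quantization discards information, so the round trip yields only an approximation of the original weights:

```python
import numpy as np

# Original float32 weights (toy example).
w = np.array([-0.42, 0.0, 0.3137, 1.0], dtype=np.float32)

# Affine quantization parameters derived from the tensor's range.
scale = (w.max() - w.min()) / 255.0
zero_point = np.round(-w.min() / scale).astype(np.uint8)

# Quantize: float32 -> uint8 (rounding discards information here).
q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)

# Dequantize: uint8 -> float32 (only an approximation of w).
w_restored = (q.astype(np.float32) - np.float32(zero_point)) * scale

print(w)           # [-0.42    0.      0.3137  1.    ]
print(w_restored)  # close to w, but not bit-identical
```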
-
I want to improve communication, because a quantized model is smaller to transmit, but later, in the next learning rounds (in Federated Learning), I want to continue training. In this paper (https://arxiv.org/abs/2312.15186), quantization was used as a compression technique; I think it is a good idea.
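For this use case, getting float32 weights back is mostly a matter of applying the DequantizeLinear formula, w ≈ (q − zero_point) · scale, to the stored initializers. Here is a minimal sketch, assuming a QDQ-format model whose quantized weights feed DequantizeLinear nodes with per-tensor scale and zero_point; the model filename is hypothetical, and per-axis quantization would need extra broadcasting logic:

```python
import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("yolo_quantized.onnx")  # hypothetical path
inits = {t.name: numpy_helper.to_array(t) for t in model.graph.initializer}

float_weights = {}
for node in model.graph.node:
    if node.op_type != "DequantizeLinear":
        continue
    x_name, scale_name = node.input[0], node.input[1]
    zp_name = node.input[2] if len(node.input) > 2 else None
    if x_name not in inits:
        continue  # quantized activation, not a stored weight
    q = inits[x_name].astype(np.float32)
    scale = inits[scale_name].astype(np.float32)
    zp = inits[zp_name].astype(np.float32) if zp_name else np.float32(0)
    # Linear dequantization: w ≈ (q - zero_point) * scale.
    float_weights[x_name] = (q - zp) * scale
```

As the first reply points out, this recovers only an approximation of the pre-quantization weights; for continued training or federated averaging that approximation is typically the intended starting point, but an exact restore is not possible.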