According to this PyTorch quantization guide, PyTorch now supports a faster int8 backend called X86 as a replacement for FBGEMM. However, in the tch-rs jit-quantized example, tch-rs only supports either FBGEMM or QNNPACK.
Will X86 be supported soon, or is there a workaround so that tch-rs can use the X86 backend?
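For reference, here is a minimal sketch of the backend selection the jit-quantized example performs today, assuming the current `tch::QEngine` API (which, to my knowledge, only exposes `NoQEngine`, `FBGEMM`, and `QNNPACK` variants, so there is nothing corresponding to X86 to select):

```rust
// Sketch based on the jit-quantized example; assumes the current tch-rs
// QEngine API, which has no X86 variant.
use tch::QEngine;

fn main() -> Result<(), tch::TchError> {
    // Select the quantized backend before loading a quantized model.
    // FBGEMM targets x86 CPUs; QEngine::QNNPACK would be used on ARM.
    QEngine::FBGEMM.set()?;
    Ok(())
}
```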