High-performance PyTorch operators for Apple Silicon (M1/M2/M3/M4).
| Package | Description | Install |
|---|---|---|
| mps-flash-attn | Flash Attention with O(N) memory | `pip install mps-flash-attn` |
| mps-bitsandbytes | 8-bit quantization (INT8/FP8) | `pip install mps-bitsandbytes` |
| mps-deform-conv | Deformable Convolution 2D | `pip install mps-deform-conv` |
| mps-conv3d | 3D Convolution | `pip install mps-conv3d` |
| mps-carafe | CARAFE content-aware upsampling | `pip install mps-carafe` |
| mps-correlation | Correlation layer for optical flow | `pip install mps-correlation` |
Or install all at once: `pip install mpsops`

PyTorch's MPS backend lacks many of the optimized operators available on CUDA. These packages bridge that gap with native Metal implementations, enabling models that would otherwise fail on Apple Silicon.
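To see why the O(N) memory bound for Flash Attention matters on unified-memory Apple Silicon machines, here is a back-of-envelope sketch (plain Python, no dependency on these packages; the head count and head dimension below are illustrative assumptions, not measured values). Standard attention materializes an N×N score matrix per head, while flash-style attention streams over tiles and only ever holds buffers that are linear in sequence length:

```python
def standard_attn_bytes(seq_len: int, n_heads: int, dtype_bytes: int = 2) -> int:
    """Memory for the full N x N attention score matrix (per batch element)."""
    return n_heads * seq_len * seq_len * dtype_bytes

def flash_attn_bytes(seq_len: int, n_heads: int, head_dim: int = 64,
                     dtype_bytes: int = 2) -> int:
    """Memory for the Q, K, V, and output buffers only -- linear in seq_len."""
    return n_heads * seq_len * head_dim * dtype_bytes * 4  # Q, K, V, O

# Rough fp16 estimates for a 16-head model at growing sequence lengths.
for n in (1_024, 8_192, 32_768):
    std = standard_attn_bytes(n, n_heads=16)
    fla = flash_attn_bytes(n, n_heads=16)
    print(f"N={n:>6}: standard {std / 2**30:6.2f} GiB  flash {fla / 2**30:6.2f} GiB")
```

At N=32768 the score matrix alone is ~32 GiB in fp16, more than the unified memory of most base-model Macs, while the linear-memory path stays in the hundreds of megabytes.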