MPS Ops

High-performance PyTorch operators for Apple Silicon (M1/M2/M3/M4).

Packages

| Package | Description | Install |
| --- | --- | --- |
| mps-flash-attn | Flash Attention with O(N) memory | `pip install mps-flash-attn` |
| mps-bitsandbytes | 8-bit quantization (INT8/FP8) | `pip install mps-bitsandbytes` |
| mps-deform-conv | Deformable Convolution 2D | `pip install mps-deform-conv` |
| mps-conv3d | 3D Convolution | `pip install mps-conv3d` |
| mps-carafe | CARAFE content-aware upsampling | `pip install mps-carafe` |
| mps-correlation | Correlation layer for optical flow | `pip install mps-correlation` |

Or install all at once:

pip install mpsops
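After installing, you can confirm that PyTorch sees the MPS backend before using any of these packages. This is a minimal sketch using only standard PyTorch APIs (no mpsops-specific calls), with a CPU fallback for machines without Apple Silicon:

```python
import torch

# Select the MPS device when available (Apple Silicon), else fall back to CPU.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

# Allocate a small tensor on the selected device as a smoke test.
x = torch.randn(4, 4, device=device)
print(x.device)
```

If this prints `mps`, the Metal-backed operators can run on the GPU; otherwise everything falls back to CPU.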

Why?

PyTorch's MPS backend lacks many of the optimized operators that CUDA provides. We bridge that gap with native Metal implementations, enabling models that would otherwise fail to run on Apple Silicon.
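Flash Attention is a good example of the gap. Stock PyTorch attention on MPS materializes the full O(N²) score matrix, which is what an O(N)-memory kernel like mps-flash-attn avoids. The exact import path for mps-flash-attn is not shown on this page, so the sketch below shows only the stock PyTorch baseline that such a package would replace:

```python
import torch
import torch.nn.functional as F

device = "mps" if torch.backends.mps.is_available() else "cpu"

# Toy attention inputs with shape (batch, heads, seq_len, head_dim).
q = torch.randn(1, 8, 128, 64, device=device)
k = torch.randn(1, 8, 128, 64, device=device)
v = torch.randn(1, 8, 128, 64, device=device)

# Stock PyTorch attention: on MPS this takes a generic path that builds
# the full seq_len x seq_len score matrix, i.e. O(N^2) memory.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 8, 128, 64])
```

A flash-attention kernel computes the same result tile by tile, so memory scales with sequence length rather than its square; the output shape and semantics are unchanged.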

Popular repositories

  1. mps-conv3d — 3D Convolution for Apple Silicon (MPS) (Objective-C++)
  2. mps-bitsandbytes — 8-bit quantization for PyTorch on Apple Silicon (M1/M2/M3/M4) (Python)
  3. mps-deform-conv — Deformable Convolution 2D for PyTorch on Apple Silicon (MPS) (Objective-C++)
  4. mps-flash-attention (Python)
  5. mps-correlation — Correlation layer for optical flow on Apple Silicon (MPS) (Objective-C++)
  6. mps-carafe — CARAFE content-aware upsampling for Apple Silicon (MPS) (Objective-C++)
