[Feature Request] Native WebGPU Execution Provider #22077
Labels
ep:WebGPU (ort-web webgpu provider)
feature request (request for unsupported feature or enhancement)
platform:web (issues related to ONNX Runtime web; typically submitted using template)
Describe the feature request
Request:
Leverage onnxruntime-web kernels to create a native WebGPU Execution Provider for non-web environments.

Story:
I am in a unique situation where my device supports Vulkan, but lacks support for ROCm and CUDA. In the related issue #21917, it seems that Vulkan support was requested, but the discussion appears to have stalled.
Given the progress I've seen with ONNX Runtime in the web environment, I was wondering if the development efforts on the web could be extended to implement a native C++ execution provider. A potential way to achieve this would be by using a library such as wgpu, or more specifically, wgpu-native, which would align well with ONNX Runtime's C++ codebase.
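For illustration, selecting such a provider could mirror how existing execution providers are appended through the ONNX Runtime C++ API. This is only a hypothetical sketch of what the requested feature might look like — the provider name "WebGPU" and the option keys shown are invented for this example, not an existing API:

```
// Hypothetical sketch -- no native WebGPU EP exists in ONNX Runtime today.
// Assumes a wgpu-native-backed provider exposed through the usual
// AppendExecutionProvider mechanism used by other named providers.
Ort::Env env;
Ort::SessionOptions options;
options.AppendExecutionProvider("WebGPU", {
    {"backend", "Vulkan"},  // hypothetical option: select the wgpu backend
    {"deviceId", "0"},      // hypothetical option: select the GPU
});
Ort::Session session(env, "model.onnx", options);
```

Since wgpu can target Vulkan, Metal, and DirectX 12, a backend option like the one sketched above would let a single EP cover the Linux GPUs described in this request without requiring ROCm or CUDA.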
Describe scenario use case
GPUs with no support for ROCm or CUDA, such as older or lower-end GPUs, are currently unable to fully leverage ONNX Runtime's GPU acceleration on Linux. While Windows users have the option to utilize DirectML for GPU support, there is no equivalent solution available for Linux users in this category. These GPUs, while not capable of running ROCm or CUDA, often have Vulkan support, making them suitable candidates for a WebGPU-based execution provider. A native WebGPU Execution Provider would enable efficient ONNX model execution on these devices, particularly in Linux environments, greatly expanding compatibility across platforms without requiring specialized GPU hardware.