Hi guys!
First of all, thanks for such great tools for working with Whisper. I just wanted to ask: are you going to integrate Whisper turbo into TensorRT-LLM and make it compatible with the Triton server?

Replies: 2 comments · 1 reply

-
Could you have a look? I think it should be very easy to support. (We already support it in sherpa-onnx; all we need to do is convert Whisper turbo to ONNX.)
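For reference, here is a minimal sketch of that ONNX export step. It assumes the openai-whisper package and that your installed version recognizes the "turbo" model name; the 128-mel input shape follows large-v3 and is also an assumption, so check against your model's config:

```python
# Minimal sketch: export the Whisper turbo encoder to ONNX.
# Assumes `pip install openai-whisper` and a version that knows "turbo".
import torch
import whisper

# Load on CPU for a deterministic export.
model = whisper.load_model("turbo", device="cpu").eval()

# Dummy log-mel spectrogram: (batch, n_mels, n_frames).
# 128 mel bins / 3000 frames is assumed here, following large-v3.
mel = torch.zeros(1, 128, 3000)

torch.onnx.export(
    model.encoder,                      # export the audio encoder module
    mel,
    "whisper-turbo-encoder.onnx",
    input_names=["mel"],
    output_names=["audio_features"],
    dynamic_axes={"mel": {0: "batch"}}, # allow variable batch size
    opset_version=17,
)
```

The decoder would need a similar (slightly more involved) export because of its key/value cache; the encoder above is just the simplest piece.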
-
@janekpi You could follow https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/whisper/convert_checkpoint.py#L38. Nothing needs to change except the model name. We will also bring this to sherpa soon.
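As a rough sketch of what that looks like in practice: the `--model_name` flag and the `"large-v3-turbo"` value below are assumptions based on the linked file, so check the script's argparse definitions for the exact names your TensorRT-LLM version accepts.

```python
# Hypothetical invocation of TensorRT-LLM's Whisper conversion script with
# the turbo checkpoint, run from examples/whisper/ in the TensorRT-LLM repo.
import subprocess

subprocess.run(
    [
        "python3", "convert_checkpoint.py",
        "--model_name", "large-v3-turbo",   # assumed name for Whisper turbo
        "--output_dir", "whisper_turbo_ckpt",
    ],
    check=True,  # raise if the conversion script fails
)
```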