1 file changed: +13 −0 lines changed

@@ -21,6 +21,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisper)
 - Runs on the CPU
 - [Partial GPU support for NVIDIA via cuBLAS](https://github.com/ggerganov/whisper.cpp#nvidia-gpu-support-via-cublas)
 - [Partial OpenCL GPU support via CLBlast](https://github.com/ggerganov/whisper.cpp#opencl-gpu-support-via-clblast)
+- [BLAS CPU support via OpenBLAS](https://github.com/ggerganov/whisper.cpp#blas-cpu-support-via-openblas)
 - [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/whisper.h)
 
 Supported platforms:
@@ -346,6 +347,18 @@ cp bin/* ../
 Run all the examples as usual.
 
+## BLAS CPU support via OpenBLAS
+
+Encoder processing can be accelerated on the CPU via OpenBLAS.
+First, make sure you have installed `openblas`: https://www.openblas.net/
+
+Now build `whisper.cpp` with OpenBLAS support:
+
+```
+make clean
+WHISPER_OPENBLAS=1 make -j
+```
+
 ## Limitations
 
 - Inference only
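Before running the build commands added above, it can help to confirm that OpenBLAS is actually discoverable on the system. A minimal sketch; the pkg-config module name `openblas` is an assumption and varies by distro (some package it as `blas` or `openblas64`), so this check degrades gracefully rather than failing:

```shell
# Sketch: probe for an OpenBLAS installation before building.
# The module name "openblas" is an assumption; adjust for your distro.
if command -v pkg-config >/dev/null 2>&1 && pkg-config --exists openblas 2>/dev/null; then
    openblas_status="found ($(pkg-config --modversion openblas))"
else
    openblas_status="not found"
fi
echo "openblas: ${openblas_status}"
```

If the probe reports "not found", the `WHISPER_OPENBLAS=1` build may still succeed when the headers and library live in a standard search path; the check is only a convenience, not a requirement of the build.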