Commit 4e16a8f

Author: 0xsourcecode
readme : highlight OpenBLAS support (ggerganov#956)
* highlight openblas support
* Update README.md
1 parent 77eab3f commit 4e16a8f

File tree

1 file changed: +13 -0 lines changed

README.md (+13)
@@ -21,6 +21,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisper)
 - Runs on the CPU
 - [Partial GPU support for NVIDIA via cuBLAS](https://github.com/ggerganov/whisper.cpp#nvidia-gpu-support-via-cublas)
 - [Partial OpenCL GPU support via CLBlast](https://github.com/ggerganov/whisper.cpp#opencl-gpu-support-via-clblast)
+- [BLAS CPU support via OpenBLAS](https://github.com/ggerganov/whisper.cpp#blas-cpu-support-via-openblas)
 - [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/whisper.h)
 
 Supported platforms:
@@ -346,6 +347,18 @@ cp bin/* ../
 
 Run all the examples as usual.
 
+## BLAS CPU support via OpenBLAS
+
+Encoder processing can be accelerated on the CPU via OpenBLAS.
+First, make sure you have installed `openblas`: https://www.openblas.net/
+
+Now build `whisper.cpp` with OpenBLAS support:
+
+```
+make clean
+WHISPER_OPENBLAS=1 make -j
+```
+
 ## Limitations
 
 - Inference only
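
For context, a minimal usage sketch of an OpenBLAS-enabled build, assuming the repository's usual `base.en` model download script and the bundled `samples/jfk.wav` sample; the run command itself is unchanged from a plain CPU build, only the encoder's matrix work is routed through OpenBLAS:

```
# sketch: fetch a model and transcribe the bundled sample with the main example
bash ./models/download-ggml-model.sh base.en
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```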
