From afef670cbb474f0ec5ff1e4eb5da48a09d62299c Mon Sep 17 00:00:00 2001
From: FireFragment <55660550+FireFragment@users.noreply.github.com>
Date: Tue, 12 Sep 2023 21:27:13 +0200
Subject: [PATCH] Fix model conversion command in LLAMA guide (#215)

---
 website/docs/llama-tutorial.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/llama-tutorial.md b/website/docs/llama-tutorial.md
index 75e46c0c..b1fc46b2 100644
--- a/website/docs/llama-tutorial.md
+++ b/website/docs/llama-tutorial.md
@@ -188,7 +188,7 @@ pip install -r requirements.txt
 With the Python dependencies installed, you need to run the conversion script that will convert the Alpaca model to a binary format that llama.cpp can read. To do that, run the following command in your terminal:
 
 ```
-python convert.py /models/alpaca-native
+python convert.py ./models/alpaca-native
 ```
 
 This will run the `convert.py` script that is located in the `llama.cpp` directory. The script will take the Alpaca model directory as an argument and output a binary file called `ggml-model-f32.bin` in the same directory.
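For reference, a minimal sketch of the corrected step as it would be run from the root of a llama.cpp checkout. The directory layout and the output filename follow the tutorial text quoted in the hunk above; the exact arguments of `convert.py` may differ across llama.cpp versions.

```sh
# Run from the root of the llama.cpp checkout (per the tutorial's layout).

# Install the Python dependencies used by the conversion script.
pip install -r requirements.txt

# The leading "./" makes the path relative to the current directory rather
# than the filesystem root, which is the one-character bug this patch fixes.
python convert.py ./models/alpaca-native

# Per the tutorial text, the converted weights are written next to the
# source model:
#   ./models/alpaca-native/ggml-model-f32.bin
```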