Commit afef670
Fix model conversion command in LLAMA guide (#215)
FireFragment authored Sep 12, 2023
1 parent d1c2abf commit afef670
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion website/docs/llama-tutorial.md
@@ -188,7 +188,7 @@ pip install -r requirements.txt
With the Python dependencies installed, you need to run the conversion script that will convert the Alpaca model to a binary format that llama.cpp can read. To do that, run the following command in your terminal:

```
-python convert.py /models/alpaca-native
+python convert.py ./models/alpaca-native
```

This runs the `convert.py` script located in the `llama.cpp` directory. It takes the Alpaca model directory as an argument and writes a binary file called `ggml-model-f32.bin` to the same directory.
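The one-character fix above matters because `/models/alpaca-native` is an absolute path resolved from the filesystem root, while `./models/alpaca-native` is relative to the current working directory, i.e. the `llama.cpp` checkout the tutorial has you work in. A minimal Python sketch of the distinction (the paths here are just the ones from the diff, not files that need to exist):

```python
from pathlib import Path

# "/models/alpaca-native" is absolute: it resolves from the filesystem
# root, no matter where the command is run from.
absolute = Path("/models/alpaca-native")

# "./models/alpaca-native" is relative: it resolves against the current
# working directory, which in this tutorial is the llama.cpp checkout.
relative = Path("./models/alpaca-native")

print(absolute.is_absolute())  # True on POSIX systems
print(relative.is_absolute())  # False
```

So with the old command, `convert.py` would look for the model at the root of the filesystem and fail unless the weights happened to be installed there; the corrected relative path points at the `models/` directory inside the repository.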
