10 | 10 |
11 | 11 | Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others) in pure C/C++
12 | 12 |
13 |    | -> [!IMPORTANT]
14 |    | -[2024 Jun 12] Binaries have been renamed w/ a `llama-` prefix. `main` is now `llama-cli`, `server` is `llama-server`, etc (https://github.com/ggerganov/llama.cpp/pull/7809)
15 |    | -
16 | 13 | ## Recent API changes
17 | 14 |
18 |    | -- [2024 Jun 26] The source code and CMake build scripts have been restructured https://github.com/ggerganov/llama.cpp/pull/8006
19 |    | -- [2024 Apr 21] `llama_token_to_piece` can now optionally render special tokens https://github.com/ggerganov/llama.cpp/pull/6807
20 |    | -- [2024 Apr 4] State and session file functions reorganized under `llama_state_*` https://github.com/ggerganov/llama.cpp/pull/6341
21 |    | -- [2024 Mar 26] Logits and embeddings API updated for compactness https://github.com/ggerganov/llama.cpp/pull/6122
22 |    | -- [2024 Mar 13] Add `llama_synchronize()` + `llama_context_params.n_ubatch` https://github.com/ggerganov/llama.cpp/pull/6017
23 |    | -- [2024 Mar 8] `llama_kv_cache_seq_rm()` returns a `bool` instead of `void`, and new `llama_n_seq_max()` returns the upper limit of acceptable `seq_id` in batches (relevant when dealing with multiple sequences) https://github.com/ggerganov/llama.cpp/pull/5328
24 |    | -- [2024 Mar 4] Embeddings API updated https://github.com/ggerganov/llama.cpp/pull/5796
25 |    | -- [2024 Mar 3] `struct llama_context_params` https://github.com/ggerganov/llama.cpp/pull/5849
   | 15 | +- [Changelog for `libllama` API](https://github.com/ggerganov/llama.cpp/issues/9289)
   | 16 | +- [Changelog for `llama-server` REST API](https://github.com/ggerganov/llama.cpp/issues/9291)
26 | 17 |
27 | 18 | ## Hot topics
28 | 19 |
29 |    | -- **`convert.py` has been deprecated and moved to `examples/convert_legacy_llama.py`, please use `convert_hf_to_gguf.py`** https://github.com/ggerganov/llama.cpp/pull/7430
30 |    | -- Initial Flash-Attention support: https://github.com/ggerganov/llama.cpp/pull/5021
31 |    | -- BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920
32 |    | -- MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387
33 |    | -- Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
34 |    | -- Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
35 |    | -- Multi-GPU pipeline parallelism support https://github.com/ggerganov/llama.cpp/pull/6017
36 |    | -- Looking for contributions to add Deepseek support: https://github.com/ggerganov/llama.cpp/issues/5981
37 |    | -- Quantization blind testing: https://github.com/ggerganov/llama.cpp/discussions/5962
38 |    | -- Initial Mamba support has been added: https://github.com/ggerganov/llama.cpp/pull/5328
   | 20 | +- *add hot topics here*
39 | 21 |
40 | 22 | ----
41 | 23 |
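For readers following the API entries removed in the diff above (notably `llama_kv_cache_seq_rm()` returning a `bool`, the new `llama_n_seq_max()` from https://github.com/ggerganov/llama.cpp/pull/5328, and `llama_synchronize()` from https://github.com/ggerganov/llama.cpp/pull/6017), here is a minimal sketch of how those calls might be combined. It assumes an already-initialized `struct llama_context`; the exact signatures have shifted across the linked PRs, so verify them against the `llama.h` shipped with your build.

```c
// Sketch only: assumes a llama_context created elsewhere.
// Signatures follow the PRs referenced above; check your llama.h.
#include <stdint.h>
#include <stdio.h>

#include "llama.h"

// Drop all KV-cache data for one sequence, then wait for pending work.
static void reset_sequence(struct llama_context * ctx, llama_seq_id seq_id) {
    // llama_n_seq_max() reports how many distinct seq_ids a batch may use.
    const uint32_t n_seq_max = llama_n_seq_max(ctx);
    if (seq_id < 0 || (uint32_t) seq_id >= n_seq_max) {
        fprintf(stderr, "seq_id %d out of range (max %u)\n", (int) seq_id, n_seq_max);
        return;
    }

    // Since #5328, llama_kv_cache_seq_rm() returns false when the removal
    // cannot be performed (e.g. partial removal on recurrent models).
    if (!llama_kv_cache_seq_rm(ctx, seq_id, -1, -1)) {
        fprintf(stderr, "failed to clear KV cache for seq %d\n", (int) seq_id);
        return;
    }

    // llama_synchronize() blocks until all in-flight computation has finished.
    llama_synchronize(ctx);
}
```

The `-1, -1` position range is the conventional way to address a whole sequence; the `bool` return value exists mainly because recurrent models (see the Mamba entry above) cannot remove an arbitrary sub-range of their state.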