Actions: ggerganov/llama.cpp

CI

11,707 workflow runs
add ggml_backend_sched_dump_dot
CI #17785: Pull request #10825 synchronize by foldl (foldl:add_sched_dot_dump)
December 18, 2024 13:14, 58m 28s

server : fix logprobs, make it OAI-compatible
CI #17784: Pull request #10783 synchronize by ngxson (ngxson:xsn/fix_logprobs)
December 18, 2024 13:11, 32m 0s

tts : add OuteTTS support
CI #17783: Pull request #10784 synchronize by ggerganov (gg/tts-add-outetts)
December 18, 2024 12:14, 51m 28s

tts : add OuteTTS support
CI #17782: Pull request #10784 synchronize by ggerganov (gg/tts-add-outetts)
December 18, 2024 12:05, 10m 2s

Support for Llama-3_1-Nemotron-51B
CI #17781: Pull request #10669 synchronize by ymcki (ymcki:master)
December 18, 2024 12:02, 24m 43s

server : fix logprobs, make it OAI-compatible
CI #17780: Pull request #10783 synchronize by ngxson (ngxson:xsn/fix_logprobs)
December 18, 2024 11:47, 1h 10m 11s

server : output embeddings for all tokens when pooling = none
CI #17777: Pull request #10861 synchronize by ggerganov (gg/server-embeddings-all)
December 18, 2024 09:34, 1h 8m 56s

server : add "tokens" output (#10853)
CI #17775: Commit 0e70ba6 pushed by ggerganov (master)
December 18, 2024 09:05, 2h 12m 29s

server : (embeddings) using same format for "input" and "content" (#1…
CI #17774: Commit 4682887 pushed by ggerganov (master)
December 18, 2024 08:55, 2h 22m 44s

Add Falcon3 support and Fix issue #10875
CI #17773: Pull request #10883 synchronize by mokeddembillel (mokeddembillel:falcon3_integration)
December 18, 2024 08:20, 2h 13m 36s

server : add "tokens" output
CI #17772: Pull request #10853 synchronize by ggerganov (gg/server-content-tokens)
December 18, 2024 08:17, 1h 50m 16s

server : add "tokens" output
CI #17770: Pull request #10853 synchronize by ggerganov (gg/server-content-tokens)
December 18, 2024 08:04, 12m 59s

server : add "tokens" output
CI #17769: Pull request #10853 synchronize by ggerganov (gg/server-content-tokens)
December 18, 2024 08:02, 2m 48s

Revert "llama : add Falcon3 support (#10864)" (#10876)
CI #17764: Commit 4da69d1 pushed by slaren (master)
December 18, 2024 00:36, 39m 14s

Revert "Add Falcon3 model support"
CI #17763: Pull request #10876 opened by slaren (revert-10864-falcon3_integration)
December 17, 2024 22:31, 46m 28s

Use model->gguf_kv for loading the template instead of using the C AP…
CI #17762: Commit d62b532 pushed by slaren (master)
December 17, 2024 22:24, 53m 33s