Issues: Mozilla-Ocho/llamafile
#662 · Bug: Enabling GPU support on Debian 12 (Bookworm) with AMD doesn't work (bug, medium severity) · opened Dec 21, 2024 by vadz
#659 · Feature Request: When https://huggingface.co/Mozilla/Qwen-2.5-7B-Chat-llamafile (enhancement) · opened Dec 19, 2024 by bphd
#658 · Bug: cannot load model if btrfs subvolume is in path (bug, medium severity) · opened Dec 17, 2024 by mounta11n
#653 · Bug: Crashes with SIGABRT after about 2 hours (bug, medium severity) · opened Dec 7, 2024 by bjornbm
#646 · Bug: Broken display of Hebrew letters in the chat interface in llamafiler-0.8.17 (bug, high severity) · opened Dec 1, 2024 by NHLOCAL
#643 · Bug: llamafiler /tokenize endpoint with add_special does not add special tokens (awaiting response, bug, medium severity) · opened Nov 26, 2024 by k8si
#640 · Firefox AI features (Summarize, etc.) do not work with llamafile · opened Nov 25, 2024 by TFWol
#636 · Bug: Port collision when running multiple models (bug, medium severity) · opened Nov 22, 2024 by heaversm
#630 · Bug: Why doesn't llamafile remove end tokens like <|eot_id|> or <end_of_turn>? (bug, low severity) · opened Nov 15, 2024 by jeezrick
#628 · Feature Request: Support AVX-512 for Intel Rocket Lake (enhancement) · opened Nov 14, 2024 by aluklon
#611 · Bug: Shared memory not working, results in segfault (bug, critical severity) · opened Nov 7, 2024 by abishekmuthian
#610 · Bug: segfault loading models with KV quantization and related problems (bug, high severity) · opened Nov 5, 2024 by mseri
#609 · Bug: GPU acceleration not working with ccache installed for a second user on Linux (bug, medium severity) · opened Nov 5, 2024 by lovenemesis
#600 · Feature Request: CORS fallback for OpenAI API-compatible endpoints (enhancement) · opened Oct 24, 2024 by DK013
#589 · Bug: llamafiler /v1/embeddings endpoint does not return model name (bug, low severity) · opened Oct 14, 2024 by wirthual
#588 · Bug: --path option broken when pointing to a folder (bug, high severity) · opened Oct 13, 2024 by gorkem
#587 · Bug: binary called ape in PATH breaks everything (bug, high severity) · opened Oct 13, 2024 by step21
#584 · Bug: Phi3.5-mini-instruct Q4_K_L GGUF-based llamafile CUDA error on AMD iGPU (bug, high severity) · opened Oct 10, 2024 by eddan168
#583 · Feature Request: /v1/models endpoint for further OpenAI API compatibility (enhancement) · opened Oct 10, 2024 by quantumalchemy