Segmentation fault (core image saved) #152
Please review the following PR that I opened yesterday: #142. The most likely root cause is memory: either you do not have the minimum required RAM at all, or you have at least the minimum but the RAM actually available before loading the weights was insufficient for other reasons, such as background tasks.
I have 16 GB of RAM in total, and from what I investigated it is very likely that the problem is related to my memory (I haven't tested it yet). I'm thinking of switching to a lighter language model to see if that solves the problem. I don't have much skill in C/C++ to investigate the cause deeply, such as digging into ggml internals. I read your PR. Thanks for your feedback!
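Before switching models, it may be worth checking how much RAM is actually free right before loading the weights. A minimal sketch for Linux (it reads /proc/meminfo; the ~6065 MB figure is taken from the "ggml ctx size" line in the crash log and is only an estimate of what the load needs):

```shell
# Sketch: compare available RAM against roughly what the 7B q4 model needs.
required_mb=6066
avail_mb=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
echo "available: ${avail_mb} MB, required: ~${required_mb} MB"
if [ "$avail_mb" -lt "$required_mb" ]; then
    echo "not enough free RAM: close background tasks or try a smaller model"
fi
```

If the available figure is below the required one even on a 16 GB machine, background tasks are eating the headroom and the crash-on-load hypothesis becomes much more plausible.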
* Add AVX2 version of ggml_vec_dot_q4_1
* Small optimisations to q4_1 dot product (@Const-me)
* Rearrange Q4_1 quantization to work for multipart models. (Fix antimatter15#152)
* Fix ggml_vec_mad_q4_1 too
* Fix non-vectorised q4_1 vec mul
Whenever I try to execute the code on my machine I get the error: "Segmentation fault (core dumped)". I know that this failure usually indicates that the program tried to access an area of memory that was not allocated or is not accessible.
I have tried debugging the code on my machine but I still get stuck with this problem.
I am running it on Arch Linux.
(base) [andre@archlinux alpaca.cpp]$ ls
build chat.cpp convert-pth-to-ggml.py ggml.c ggml.o Makefile quantize.sh screencast.gif utils.h
chat CMakeLists.txt ggml-alpaca-7b-q4.bin ggml.h LICENSE quantize.cpp README.md utils.cpp utils.o
(base) [andre@archlinux alpaca.cpp]$ ./chat
main: seed = 1679698347
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.34 MB
Falha de segmentação (imagem do núcleo gravada)  [Portuguese for "Segmentation fault (core dumped)"]
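To get past guessing, the core file the message mentions can be inspected with gdb. A hedged sketch for an Arch/systemd setup (the `coredumpctl` step assumes systemd is managing core dumps, which is the Arch default):

```shell
ulimit -c unlimited        # let the kernel write a core file in this shell
ulimit -c                  # prints "unlimited" if the limit took effect
# Reproduce the crash:               ./chat
# Inspect the core on Arch/systemd:  coredumpctl gdb ./chat
# Inside gdb, `bt` prints the backtrace of the faulting frame,
# e.g. whether it dies inside llama_model_load or a ggml allocation.
```

Even without C/C++ experience, pasting the `bt` output into the issue would let others pinpoint the faulting line.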