convert.py ending with "Killed" at lm_head layer #178
Christopheraburns asked this question in Q&A (unanswered)
Replies: 1 comment
Have you checked the data integrity of your model file?
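One way to act on that suggestion (a sketch; the reference hashes would have to come from the file listing on the model's Hugging Face repo page, where the LFS pointer details show a SHA-256 per file) is to hash the downloaded shards and compare:

```shell
# Hash every weight shard and config file in the downloaded model
# directory ("zephyr-7b-beta" is the -i directory passed to convert.py).
# Compare each hash against the SHA-256 shown in the repo's LFS
# pointer details; a mismatch means a corrupted or partial download.
sha256sum zephyr-7b-beta/*.safetensors zephyr-7b-beta/*.json
```

A truncated shard from an interrupted download is a common cause of conversion failures, so this check is cheap to rule out first.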
Python 3.10.12 | Ubuntu 20.04 | CUDA 11.8 | NVIDIA RTX 4000 Ada (20 GB)
I am converting the new zephyr-7b (beta) with ExLlamaV2 using the following command:

python exllamav2/convert.py \
    -i zephyr-7b-beta \
    -o quant \
    -c wikitext-test.parquet \
    -b 5.0
It seems to quantize fine, but at the lm_head layer it ends with "Killed". If I then try to load the quantized model with the included test_inference.py, I get the error:

ValueError: ## Could not find lm_head.* in model

which makes sense given where the process was killed. How can I troubleshoot this further? Thanks!
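For context: a bare "Killed" printed by the shell almost always means the Linux kernel's OOM killer terminated the process because the machine ran out of RAM, not that convert.py itself crashed. A hedged sketch of how to confirm this (the exact log wording varies by kernel version):

```shell
# Look for OOM-killer entries in the kernel log from around the time
# convert.py died; a match confirms the process was killed for memory,
# not by a bug in the script.
dmesg -T | grep -i -E 'out of memory|oom-killer|killed process' | tail -n 5

# On systemd machines, journalctl can query the same kernel log:
journalctl -k --since "1 hour ago" | grep -i oom
```

If an OOM kill is confirmed, freeing system RAM or adding swap before the lm_head step is the usual remedy, since that layer is typically the largest single tensor being quantized.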