Cannot load vicuna-7b-delta-v0 #14
Comments
Welcome to Aspen's first issue!
Is the model still the unrecovered delta, or the fully recovered weights? If it is only the delta, it definitely cannot be loaded. You can try running mlora.py to load just the model and see whether that works.
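For background on the comment above: `vicuna-7b-delta-v0` stores only the difference from the base LLaMA weights, and the full weights are recovered by adding the delta to the base, tensor by tensor (in practice with FastChat's delta-application tool). A toy sketch of the idea, using plain NumPy arrays in place of real checkpoints (the `apply_delta` helper here is illustrative, not part of this repo):

```python
import numpy as np

def apply_delta(base_weights: dict, delta_weights: dict) -> dict:
    """Recover full weights: full = base + delta, key by key."""
    return {name: base_weights[name] + delta
            for name, delta in delta_weights.items()}

# toy example with a single 2x2 "layer"
base = {"layers.0.wq": np.array([[1.0, 2.0], [3.0, 4.0]])}
delta = {"layers.0.wq": np.array([[0.5, -0.5], [0.0, 1.0]])}
full = apply_delta(base, delta)
```

A delta checkpoint alone is meaningless to a loader, which is why loading it directly fails.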
You can also try the version on the mikecovlee_dev branch, which supports 8-bit quantization.
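For intuition on what 8-bit quantization saves: each tensor is stored as int8 codes plus a floating-point scale, roughly a quarter of the fp32 footprint. A toy absmax-quantization sketch (illustrative only, not the actual implementation on that branch):

```python
import numpy as np

def quantize_absmax_int8(w: np.ndarray):
    """Map floats into [-127, 127] int8 codes plus one scale per tensor."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction of the original floats."""
    return q.astype(np.float32) * scale

w = np.array([0.1, -0.5, 0.25, 1.0], dtype=np.float32)
q, scale = quantize_absmax_int8(w)
w_hat = dequantize(q, scale)
```

The reconstruction error is bounded by half a quantization step, which is usually acceptable for inference.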
Oh, I see what's going on. I'll push a patch tomorrow.
Please close this issue if the problem is solved.
Original issue body: Using `aspen.load_llama_tf_weight` to load the vicuna-7b-delta-v0 model uses more than 30 GB of memory, which causes an OOM. Using `utils.convert_hf_to_pth` to convert vicuna-7b-delta-v0 to a .pth model, then using `aspen.load_llama_7b_weight` to load the .pth model, an error is reported.
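For context on the >30 GB figure: a 7B-parameter model in fp32 needs about 26 GiB for the weights alone, before any loading or activation overhead, so an fp32 load on a typical single GPU will run out of memory. Rough arithmetic:

```python
params = 7_000_000_000     # ~7B parameters for a 7B LLaMA/Vicuna model
gib = 1024 ** 3

fp32_gib = params * 4 / gib  # 4 bytes per parameter
fp16_gib = params * 2 / gib  # 2 bytes per parameter
int8_gib = params * 1 / gib  # 1 byte per parameter (8-bit quantized)

print(f"fp32: {fp32_gib:.1f} GiB, fp16: {fp16_gib:.1f} GiB, int8: {int8_gib:.1f} GiB")
# → fp32: 26.1 GiB, fp16: 13.0 GiB, int8: 6.5 GiB
```

This is why the 8-bit branch mentioned above can make the difference between fitting and OOM.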