Thanks for your interesting work.
Could you provide more detailed guidance on the content of the Notice? Should I change LlamaForCausalLM to AutoModel on line 163 in finetune_kg.py? Doing so gives me an error.
In addition, could you please tell me how much memory your A800 has? I cannot run the training on a GPU with 48 GB of memory.
You can modify the code to:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use the Auto classes so that non-Llama checkpoints load as well.
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    # load_in_8bit=True,
    torch_dtype=torch.float32,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
```
Also, comment out the line: shutil.copyfile(base_model...)
The A800 has about 80 GB of memory. I recommend reducing the mini_batch_size to 2 or 1, but this may affect the final results.
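For reference, a minimal sketch of how a smaller mini batch size can be paired with gradient accumulation so the effective batch size stays roughly the same. The TrainingArguments-based setup below is only an illustration; finetune_kg.py may wire these options differently (e.g. through its own mini_batch_size flag):

```python
from transformers import TrainingArguments

# A minimal sketch, assuming a Hugging Face Trainer-style setup
# (finetune_kg.py may configure this through its own command-line flags).
training_args = TrainingArguments(
    output_dir="./outputs",
    per_device_train_batch_size=1,   # smaller mini batch to fit into 48 GB
    gradient_accumulation_steps=8,   # effective batch size = 1 * 8
    num_train_epochs=3,
    fp16=False,                      # consistent with torch.float32 above
)
```

Even with accumulation, results may differ slightly from the original settings, as noted above.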