Replies: 1 comment
-
You can try LLMtuner; it provides PEFT + LoRA, etc., in a few lines of code.
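
For reference, here is roughly what the PEFT + LoRA setup looks like, sketched with the standard Hugging Face `peft` library rather than LLMtuner's own API (whose exact calls I won't guess at). The model name and the LoRA hyperparameters are illustrative assumptions, not values from this thread:

```python
# Minimal PEFT + LoRA setup with the Hugging Face peft library
# (illustrative sketch, not LLMtuner's API; model name and LoRA
# hyperparameters are example choices).
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

lora_config = LoraConfig(
    r=32,                                 # rank of the low-rank update matrices
    lora_alpha=64,                        # scaling factor for the LoRA update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically ~1% of the full model
```

Only the small adapter matrices are trained, which is what cuts the memory footprint so sharply compared to full fine-tuning.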
-
faster-whisper is a wonderful solution for faster inference, but fine-tuning still needs a better solution. I rented an A6000 to fine-tune on a 5.5-hour dataset; using Sanchit Gandhi's method it took 7 hours to train and needed 44.8 GB of RAM, and I don't know what will happen if I increase the amount of data. I found a faster training method named peft + lora + dnd, but it doesn't support Windows and cannot be converted to CTranslate2. Can anyone give some suggestions? Thank you. A sketch of one possible workaround follows below.
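
On the CTranslate2 point specifically: the converter accepts plain Transformers checkpoints but not a `PeftModel`, so one possible workaround, assuming the adapter was trained with the Hugging Face `peft` library, is to merge the LoRA weights back into the base model before converting. The adapter path below is a placeholder for wherever your trained adapter lives:

```python
# Merge LoRA adapter weights into the base Whisper model so the result
# is a plain Transformers checkpoint that ct2-transformers-converter accepts.
# "path/to/lora-adapter" is a placeholder for your trained adapter directory.
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights

merged.save_pretrained("whisper-small-merged")
WhisperProcessor.from_pretrained("openai/whisper-small").save_pretrained(
    "whisper-small-merged"
)

# The merged folder can then be converted like any stock checkpoint, e.g.:
#   ct2-transformers-converter --model whisper-small-merged \
#       --output_dir whisper-small-ct2 --quantization float16
```

Since merging produces an ordinary Whisper checkpoint, this sidesteps the conversion limitation entirely; whether it also solves the Windows issue depends on which part of the training stack was incompatible.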