Fine tuning on a dataset #26
Hi, I'm not sure what you mean by:
"For finetuning, please try to use this method." (OpenLRM/openlrm/runners/train/base_trainer.py, line 206 in c2260e0)
Hi @ZexinHe, thank you for the advice. Also, I have no idea how to prepare those. It would be great if you could help me.
I'm not so sure, but following the code, openlrm/datasets/base.py (line 46) expects a file path to a JSON file containing the uuids. So I would try to create a JSON file with ["uid1","uid2",...] (see #33) and reference that file by path via the meta_path setting.
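If that reading of base.py is right, a minimal sketch for producing such a meta file might look like this. The filename train_uids.json and the uid values are placeholders, not names from the OpenLRM repo:

```python
import json

# Placeholder uids; in practice these would be the ids of your own
# dataset samples, matching the folder/file names your data uses.
uids = ["uid1", "uid2", "uid3"]

# Write a flat JSON list, the shape the dataset code appears to expect.
with open("train_uids.json", "w") as f:
    json.dump(uids, f)

# Sanity check: the file should round-trip to the same flat list.
with open("train_uids.json") as f:
    print(json.load(f))  # prints ['uid1', 'uid2', 'uid3']
```

You would then point the training config's meta_path at this file; the exact config key layout depends on your OpenLRM version, so check the example configs shipped with the repo.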
Thank you for your kind reply, @juanfraherrero!
Hi @juanfraherrero, is it possible to finetune the pretrained LRM models with my custom small dataset?
Hi @hayoung-jeremy, I tried to train for some epochs, but I don't have enough VRAM (always CUDA out-of-memory) to even load the model. As for a guideline, I followed the instructions in the README: first prepare your data, then train. But as I said, I can't load the model, so I don't know if I did it right. Sorry! Good luck.
Hi @juanfraherrero, thank you for your kind reply! I've tried fine-tuning using the base model provided by OpenLRM.
Can anyone give us some insight into how long it takes to prepare the training data? Thanks.