About fine-tuning #2
Is there code for fine-tuning a new model on my own training data?
Is it possible to use Japanese training data (text data) directly?
Thanks for your interest in our work, and sorry for the late reply. The released code does not include fine-tuning the LM for generation, but technically you can use any fine-tuned LM with ZeroGen, including Japanese LMs.
I see. Thank you so much for your guidance.
I have another question about fine-tuning.
For the first question, the answer is no: you don't need to train a Japanese LM based on cambridgeltl/magic_mscoco. It's enough to fine-tune a pre-trained Japanese LM on the textual captions of any caption dataset you want. For the second question, I believe there are plenty of open-source codebases on GitHub for fine-tuning Japanese LMs. Unfortunately, I'm not an expert in Japanese or Japanese LMs and have no experience fine-tuning one, so I'm afraid you'll need to find the code and model on your own. But please don't hesitate to ask about any issues you encounter when applying ZeroGen to other LMs, and you're always welcome to open a PR to integrate them into ZeroGen :)
I see. Thank you so much for your information.
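
For readers landing on this thread later, here is a minimal sketch of what "fine-tune a pre-trained Japanese LM on caption text" could look like with Hugging Face Transformers. This is not part of the ZeroGen release; the model name `rinna/japanese-gpt2-medium` and the data file `captions.txt` (one caption per line) are illustrative assumptions, and any Japanese causal LM and caption dataset should work the same way.

```python
# Sketch: standard causal-LM fine-tuning on caption text (assumptions noted above).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "rinna/japanese-gpt2-medium"  # assumption: any pre-trained Japanese causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Some GPT-2-style tokenizers ship without a pad token; fall back to EOS.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Hypothetical training file: one textual caption per line.
dataset = load_dataset("text", data_files={"train": "captions.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=64)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False gives ordinary next-token (causal) language modeling.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="japanese-caption-lm",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    learning_rate=5e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

The resulting checkpoint is a plain Hugging Face causal LM, which is the form ZeroGen expects for its generation LM per the maintainer's comments above.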