
Why, when I use the same code and dataset to train the model and then generate QA pairs from the training-set contexts, are the generated pairs not as good as the training set (e.g., only one or two out of four generated QA pairs are good)? #4

Open
CoderJoyce opened this issue Aug 7, 2020 · 2 comments

@CoderJoyce

No description provided.

@seanie12
Owner

seanie12 commented Aug 7, 2020

Hi, how did you generate the QA pairs?
If you sample multiple latent variables, the quality of some generated QA pairs is not good, so we refine them with a QA model.

@CoderJoyce
Author

I ran generate_qa.py to generate the QA pairs.
How can I refine them with the QA model? Is there any code available for that?
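The refinement code is not shown in this thread, but a common way to refine sampled QA pairs with a QA model is roundtrip-consistency filtering: answer each generated question with an extractive QA model on the same context, and keep the pair only if the model's predicted answer agrees with the generated answer. The sketch below is an assumption, not code from this repository; the helper names (`token_f1`, `filter_qa_pairs`), the 0.8 threshold, and the `predict_fn` callback are all illustrative, and in practice `predict_fn` would wrap a pretrained extractive QA model.

```python
from collections import Counter


def token_f1(pred: str, gold: str) -> float:
    """SQuAD-style token-level F1 between a predicted and a generated answer."""
    pred_toks = pred.lower().split()
    gold_toks = gold.lower().split()
    if not pred_toks or not gold_toks:
        return float(pred_toks == gold_toks)
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)


def filter_qa_pairs(context, pairs, predict_fn, threshold=0.8):
    """Keep (question, answer) pairs that a QA model can reproduce.

    predict_fn(question, context) -> predicted answer span; here it is a
    placeholder for a pretrained extractive QA model (an assumption).
    """
    kept = []
    for question, answer in pairs:
        if token_f1(predict_fn(question, context), answer) >= threshold:
            kept.append((question, answer))
    return kept
```

As a toy usage example, with a stubbed-in predictor standing in for the QA model:

```python
context = "Shakespeare wrote Hamlet. Two plus two equals four."
pairs = [("Who wrote Hamlet?", "Shakespeare"), ("What is two plus two?", "five")]
predict = lambda q, c: "Shakespeare" if "Hamlet" in q else "four"
print(filter_qa_pairs(context, pairs, predict))  # only the consistent pair survives
```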
