
NLL on test/train for SeqGAN #218

Open
OlgaGolovneva opened this issue Sep 24, 2019 · 6 comments
Labels: enhancement (New feature or request)

Comments

@OlgaGolovneva

Is it possible to add likelihood-based metrics on generated data for SeqGAN evaluation? They are described in the original paper and in the paper accompanying the implementation you refer to (https://arxiv.org/pdf/1802.01886.pdf).

@gpengzhi added the question (Further information is requested) label on Sep 26, 2019
@ZhitingHu
Member

Evaluating likelihood is straightforward with, e.g., texar.losses.sequence_sparse_softmax_cross_entropy. Here is an example of using the loss function:
https://github.com/asyml/texar/blob/master/examples/language_model_ptb/lm_ptb.py#L103-L106
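For instance, a minimal sketch of evaluating NLL on held-out data with that loss; the tensor names logits, target_ids, and seq_lengths below are placeholders for your own model's outputs, not names from the linked example:

import texar as tx

# Assumed tensors (hypothetical names):
#   logits      -- [batch, time, vocab] decoder logits
#   target_ids  -- [batch, time] ground-truth token ids
#   seq_lengths -- [batch] true sequence lengths (padding is masked out)
nll = tx.losses.sequence_sparse_softmax_cross_entropy(
    labels=target_ids,
    logits=logits,
    sequence_length=seq_lengths)
# Evaluate nll over every test batch and average; how the loss is averaged
# or summed across batch/timesteps is configurable via keyword arguments.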

@ZhitingHu added the enhancement (New feature or request) label and removed the question label on Sep 26, 2019
@OlgaGolovneva
Author

Thanks a lot! Could you please also help me figure out how to change the k, g, and d parameters (the numbers of epochs and update steps for generator and discriminator training) mentioned in the original SeqGAN paper (https://arxiv.org/pdf/1609.05473.pdf)?

@ZhitingHu
Member

Discriminator training is performed by the function _d_run_epoch. You may customize it for more control.

The while-loop:

import tensorflow as tf

while True:
    try:
        # Training op here, e.g., sess.run(d_train_op)
        pass
    except tf.errors.OutOfRangeError:
        break

runs one epoch of training (it loops until the dataset iterator is exhausted).
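For example, a hypothetical outer loop mirroring the paper's schedule; the helper names _g_run_epoch and _d_run_epoch come from the example script, but their exact signatures and the constants below are assumptions:

# Sketch only -- not the repo's actual training driver.
G_STEPS = 1   # generator update rounds per iteration ("g" in the paper)
D_STEPS = 5   # discriminator data-regeneration rounds ("d" in the paper)
K = 3         # discriminator training epochs per round ("k" in the paper)

for _ in range(NUM_ADVERSARIAL_ITERS):   # NUM_ADVERSARIAL_ITERS is assumed
    for _ in range(G_STEPS):
        _g_run_epoch(sess)               # policy-gradient generator update
    for _ in range(D_STEPS):
        for _ in range(K):
            _d_run_epoch(sess)           # one epoch over real + generated data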

@OlgaGolovneva
Author

Thank you! How can I control the number of mini-batch gradient steps the discriminator runs with the same generator input? In the while-loop, it first draws negative examples from the generator, and then updates the discriminator once with a combination of positive and negative samples.

@ZhitingHu
Member

You may turn infer_sample_ids here into a TF placeholder, and feed the same generator sample when optimizing the discriminator for multiple steps.
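A rough sketch of that idea in TF1 style; the names fake_ids_ph, d_train_op, and K below are hypothetical, not identifiers from the example script:

import tensorflow as tf

# Placeholder standing in for infer_sample_ids, so one generator sample can
# be reused across several discriminator updates (names are hypothetical).
fake_ids_ph = tf.placeholder(tf.int32, shape=[None, None], name='fake_ids')
# ... build the discriminator loss and d_train_op on fake_ids_ph ...

fake_ids = sess.run(infer_sample_ids)   # draw one batch from the generator
for _ in range(K):                      # K discriminator steps on that batch
    sess.run(d_train_op, feed_dict={fake_ids_ph: fake_ids})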

@OlgaGolovneva
Author

Thank you!
