
max_size parameter, does it impact training #38

Open
mark-joe opened this issue Jun 5, 2018 · 2 comments
mark-joe commented Jun 5, 2018

Not really an issue; I'm just puzzled about what max_size is used for. It serves as the size of an ImagePool and is used to store 'fake outputs', here:

```python
# Update G network and record fake outputs
fake_A, fake_B, _, summary_str = self.sess.run(
    [self.fake_A, self.fake_B, self.g_optim, self.g_sum],
    feed_dict={self.real_data: batch_images, self.lr: lr})
self.writer.add_summary(summary_str, counter)
[fake_A, fake_B] = self.pool([fake_A, fake_B])

# Update D network
_, summary_str = self.sess.run(
    [self.d_optim, self.d_sum],
    feed_dict={self.real_data: batch_images,
               self.fake_A_sample: fake_A,
               self.fake_B_sample: fake_B,
               self.lr: lr})
self.writer.add_summary(summary_str, counter)
```
Does the pool size influence training? By default it is set to 50. Any ideas on this?
Thanks!

@Deeplearning20

Hello, I have the same question. Have you solved it?

@starcream

The original CycleGAN paper adopts this idea to keep an image buffer that stores the 50 previously generated images. Yet I have no idea why it works.
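
For reference, the CycleGAN paper takes this trick from Shrivastava et al. (2017): updating the discriminator on a history of generated images, rather than only the latest generator outputs, is reported to reduce model oscillation during training. Below is a minimal sketch of how such a pool typically behaves; the class and method names here are illustrative assumptions and may not match this repo's exact ImagePool implementation:

```python
import copy
import random

class ImagePool:
    """Sketch of a history buffer for generated images.

    Once full, the pool returns (with probability 0.5) a randomly chosen
    earlier image in place of the incoming one, so the discriminator is
    sometimes trained on stale generator outputs.
    """

    def __init__(self, max_size=50):
        self.max_size = max_size
        self.images = []

    def __call__(self, image):
        if self.max_size <= 0:
            # Pool disabled: always train D on the newest fake.
            return image
        if len(self.images) < self.max_size:
            # Fill the buffer first, passing new images straight through.
            self.images.append(image)
            return image
        if random.random() > 0.5:
            # Return a random old image and store the new one in its place.
            idx = random.randrange(self.max_size)
            old = copy.copy(self.images[idx])
            self.images[idx] = image
            return old
        # Otherwise return the new image unchanged.
        return image
```

With max_size = 0 the discriminator only ever sees the freshest fakes; a larger pool makes the discriminator's training distribution lag behind the generator, which is presumably the stabilizing effect the paper is after.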
