In https://gluon.mxnet.io/chapter14_generative-adversarial-networks/dcgan.html#
```python
with autograd.record():
    # train with real image
    output = netD(data).reshape((-1, 1))
    errD_real = loss(output, real_label)
    metric.update([real_label,], [output,])

    # train with fake image
    fake = netG(latent_z)
    output = netD(fake.detach()).reshape((-1, 1))
    errD_fake = loss(output, fake_label)
    errD = errD_real + errD_fake
    errD.backward()
    metric.update([fake_label,], [output,])
```
I am confused by this line of code:

```python
fake = netG(latent_z)
```

In my opinion, this line is used to generate fake data, so `netG` should be in predict mode. Am I right? If so, there seems to be a contradiction: since `fake = netG(latent_z)` sits inside the `with autograd.record():` block, `netG` is implied to run in train mode (see https://gluon.mxnet.io/chapter03_deep-neural-networks/mlp-dropout-gluon.html#Integration-with-autograd.record).
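To make the train/predict-mode question concrete, here is a minimal pure-Python sketch (it does not use MXNet, and all names in it are illustrative, not the real Gluon API) of the semantics being asked about: a recording scope that flips a global "training" flag by default, which changes the behavior of mode-dependent layers such as Dropout.

```python
# Illustrative sketch only: mimics how a recording scope (like Gluon's
# autograd.record()) enters training mode by default, affecting layers
# whose behavior differs between train and predict mode.
from contextlib import contextmanager

_train_mode = False  # module-level mode flag, predict mode by default

@contextmanager
def record(train_mode=True):
    """Enter training mode for the duration of the block (the default)."""
    global _train_mode
    prev, _train_mode = _train_mode, train_mode
    try:
        yield
    finally:
        _train_mode = prev  # restore the previous mode on exit

class Dropout:
    """A mode-dependent layer: active only in training mode."""
    def __call__(self, x):
        if _train_mode:
            return [0.0 for _ in x]  # toy stand-in: "drop" everything
        return x  # identity at predict time

layer = Dropout()
print(layer([1.0, 2.0]))       # predict mode: [1.0, 2.0]
with record():                 # training mode, like autograd.record()
    print(layer([1.0, 2.0]))   # training mode: [0.0, 0.0]
```

This is exactly why the question matters: if `netG(latent_z)` runs inside `autograd.record()`, any mode-dependent layers in `netG` behave as in training, not prediction.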