
Sampling from z #6

Open
extragoya opened this issue May 3, 2021 · 1 comment

@extragoya

I noticed that the sampling function seems to be out of date:

However, I'm having trouble performing a proper reverse sampling - for instance, if I use the following code snippet, I am unable to recreate the input from the output:

z_fc, z_conv = self.model(x)                       # forward pass: input -> latents
with torch.no_grad():
    rev_ims = self.model([z_fc, z_conv], rev=True) # reverse pass: latents -> reconstructed input

x is in the range [0, 1], but rev_ims ends up roughly in the range [-100, 100]. Is there something I'm doing trivially wrong here?

@extragoya
Author

I figured out the issue: I believe it's due to different statistics being used in the BN layers of the subnetworks. This can produce very different activation maps. E.g., if you run the forward pass in training mode but sample backward in evaluation mode, the recreated input can be orders of magnitude different in scale. The same happens if you run the forward pass with a full batch in training mode but sample backward with a sub-batch.
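
For reference, a minimal sketch of the mode-consistency check I mean (following the snippet above; model and x stand in for self.model and the input, and the rev=True call is assumed to behave as in that snippet):

model.eval()  # freeze BN to its running statistics for BOTH directions

with torch.no_grad():
    z_fc, z_conv = model(x)                    # forward: input -> latents
    rev_ims = model([z_fc, z_conv], rev=True)  # reverse: latents -> input, same BN stats

# With BN behaving identically in both directions, the reconstruction error should be small:
print((x - rev_ims).abs().max())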

Have you considered using something like InstanceNorm instead, given this effective 'non-invertibility' when BN is run in mismatched eval/training modes, or with differently sized batches in training mode? A hypothetical subnetwork constructor with that swap is sketched below.
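
(The layer sizes and the subnet_conv name here are just illustrative, not this repo's actual architecture.)

import torch.nn as nn

def subnet_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, 128, 3, padding=1),
        nn.InstanceNorm2d(128, affine=True),  # per-sample statistics: no train/eval or batch-size dependence
        nn.ReLU(),
        nn.Conv2d(128, c_out, 3, padding=1),
    )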

Anyway, I'll leave this issue open for now, in case my diagnosis is incorrect. Feel free to close. Any insights into the impact of BN would also be welcome - thanks!
