
Why can the model only train on a size of [64, 64]? #99

Open
ohhh-yang opened this issue Aug 16, 2024 · 1 comment

@ohhh-yang

When I try a size other than [64, 64], it fails with the error below:
File "/home/PyTorch-VAE/models/vanilla_vae.py", line 122, in forward
mu, log_var = self.encode(input)
File "/home/PyTorch-VAE/models/vanilla_vae.py", line 91, in encode
mu = self.fc_mu(result)
File "/home/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 94, in forward
return F.linear(input, self.weight, self.bias)
File "/home/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/functional.py", line 1753, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: mat1 dim 1 must match mat2 dim 0

@unistdJRZ

See the models folder: every model's latent-space layers are hard-coded to fit the feature map size that a 64x64 input produces.
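
To spell out the numbers: the VanillaVAE encoder stacks one stride-2 convolution per entry of hidden_dims = [32, 64, 128, 256, 512], so a 64x64 input is downsampled to a 512×2×2 feature map (2048 values), which is what fc_mu = nn.Linear(hidden_dims[-1] * 4, latent_dim) expects. Here is a minimal sketch of how the flattened size depends on the input resolution; the helper name and the exact conv hyperparameters (kernel 3, stride 2, padding 1, the repo defaults) are assumptions, not code copied from the repo:

```python
# Sketch only: assumes each encoder block uses Conv2d(kernel_size=3, stride=2, padding=1),
# as in the default VanillaVAE config. flattened_size() is a hypothetical helper.
def flattened_size(img_size: int, hidden_dims=(32, 64, 128, 256, 512)) -> int:
    side = img_size
    for _ in hidden_dims:            # each encoder block roughly halves H and W
        side = (side - 1) // 2 + 1   # output side of a k=3, s=2, p=1 conv
    return hidden_dims[-1] * side * side

print(flattened_size(64))   # 2048 -> matches fc_mu's expected in_features
print(flattened_size(128))  # 8192 -> mismatch, hence the RuntimeError above
```

So to train on a different resolution you would likely need to replace the hard-coded hidden_dims[-1] * 4 in fc_mu, fc_var, and decoder_input with a size computed like this, and mirror the change on the decoder side.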
