Hello :)
Thank you for sharing this amazing code!
I'm running this code in Colab (Python 3), and I'm changing several things to accommodate my setup.
Everything works fine so far, except when running the training script:
# in /models/iq.py,
# the error occurs specifically in line 119,
eps = Variable((Normal(torch.zeros_like(mu).cuda(), self.alpha.data.pow(-1))).sample())
due to:
# line 82
self.alpha = nn.Parameter(torch.randn(z_size)) # may contain negative or near-zero values.
I realized that the standard deviation of a Normal distribution must be positive; around the same point, the gradient suddenly explodes and the loss becomes NaN.
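To confirm this, here is a minimal check (my own code, separate from the repo): with argument validation on, which is the PyTorch default, Normal rejects a non-positive scale outright; with validation off, log_prob silently returns NaN, which is exactly the kind of failure that poisons the loss.

import torch
from torch.distributions import Normal

# With validation on (the default), a non-positive scale raises ValueError.
try:
    Normal(torch.zeros(3), torch.tensor([-1.0, 0.0, 1.0]))
except ValueError as err:
    print("rejected:", err)

# With validation off, the bad scale slips through and log_prob is NaN.
dist = Normal(torch.zeros(1), torch.tensor([-1.0]), validate_args=False)
print(dist.log_prob(torch.zeros(1)))  # tensor([nan])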
Thus, I introduced the following lines of code.
# Replace line 119 with:
d = self.alpha.data.pow(-1)
d = torch.nan_to_num(d.clamp(min=1e-4, max=2), nan=1e-4) # replacing NaN with 2 instead of 1e-4 causes gradient explosion.
eps = Variable((Normal(torch.zeros_like(mu).cuda(), d)).sample())
But the gradient still explodes.
Do you have any suggestions?
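One direction I have been considering is sketched below (my own code, not from the repo; the class and attribute names are made up for illustration): drop alpha.pow(-1) entirely and derive the scale through softplus, so the value handed to Normal is strictly positive by construction instead of being clamped after the fact.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal

class PositiveScaleNoise(nn.Module):
    # Sketch: learn an unconstrained raw_alpha and derive the std from it
    # via softplus, which is strictly positive for any real input.
    def __init__(self, z_size, floor=1e-4):
        super().__init__()
        self.raw_alpha = nn.Parameter(torch.randn(z_size))
        self.floor = floor  # keeps the std away from exact zero

    def scale(self):
        return F.softplus(self.raw_alpha) + self.floor

    def sample_noise(self, mu):
        # rsample keeps gradients flowing into raw_alpha (reparameterization),
        # unlike sampling from detached .data and wrapping it in Variable.
        return Normal(torch.zeros_like(mu), self.scale()).rsample()

Since the derivative of softplus is a sigmoid, bounded between 0 and 1, the scale can neither go negative nor jump discontinuously the way a hard clamp on alpha.pow(-1) can. Would a change along these lines make sense here?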
Below is the training log.