I came in to write the same thing.
Besides, the loss is now accumulated on every batch. More precisely, it is the loss / batchsize value that gets accumulated.
It seems to me more logical to output the loss value on the last batch of training.
MSELoss uses 'mean' as its default reduction, so the loss is already averaged over the batch. I therefore think

epoch_loss += (loss.detach().item() / batchsize)

is incorrect: it divides by the batch size a second time.
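To illustrate the point, here is a minimal sketch (variable names like `criterion` and the toy tensors are my own, not from the code under discussion): with `reduction='mean'`, each batch loss is already a per-sample average, so one way to accumulate a correct epoch average is to multiply each batch loss back by its batch size and divide once by the total sample count at the end.

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()  # default reduction='mean': averages over the batch

pred = torch.tensor([1.0, 2.0, 3.0, 4.0])
target = torch.tensor([1.5, 2.0, 2.0, 4.0])

# 'mean' reduction equals the plain per-element average of squared errors
loss = criterion(pred, target)
manual = ((pred - target) ** 2).mean()
assert torch.isclose(loss, manual)

# Accumulate over (possibly unequal) batches without double division:
epoch_loss = 0.0
n_samples = 0
for batch_pred, batch_target in [(pred[:2], target[:2]), (pred[2:], target[2:])]:
    batch_loss = criterion(batch_pred, batch_target)
    # undo the per-batch mean, then divide once at the end
    epoch_loss += batch_loss.item() * batch_pred.size(0)
    n_samples += batch_pred.size(0)

mean_epoch_loss = epoch_loss / n_samples
assert abs(mean_epoch_loss - loss.item()) < 1e-6
```

Dividing the already-averaged `batch_loss` by `batchsize` again would instead accumulate per-sample-squared values, shrinking the reported loss by a factor of the batch size.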