a maybe forgotten log function #3
Comments
I have a similar question about the loss calculation. We have:

et = torch.mean(torch.exp(M(z_bar, x_tilde)))
M.ma_et += ma_rate * (et.detach().item() - M.ma_et)
mutual_information = torch.mean(M(z, x_tilde)) - torch.log(et) * et.detach() / M.ma_et

What I don't understand is the extra factor of et.detach() / M.ma_et in the last term. Thanks.
I have the same question. I am confused about the et.detach() / M.ma_et term, and it would be very kind of you to help me understand it.
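As a side note, here is a minimal sketch of the term being asked about, using toy tensors in place of the statistics network outputs (none of this is the repository's code). The gradient of torch.log(et) with respect to the parameters is grad(et) / et, which is biased when et comes from a small mini-batch; multiplying log(et) by the constant et.detach() / ma_et rescales that gradient to grad(et) / ma_et, i.e. the mini-batch denominator is swapped for the (detached) moving average, while the value of the term stays close to log(et) as long as ma_et tracks et.

```python
import torch

torch.manual_seed(0)
T = torch.randn(8, requires_grad=True)   # stand-in for M(z_bar, x_tilde) outputs
ma_et = 0.9                               # stand-in for the moving average M.ma_et

et = torch.mean(torch.exp(T))

# plain log term: gradient w.r.t. T is grad(et) / et
(g_plain,) = torch.autograd.grad(torch.log(et), T, retain_graph=True)

# rescaled term: gradient w.r.t. T is grad(et) / ma_et
(g_corr,) = torch.autograd.grad(torch.log(et) * et.detach() / ma_et, T, retain_graph=True)

# the rescaling only swaps the denominator of the gradient
print(torch.allclose(g_corr, g_plain * et.detach() / ma_et))  # True
```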
Maybe I can answer the questions above. I think "loss = -(torch.mean(t) - (1/ma_et.mean()).detach()*torch.mean(et))" in MINE.ipynb is equal to the expression in the first comment, in terms of the gradients it produces.
Anyone still looking at this? @diweiqiang @yassouali @yrchen92 I think that, from the perspective of gradients,

loss = -(torch.mean(t) - (1/ma_et.mean()).detach()*torch.mean(et))

and

mutual_information = torch.mean(M(z, x_tilde)) - torch.log(et) * et.detach() / M.ma_et

should deliver the same gradients. Both account for the correction of the biased gradient, but the loss values are different: the first one is not a lower bound on the MI, while the second one is. Please correct me if I am wrong.
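To make the equivalence claim above concrete, here is a small self-contained check under toy assumptions (random tensors standing in for M(z, x_tilde) and M(z_bar, x_tilde), and a fixed number standing in for the running average ma_et): the two losses produce the same gradients but different values.

```python
import torch

torch.manual_seed(0)
t     = torch.randn(8, requires_grad=True)   # stand-in for M(z, x_tilde)      (joint samples)
t_bar = torch.randn(8, requires_grad=True)   # stand-in for M(z_bar, x_tilde)  (shuffled samples)
ma_et = torch.tensor(0.9)                    # stand-in for the moving average

et = torch.mean(torch.exp(t_bar))

# (1) the MINE.ipynb form: no log, exp-term divided by the detached moving average
loss1 = -(torch.mean(t) - (1.0 / ma_et).detach() * et)

# (2) the form from the first comment: log(et) rescaled by et.detach() / ma_et
loss2 = -(torch.mean(t) - torch.log(et) * et.detach() / ma_et)

g1 = torch.autograd.grad(loss1, (t, t_bar), retain_graph=True)
g2 = torch.autograd.grad(loss2, (t, t_bar), retain_graph=True)

print(all(torch.allclose(a, b) for a, b in zip(g1, g2)))   # True: same gradients
print(loss1.item(), loss2.item())                          # but different loss values
```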
in function "learn_mine" of file MINE.ipynb with expression"loss = -(torch.mean(t) - (1/ma_et.mean()).detach()*torch.mean(et))", do you forget a torch.log in torch.mean(et) or not, sorry if it is as it is