
Memory-efficient implementation #9

Open
alisiahkoohi opened this issue Jun 6, 2020 · 4 comments

Comments

@alisiahkoohi

Is there a plan to develop a memory-efficient back-propagation training mode? Perhaps a flag that, when activated, causes the forward-pass network states to be recomputed during back-propagation by inverting the network layer by layer, instead of being stored during the forward pass.
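For illustration, the idea would look roughly like the sketch below: a custom `torch.autograd.Function` that discards the block's input in the forward pass and reconstructs it in `backward()` by calling a hypothetical `block.inverse()`. This is not FrEIA's API, just a minimal sketch of the recompute-on-backward trick, assuming each block exposes an analytic inverse (and accepting some floating-point drift from the inversion).

```python
import torch

class RecomputeByInversion(torch.autograd.Function):
    """Run an invertible block without keeping its input; the input is
    recovered in backward() by inverting the block from its own output."""

    @staticmethod
    def forward(ctx, x, block):
        ctx.block = block
        with torch.no_grad():
            y = block(x)                     # no activations stored for autograd
        ctx.save_for_backward(y.detach())
        return y

    @staticmethod
    def backward(ctx, grad_y):
        (y,) = ctx.saved_tensors
        block = ctx.block
        with torch.no_grad():
            x = block.inverse(y)             # recover the forward-pass input
        x = x.detach().requires_grad_(True)
        with torch.enable_grad():
            y_again = block(x)               # local re-forward, only for this block
        # accumulates gradients into the block's parameters and into x.grad
        torch.autograd.backward(y_again, grad_y)
        return x.grad, None                  # no gradient for the `block` argument
```

Usage would then be `y = RecomputeByInversion.apply(x, block)` for each block in the chain, trading one extra forward pass per block for not storing intermediate activations.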

@Gword

Gword commented Sep 12, 2020

Hello, I am trying to implement the memory-efficient approach you mentioned. Could you tell me whether you have seen source code for such an implementation in PyTorch? Thank you!

@alisiahkoohi
Author

Hello, I am aware of other libraries that provide this functionality. For example, see MemCNN.
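Roughly following MemCNN's documented example (exact argument names may differ between versions), wrapping an invertible coupling so that the input is freed after the forward pass and recomputed during backward looks like this:

```python
import torch
import memcnn

# Any sub-network can be used inside the coupling; this one mirrors MemCNN's docs.
class ExampleOperation(torch.nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.seq = torch.nn.Sequential(
            torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            torch.nn.BatchNorm2d(channels),
            torch.nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.seq(x)

# An additive coupling makes the wrapped computation analytically invertible;
# it splits the 10 input channels into two halves of 5.
invertible_module = memcnn.AdditiveCoupling(
    Fm=ExampleOperation(channels=10 // 2),
    Gm=ExampleOperation(channels=10 // 2),
)

# keep_input=False asks MemCNN to free the input after the forward pass and
# recompute it via the inverse during backward -- the memory-saving mode.
wrapper = memcnn.InvertibleModuleWrapper(fn=invertible_module,
                                         keep_input=False,
                                         keep_input_inverse=False)

x = torch.rand(2, 10, 8, 8)
y = wrapper(x)
y.sum().backward()   # parameter gradients are computed as usual
```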

@psteinb
Collaborator

psteinb commented Dec 22, 2020

But if I understand MemCNN correctly, it uses a different architecture to construct the normalizing flow, doesn't it? So this approach cannot easily be put to use within the existing infrastructure of FrEIA.
I haven't dug deep enough into either library to have a good idea of the feasibility.

@ardizzone
Member

ardizzone commented Feb 9, 2021

I think it would be possible. Before FrEIA, we had already implemented this in some home-made normalizing flows.
Because this is a larger feature, I am moving the issue to https://github.com/VLL-HD/FrEIA-community

Perhaps we can get this done in the next weeks, at least for the most common modules (specifically AllInOneBlock, which combines the coupling, scaling, and permutation operations that have become standard in the literature).
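Not FrEIA's actual AllInOneBlock, but a toy sketch of the coupling + permutation combination, just to show the kind of analytic `inverse()` the recomputation scheme would rely on (all names and structure here are illustrative only):

```python
import torch

class ToyCouplingWithPermutation(torch.nn.Module):
    """Affine coupling followed by a fixed permutation; forward() can be
    undone exactly by inverse(), so activations never need to be stored."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        # subnet predicting log-scale s and translation t from the first half
        self.subnet = torch.nn.Sequential(
            torch.nn.Linear(self.half, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 2 * (dim - self.half)),
        )
        self.register_buffer("perm", torch.randperm(dim))           # fixed permutation
        self.register_buffer("inv_perm", torch.argsort(self.perm))  # its inverse

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.subnet(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(s) + t            # affine coupling on the second half
        y = torch.cat([x1, y2], dim=1)
        return y[:, self.perm]                # permute so all dims get mixed over depth

    def inverse(self, y):
        y = y[:, self.inv_perm]               # undo the permutation
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.subnet(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-s)         # invert the affine map analytically
        return torch.cat([y1, x2], dim=1)
```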

@ardizzone ardizzone transferred this issue from vislearn/FrEIA Feb 9, 2021