Documentation for pl.LightningModule that includes many nn.Modules #28
Here is an example: in my code (not the colab above, but a similar style), I don't OOM when I create the model; I OOM when I run it.
How do I memory-profile to find out why I OOM?
Thanks for reporting. I'll investigate the integration with pytorch-lightning this weekend. But in principle, the only thing that needs to be done is to add the forward function into the line_profiler.
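For illustration, a minimal sketch of that suggestion, registering a model's forward with pytorch_memlab's LineProfiler. The Net class, its layer sizes, and the input shape are hypothetical placeholders, and a CUDA device is assumed:

```python
import torch
import torch.nn as nn
from pytorch_memlab import LineProfiler

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(16, 32, kernel_size=3)  # hypothetical layer

    def forward(self, x):
        return self.conv1(x)

net = Net().cuda()
x = torch.randn(8, 16, 128).cuda()  # (batch, channels, length), hypothetical shape

# Register the forward function whose line-by-line GPU memory usage should be
# recorded, run the workload inside the context, then print the collected stats.
with LineProfiler(Net.forward) as prof:
    net(x)
prof.display()
```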
It looks like our current implementation cannot profile the detailed memory usage inside a class defined like Net below:

```python
import pytorch_lightning as pl
import torch.nn as nn
from pytorch_memlab import profile

class Net(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(xxx)  # xxx kept as the original placeholder arguments

    @profile
    def forward(self, input):
        out = self.conv1(input)
        return out
```
@Stonesjtu if I have an nn.Module that contains other nn.Modules (which in turn contain other nn.Modules), do I add the @profile decorator to all of the nn.Modules to see what is happening? Thank you for the help.
A common workflow is to profile top-down. Usually 2 or 3 levels of nesting are enough.
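A sketch of that top-down workflow, using hypothetical Encoder/Net module names and layer sizes: decorate only the top-level forward first, and once its report shows which line dominates memory, decorate that sub-module's forward as well.

```python
import torch
import torch.nn as nn
from pytorch_memlab import profile

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(256, 1024)

    @profile                      # step 2: added after the top-level report pointed here
    def forward(self, x):
        return torch.relu(self.linear(x))

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.head = nn.Linear(1024, 10)

    @profile                      # step 1: profile the top-level forward first
    def forward(self, x):
        h = self.encoder(x)       # if this line dominates memory, descend into Encoder
        return self.head(h)
```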
@Stonesjtu wanted to ping on this issue to see if there is a better way to use memlab with lightning now.
@turian Does the MemReporter work for you? It is supposed to work recursively on more complicated nn.Modules.
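For reference, a minimal sketch of MemReporter on a module with nested sub-modules; the Net class below is a hypothetical placeholder and a CUDA device is assumed:

```python
import torch
import torch.nn as nn
from pytorch_memlab import MemReporter

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(256, 1024), nn.ReLU())
        self.head = nn.Linear(1024, 10)

    def forward(self, x):
        return self.head(self.encoder(x))

net = Net().cuda()
reporter = MemReporter(net)   # walks the module tree and tracks its tensors

out = net(torch.randn(8, 256).cuda()).sum()
out.backward()
reporter.report()             # prints per-tensor device memory usage, including sub-modules
```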
I have a pl.LightningModule (pytorch-lightning) that includes many nn.Modules.
It's not obvious from the documentation how I can profile all the LightningModule tensors and the subordinate Module tensors. Could you please provide an example?
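One way this might look, as a sketch only (the LitModel class and its layers are hypothetical): since a pl.LightningModule is itself an nn.Module, the same two tools apply directly, with @profile giving line-by-line memory for a method and MemReporter covering the whole module tree.

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl
from pytorch_memlab import MemReporter, profile

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 512), nn.ReLU())
        self.decoder = nn.Linear(512, 10)

    @profile                      # line-by-line memory of this forward
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = LitModel().cuda()
model(torch.randn(8, 784).cuda())

# MemReporter walks the LightningModule and all subordinate nn.Modules.
MemReporter(model).report()
```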