Request: solving the lack of incremental reporting in loops / functions #31
Hi Stas, thanks for your detailed feature description. I would like to propose a sample to make sure I get the point.

Statement unrolling: suppose I have such a function:

```python
@profile
def func():
    net = torch.nn.Linear(5, 5)
    x = torch.Tensor(5, 5)
    for _ in range(10):
        x = net(x)
```

The expected output should be:

```python
def func():
    for _ in range(10):  # first hit
        x = net(x)
    for _ in range(10):  # second hit
        x = net(x)
    for _ in range(10):  # third hit (probably)
        x = net(x)
    # ...
```

Function unrolling: if you want to profile such a function like

```python
def inner(x):
    x = net(x)
    return x

def outter(x):
    for _ in range(10):
        x = inner(x)
```

you can simply add `@profile` to `inner`, and the unrolled output would be:

```python
def inner(x):
    x = net(x)
    return x

def inner(x):
    x = net(x)
    return x

def inner(x):
    x = net(x)
    return x

def inner(x):
    x = net(x)
    return x
```
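The per-hit reporting sketched above could be prototyped with Python's tracing hook. The snippet below is my own illustration, not pytorch_memlab's implementation; `trace_first_hits` and the hit cap are assumptions. It records a separate entry for the first few executions of each line instead of one aggregate row:

```python
import sys
from collections import defaultdict

def trace_first_hits(func, max_hits=3):
    """Record a separate entry for each of the first `max_hits`
    executions of every line in `func`, instead of one aggregate row.
    A memory profiler would sample its counters at the same point;
    here we just store the hit index (1, 2, 3, ...)."""
    hits = defaultdict(list)

    def tracer(frame, event, arg):
        if event == "line":
            key = (frame.f_code.co_filename, frame.f_lineno)
            if len(hits[key]) < max_hits:
                hits[key].append(len(hits[key]) + 1)
        return tracer

    sys.settrace(tracer)
    try:
        func()
    finally:
        sys.settrace(None)
    return dict(hits)

def work():  # stand-in for the profiled function
    total = 0
    for i in range(10):
        total += i
    return total

hits = trace_first_hits(work, max_hits=3)
```

After this runs, the lines inside the loop have three recorded hits each, while lines outside the loop have one, which is exactly the first/second/third split described above.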
For functions, yes; for loops, no. We want to do the same as for functions, so that the body of the loop can be profiled. That is, we write it symbolically:
output:
I hope I was able to explain in the OP how one doesn't get the incremental info when the loop is repeated multiple times. One workaround would be to turn the body of the loop into a function and profile it, but that may require significant alterations to the user's code.
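The hoist-into-a-function workaround can be sketched like this. It is a minimal illustration with a toy computation; in real use `step` would carry the actual loop body and be decorated with pytorch_memlab's `@profile`:

```python
# Before: loop body inline; a line profiler reports one aggregate row
# for the body, covering all 10 iterations at once.
def func_inline():
    x = 1.0
    for _ in range(10):
        x = x * 0.5 + 1.0
    return x

# Workaround: hoist the body into a helper so each iteration becomes a
# separate profiled call (the @profile decorator is omitted here so the
# sketch runs standalone).
def step(x):
    return x * 0.5 + 1.0

def func_hoisted():
    x = 1.0
    for _ in range(10):
        x = step(x)
    return x
```

The two versions compute the same result; only the attribution of per-iteration stats changes, at the cost of editing the user's code.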
This is a great tool for finding where the memory has gone - thank you!
I have a request:
Problem:
Memory is reported incorrectly for any loop or function: since those run multiple times, the report doesn't show how the peak/active memory counters progressed in, say, the first iteration; instead it shows the data for the whole loop/function after it has run more than once. That is correct for the final iteration, but not for the first one, and it's crucial to see the first moment the memory usage went up, and the peak.
This functionality is typically not needed in a normal memory profiler, since all we want to know there is the frequency of calls and the total usage for a given line; but since this is an investigation tool, we need to see the first few uses. I hope I was able to convey the issue clearly.
I tried to solve this manually by unrolling the loop in the code I was profiling, replicating the iteration body multiple times, which is not very sustainable.
It's also typical that the memory footprint changes from iteration 1 to 2 and then stabilizes at iteration 3 and onward (if there is no leak, that is). So there could probably be an option to record and print 3 different stats:
The same applies to functions.
I'm thinking the low-hanging fruit is to give users an option to record a loop iteration or a function only the first time it runs, and report that. That alone would already be very useful, and perhaps not too difficult to implement.
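The first-run-only option could look roughly like the decorator below. This is a sketch, not an existing pytorch_memlab API: `profile_first_call` is a hypothetical name, and `profiler` stands for whatever callable actually records the memory stats around the call.

```python
import functools

def profile_first_call(profiler):
    """Sketch of the requested option: invoke `profiler` (any callable
    that wraps the real call and records stats) only on the first
    invocation; every later call runs unprofiled at full speed."""
    def decorate(func):
        seen = False

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal seen
            if not seen:
                seen = True
                return profiler(func, *args, **kwargs)
            return func(*args, **kwargs)
        return wrapper
    return decorate

profiled_calls = []

def record_and_run(func, *args, **kwargs):
    # A real profiler would snapshot peak/active memory here.
    profiled_calls.append(func.__name__)
    return func(*args, **kwargs)

@profile_first_call(record_and_run)
def step(x):
    return x + 1
```

Repeated calls to `step` would leave exactly one recorded entry, giving the first-iteration snapshot without the overhead on every subsequent call.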
Thank you!