Extra slow pytorch imports (~30s) #889
Comments
Thanks for the report. We've been able to reproduce this locally and are looking into it.
In the interim, as a work-around, you can specify […]
Likewise reproduced, also with around a 50x slowdown.
On […]
I do not see this performance degradation at […]
Problem was introduced in […]
I've been looking into this more, and the root problem has to do with how and when the interpreter calls the tracing function. At the moment, it seems that the C logic that decides when to disable the […]. I'm making headway on figuring out precisely how to do this disabling; CPython actually checks in several different places and manners to see whether a trace callback exists and whether to execute it. This is governed in both the […]
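For context, here is a minimal illustrative sketch of the Python-level view of that mechanism (this is not Scalene's actual code, just how CPython's `sys.settrace` protocol behaves): the global trace function is called on each `call` event, and whatever it returns becomes the frame's local trace function; returning `None` disables per-line tracing for that frame, which is the kind of selective disabling being discussed.

```python
import sys

def global_trace(frame, event, arg):
    # CPython invokes this on every 'call' event once sys.settrace is set.
    if event == "call":
        # Skip tracing frames we don't care about (a hypothetical filter):
        # returning None means no local trace, so no 'line' events fire
        # for this frame at all.
        if "site-packages" in frame.f_code.co_filename:
            return None
        return local_trace
    return None

def local_trace(frame, event, arg):
    # Called for each 'line' event in frames where tracing stayed enabled.
    if event == "line":
        print(f"line {frame.f_lineno} in {frame.f_code.co_name}")
    return local_trace

def demo():
    x = 1
    y = x + 1
    return y

sys.settrace(global_trace)
demo()
sys.settrace(None)
```

Even with the local trace disabled for a frame, the interpreter still pays the cost of consulting the trace machinery at those several check points, which is consistent with the slowdown described above.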
Describe the bug
PyTorch-related imports take far longer to resolve (roughly 15x) under Scalene than under plain Python.
To Reproduce
I have a simple `test.py` file, which is just `import torch`.
1. Run `scalene test.py` and wait ~30s for the report to finish.
2. Run `python test.py` and wait ~2s.
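To quantify the slowdown more precisely, one option (my own suggestion, not from the original report) is to time the import itself and run the same script under both interpreters:

```python
# Hypothetical timing helper: save as timeit_import.py, then compare
#   python timeit_import.py
#   scalene timeit_import.py
import time

start = time.perf_counter()
import torch  # the slow import from the report
elapsed = time.perf_counter() - start
print(f"import torch took {elapsed:.1f}s")
```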
Screenshots
**Versions**
I enabled GPU profiling with Scalene.