Fixit can be incredibly slow. Are there any recommendations available for profiling checks? #457
Comments
Somewhat answering my own question: fixit can generate debug logs showing how long each execution of each plugin hook takes. But the format of those logs isn't very useful with a large number of files (and, tbh, figuring out how to configure the logging framework to output those logs wasn't something I tried to do).
Then analyse with pandas:
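A rough sketch of that analysis, assuming one timing record can be extracted per hook invocation. The log format here is an assumption, so adjust the parsing to whatever fixit actually emits:

```python
import re
import pandas as pd

# Hypothetical log format: one line per hook invocation ending in
# "<Rule>.<hook> took <seconds>s" -- adjust the regex to the real format.
pattern = re.compile(r"(?P<hook>\S+) took (?P<seconds>[\d.]+)s")

rows = []
with open("fixit-debug.log") as fh:
    for line in fh:
        match = pattern.search(line)
        if match:
            rows.append({"hook": match["hook"], "seconds": float(match["seconds"])})

df = pd.DataFrame(rows)
# Slowest hooks by total time across all files
print(
    df.groupby("hook")["seconds"]
    .agg(["sum", "count", "mean"])
    .sort_values("sum", ascending=False)
    .head(20)
)
```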
There are a few common paths that I know are slow, but you might be hitting some others depending on what your rules are using. I appreciate #458 with the good repro; I'll give some general thoughts here and then try to look deeper on that one.

I hacked up a quick draft of tracing that works for single-process mode only so far. Here's a screenshot of a sample run. Most of the per-file time with this configuration is not spent in rules (the tiny unnamed bars below); I suspect this is in metadata calculation (e.g. scope or qualified name), but that is probably more of a design inefficiency in libcst than something that's immediately fixable. Giving @zsol a heads-up that a keke PR for libcst might be on the way.

Good news (if you want to look at it that way) is that this is cpu-bound (the green at the top is actually a graph, pegged at 100%) and there doesn't immediately appear to be any external bottleneck like I/O. For a larger number of files, you probably also start to notice the …
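For anyone who wants to sanity-check the "time is in metadata, not rules" theory without a tracer, a minimal sketch using libcst directly; the file path and the choice of providers are placeholders:

```python
import time

import libcst as cst
from libcst.metadata import MetadataWrapper, QualifiedNameProvider, ScopeProvider

source = open("some_module.py").read()  # placeholder path

t0 = time.perf_counter()
module = cst.parse_module(source)
t1 = time.perf_counter()

# Resolving providers up front approximates what metadata-dependent rules pay per file
wrapper = MetadataWrapper(module)
wrapper.resolve(ScopeProvider)
wrapper.resolve(QualifiedNameProvider)
t2 = time.perf_counter()

print(f"parse: {(t1 - t0) * 1000:.1f} ms, metadata: {(t2 - t1) * 1000:.1f} ms")
```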
Happy to merge a keke PR for libcst. I was about to release a new version tomorrow but can wait a bit.
Thanks for the response @thatch 👍
Yes, I agree, and that aligns with most of my testing too.
Yah, but 6 seconds for 40,000 files with gitignore filtering, as a once-off cost, is not horrible, though I did notice that trailrunner will still traverse into the top level of ignored directories. I've updated https://github.com/jarshwah/experiments-fixit today with some more analysis, which essentially agrees that the bulk of the work is occurring within …
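If you want to put a number on the discovery cost on its own, a small sketch timing a gitignore-aware walk with trailrunner (assuming trailrunner is what's doing the discovery here, per the discussion above):

```python
import time
from pathlib import Path

from trailrunner import walk  # gitignore-aware file discovery

t0 = time.perf_counter()
paths = list(walk(Path(".")))
print(f"discovered {len(paths)} files in {time.perf_counter() - t0:.2f}s")
```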
I have further trace info available if you install …
I'm not sure any of these are ready to merge, but they provide a starting point for understanding where the time comes from. Responding to your theories:
Where I think the time is spent: …

My current thoughts: …
Note I don't currently have gc time spans working for multiprocessing workers, and would guess it accounts for ~10% of the total time (and likely the "NoR" oddity near the right side is a gc pause). In addition, …
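As a stopgap for the missing gc spans, the stdlib `gc.callbacks` hook can attribute time to collections inside a worker; the bookkeeping here is just illustrative:

```python
import gc
import time

_gc_start = 0.0
_gc_total = 0.0

def _track_gc(phase: str, info: dict) -> None:
    # Called by the collector with phase "start" or "stop"
    global _gc_start, _gc_total
    if phase == "start":
        _gc_start = time.perf_counter()
    else:
        _gc_total += time.perf_counter() - _gc_start

gc.callbacks.append(_track_gc)

# ... run the per-file work here ...

print(f"time spent in gc: {_gc_total * 1000:.1f} ms")
```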
4ms per file, for 40,000 files, is still 160 seconds.
As above, this is per-file, so accounts for 80 seconds total.
As per https://github.com/jarshwah/experiments-fixit?tab=readme-ov-file#multiprocessing-pool, chunking the tasks across multiprocessing rather than serializing/deserializing per task can be more performant in some cases, though I didn't see much of an improvement here, likely because so much time is spent elsewhere. It might be worth re-examining once other avenues are sorted.

While I agree these are not the primary bottlenecks and investigation is best spent on the functions you've identified, the first two are still doing lots more work than they need to, and could be tackled separately (which I'm happy to help with where time permits).
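For reference, a minimal sketch of what "chunking rather than per-task serialization" looks like with the stdlib pool; the function and paths are placeholders, not fixit's actual internals:

```python
import multiprocessing as mp

def lint_one(path: str) -> int:
    # Stand-in for "read + parse + run rules on one file"
    return len(path)

if __name__ == "__main__":
    paths = [f"pkg/module_{i}.py" for i in range(40_000)]
    with mp.Pool() as pool:
        # chunksize batches many paths per worker round-trip instead of
        # pickling arguments and results one task at a time
        results = list(pool.imap_unordered(lint_one, paths, chunksize=64))
    print(len(results))
```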
User time, sure. But not wall time; this should parallelize well. If you have ways to improve this, go for it.
Sorry, coming back to this:
I can't replicate this, even with a much larger module or with the pure python parser.
I suspect that 400ms parse time is for all 40,000 files? For a relatively simple but non-empty module, I see approximately 3-4ms per file with the rust-based parser.
As per the title, we're seeing incredibly long run times when running fixit==2.1.0.
Our code base is approximately 4M lines of Python code. On a CircleCI Large instance (4 CPU / 8 GB), it takes up to an hour to run across the entire code base. We've worked around this by only running on changed files, which takes approximately 1 minute, except when we edit the fixit config or any of our custom rules, in which case we run the entire suite.
I'm keen to try to understand why the runs may be slow by profiling in some way. Is there any tooling or logging available within fixit itself that might help track down where some of the problems lie? To be clear, I'm coming at this from the assumption that the way we've written our rules is mostly to blame, and I would like to identify the worst offenders.
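One generic option that doesn't depend on anything built into fixit is running the CLI under cProfile; note that with multiprocessing workers most of the per-file work happens in child processes, so the parent-process profile will miss much of it:

```python
# Run the CLI under cProfile (pointing at the console script avoids needing
# python -m support):
#
#   python -m cProfile -o fixit.prof $(which fixit) lint path/to/code
#
# Then inspect the hottest call paths:
import pstats

pstats.Stats("fixit.prof").sort_stats("cumulative").print_stats(30)
```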
While I'm not really able to share our implementations, I can provide some details of our current usage (in case that prompts thoughts of "oh, of course that would be slow!"):
We make heavy use of QualifiedNameProvider and FilePathProvider.
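For context, a typical shape of such a rule; the rule name and the qualified name it checks are hypothetical:

```python
import libcst as cst
from fixit import LintRule
from libcst.metadata import QualifiedNameProvider

class NoLegacyHelper(LintRule):
    METADATA_DEPENDENCIES = (QualifiedNameProvider,)

    def visit_Call(self, node: cst.Call) -> None:
        # Qualified-name resolution is per-file metadata work, which is where
        # much of the per-file time discussed above appears to go
        names = self.get_metadata(QualifiedNameProvider, node.func, set())
        if any(qname.name == "legacy.helper" for qname in names):
            self.report(node, "Use newlib.helper instead of legacy.helper")
```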
We have lots of helper functions that presumably construct lots of instances on each call, along the lines of the sketch below:
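A hypothetical illustration of that pattern (not our actual code): a predicate helper that builds fresh matcher objects on every call, and is then invoked for every node it's tested against:

```python
import libcst as cst
import libcst.matchers as m

def is_logger_info_call(node: cst.CSTNode) -> bool:
    # Constructs new matcher instances on each call; hoisting the matcher to
    # module level would avoid paying that construction cost per node
    return m.matches(
        node,
        m.Call(func=m.Attribute(value=m.Name("logger"), attr=m.Name("info"))),
    )
```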
Still, I'm more interested in being able to analyse performance and profiling data myself rather than guessing, so anything in this space would be incredibly helpful.