“Learn” to avoid trying routes that do not affect the dependency graph #64
I'm not sure I understand. You say "setuptools does not have dependencies", but surely we can only say that "this specific version of setuptools does not have dependencies"? If we backtrack and try a different version of setuptools, we might get one that does have dependencies. Am I missing something? (It's early and I haven't had a cup of tea yet, so it's quite likely 🙂)
Yes, sorry for the lax language, I was in a hurry writing the message. What I have in mind is that the resolver should be able to look at (say) setuptools 40.0, and realise the rest of the dependency graph after pinning it would be exactly the same as with setuptools 40.1. So if 40.1 failed, 40.0 is also destined to fail, and should be skipped entirely. But if setuptools x does have dependencies (or more generally, has a different set of dependencies from other versions we've tried), then we should try it.
Ah, OK. So if we fail on 40.1, the resolver can ask the provider "does 40.0 have the same dependencies?" and ignore 40.0 if the answer is "yes". I can see how that would help, but would it not be almost as good (and not require an additional mechanism) if the provider cached dependency data in memory, so that checking 40.0 would be fast enough that skipping it is not worth it? That would mean breaking pathological cases like projects that calculate dependencies randomly, but IMO it's fine for a provider to make that choice (and I'd be fine with pip doing so, as we already cache wheels, which in effect does that). Of course, it's not an either/or choice: we can do both.
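For illustration, here is a minimal sketch of the kind of in-memory caching described above; the `CachingProvider` wrapper and its attribute names are hypothetical, not part of resolvelib's API:

```python
class CachingProvider:
    """Hypothetical wrapper that caches dependency lookups in memory,
    so re-checking a previously seen candidate is a dict hit rather
    than a fresh metadata fetch.  Assumes the wrapped `inner` object
    implements resolvelib's provider interface and that candidates
    expose hashable `name` and `version` attributes.
    """

    def __init__(self, inner):
        self._inner = inner
        self._dep_cache = {}

    def get_dependencies(self, candidate):
        # As noted above, this deliberately "breaks" projects whose
        # dependencies change between calls: the first answer wins.
        key = (candidate.name, candidate.version)
        if key not in self._dep_cache:
            self._dep_cache[key] = list(self._inner.get_dependencies(candidate))
        return self._dep_cache[key]

    def __getattr__(self, name):
        # Delegate every other provider hook unchanged.
        return getattr(self._inner, name)
```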
IIUC, this is essentially CDCL-style behaviour. I'm not 100% sure I understand how we'd phrase that in terms of the criterion-based model we have here, but I'm a strong +1 on this.
The main thing this is trying to avoid is not setuptools itself, but the states derived after setuptools 40.0 is pinned. If the resolution process is thought of as a maze, setuptools 41.0 and 40.0 are two doors that lead to the same room. With naive backtracking, we need to check all the doors that room has every time we enter it, but if we can somehow identify that the two doors (versions) bring us to the same room (state of criteria), we can avoid exploring the room the second time altogether.

This could be done after we enter the room (i.e. after pinning setuptools 40.0), but since each resolution round in resolvelib is a process of pinning + adding dependencies, doing it after entering the room (pinning) means we somehow need to abort a round halfway; it is probably easier to "peek" before we walk through the setuptools 40.0 door (I think). The main challenge here would be to implement an efficient way to identify whether we are indeed entering the same room (i.e. implement equality on the criteria-held requirements in a state).

And yes, this is basically Conflict Driven Clause Learning (CDCL). We can express each resolvelib state as an unsatisfied boolean formula [citation needed], and everything beside
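To make the "same room" check concrete, here is one possible fingerprinting approach. This is only a sketch: `room_key` and `prune_equivalent` are made-up helpers, and reducing requirements to strings sidesteps the equality problem mentioned above rather than solving it.

```python
def room_key(candidate, provider):
    """Hypothetical fingerprint of the "room" we would enter by pinning
    `candidate`: the set of dependencies it would add.  Real requirement
    objects would need a well-defined __eq__/__hash__ instead of str().
    """
    return frozenset(str(dep) for dep in provider.get_dependencies(candidate))


def prune_equivalent(candidates, provider):
    """Yield candidates, skipping any that open a door into a room we
    have already explored (i.e. an identical dependency set)."""
    seen = set()
    for candidate in candidates:
        key = room_key(candidate, provider)
        if key in seen:
            continue  # second door into the same room; destined to fail alike
        seen.add(key)
        yield candidate
```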
Cool. I checked the Wikipedia entry on CDCL, and I definitely prefer your explanation 🙂
Whoops! I should've used more words, and communicated more clearly. Here's my attempt at describing CDCL in simpler terms:
"it" is the resolver. As @uranusjr put it, the trick is figuring out "things that conflict" and efficiently detecting when that re-occurs. |
Do you reckon #84 handles this?
I'm reading through Wikipedia's example of CDCL and it has a step analogous to what I wanted to implement as part of #84: "Step 14 - Non-chronological back jump to appropriate decision level". I believe this could make an even bigger difference in reducing the amount of backtracking, but the complexity looked to be too much to implement as a first PR. And I found that implementing just the part of preferring failure causes was good enough for almost all real-world examples I could find.
I've been thinking about this further today, and I would say the optimization I've implemented as part of #84 is only effective for what I would call "shallow conflicts": conflicts where, by focusing on the requirements of the known failures at the current position in the dependency tree, you can resolve those conflicts and find a solution for all the requirements without significant backtracking up the dependency tree. A "deep conflict", by contrast, is where you have found failures, but finding a solution requires backtracking significantly up the dependency tree, because those failures were pinned many rounds ago. An example of this is "airflow[all]==1.10.13". My gut feeling is there are strong techniques that can tackle these problems. But let's see if they are reported first?
I've been mulling on this idea and I think I have devised a simple(ish) approach to implementing a step analogous to the "non-chronological back jump" in CDCL, and to how that might be used to speed up resolution. Once #84 and pypa/pip#10481 land, I'm going to start experimenting to see if I can actually implement it and whether there are real use cases it improves. By definition it will actually change the behavior of the resolver itself, not just give the downstream library the opportunity to choose a different path, so if I can implement it and show it helps in some cases, it's still probably going to take even more agreement that it's a good approach.
I've started work on this "non-chronological back jump", and even though I've only implemented a naive solution, I've already had very good success with it. I have two primary tests (both happen to be Python 3.7 only):

Test 1 (runs dozens of times faster with back jumps):
Test 2 (actually able to achieve ResolutionImpossible after ~5-10 mins [using cache] with back jumps):
For those curious, here is the output of the apache-airflow resolution impossible:
The idea of this approach is that, if the failure causes were pinned before the most recent pinning, we "back jump" to the state where they were last pinned and attach the new backtrack causes to that state. This allows the failure causes to all be focused on together, and means the resolution doesn't get stuck backtracking on some intermediate pinned requirement. It may also require an extra step: if not all the backtrack causes are currently available to pin, pin the non-backtrack causes until they are. I have not experimented with this yet, though. In general this approach is a lot more tricky, and potentially prone to infinite loops if one is not careful, but I think it's worth further exploration and I will continue to do so.
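A rough Python sketch of that back jump, purely for illustration: `states`, `latest_pin`, and `backtrack_causes` loosely mirror resolvelib's internal state stack, but the names and structure here are assumptions, not the actual implementation.

```python
def backjump(states, causes):
    """Naive non-chronological back jump: pop saved resolver states
    until we are back at the state where one of the conflict causes
    was most recently pinned, then attach the causes to it so they
    are all focused on together.  `states` is oldest-first, and
    `latest_pin` is a hypothetical attribute naming the package
    pinned in that round.
    """
    cause_names = {cause.name for cause in causes}
    while len(states) > 1:
        if states[-1].latest_pin in cause_names:
            break  # a conflict cause was pinned here: the jump lands here
        states.pop()  # this round pinned an unrelated package; discard it
    states[-1].backtrack_causes = list(causes)
    return states[-1]
```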
This failing doesn't make sense -- there is a solution for this in the space that the resolver is exploring? Doesn't
Trying this command on my experimental version of Pip, as well as 21.3 and 21.2.4, I get the error for all three:
Hah, interesting!
Yes, apache-airflow 1.10.13 cannot actually pass the new resolver due to internal conflicts. From my understanding, they spent a ton of time cleaning this up to make 2.0 (1.10 was the last 1.x release) work without warnings on the legacy resolver.
Yes, I was a heavy user of apache-airflow when pip 20.3 was released, which is what got me started trying to understand how pip's resolver works. Why I'm using
Actually @pradyunsg is correct, I think I am hitting a bug in how

Constraints that will allow `airflow[all]==1.10.13` to be installed on Linux for Python 3.7:
That might be what #91 is about as well.
FWIW, is `pip check` happy with the final resolution result for airflow?
Yes, I'm going to be watching that issue; if a PR is developed I will test it against this use case.
Yes.
Remember that `pip check` does not validate extras.
Thanks for the info, I'll manually check that the extras are all also satisfied. But I think they are, because I can pass the constraints file I generate from pip freeze on my experimental branch to pip 21.3 and it installs fine.
I've not had a lot of time to work on this, and since pip 21.3 was released no one has reported any very significant backtracking times to the pip tracker, so there hasn't been a ton of motivation. But here's a quick update on what I've tried so far and discovered: I implemented backjumping by removing the previous states until the causes of the conflict are removed, and then applying the preference order of which package to pin next. In the case of
I might experiment with the following changes:
However, I had another idea that I think, compared to backjumping, would be less disruptive and potentially yield most of the benefit with almost none of the downside: rearrange the order in which the packages have been pinned before backtracking. So let's say you've pinned packages 1 to 10, but when you try to pin package 11 it conflicts with packages 2 and 8; the idea would be to reorder the current pinning (as much as possible while not breaking the order derived from the dependency graph) so that packages 2 and 8 become the most recently pinned, letting backtracking reach them first (a naive sketch follows below).

It could also be the provider that tells resolvelib what the order should be, so any of the preferences can be controlled by the downstream system. This is the approach I am most excited about and interested in, and therefore, when I get time, I will try working on it, unless anyone else has specific input on any of these.
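Here is the naive sketch of that reordering idea. It shows only the partition step; a real implementation would also have to preserve the order derived from the dependency graph (a package must stay after whatever introduced it), which is deliberately omitted here.

```python
def reorder_pins(pinned, conflict_names):
    """Move the pins that conflict with the new package (e.g. packages
    2 and 8 in the example above) to the end of the pin order, keeping
    relative order within each group, so that backtracking unwinds them
    first instead of churning through packages 9 and 10.

    NOTE: graph-order constraints are not enforced in this sketch.
    """
    keep = [name for name in pinned if name not in conflict_names]
    move = [name for name in pinned if name in conflict_names]
    return keep + move


# Example: conflicts with "pkg2" and "pkg8" while pinning "pkg11".
pins = [f"pkg{i}" for i in range(1, 11)]
print(reorder_pins(pins, {"pkg2", "pkg8"}))
# -> [pkg1, pkg3, ..., pkg10, pkg2, pkg8]
```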
This could be solved by #113
Agreed, I think in general it should be a big improvement to backtracking situations! I'm going through some known problematic requirements (including the ones I mention in this comment: pypa/pip#10481 (comment)) to try and find any that break or get much worse. I have already commented on the PR.
Hey all, we've ported resolvelib to Rust (https://github.com/prefix-dev/resolvelib-rs), but for resolving conda packages we still run into some performance issues that we don't see with libsolv.
Note that in the current implementation of resolvelib it's quite important what the downstream implements; in particular, for real-world dependency graphs, I suspect there will be a big difference between prioritizing or not prioritizing the causes of the conflict in `get_preference`.
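For illustration, this preference can live in the provider's `get_preference`, which receives the backtrack causes. A minimal sketch: the tuple sort key is illustrative rather than pip's actual key, and it assumes requirement and candidate objects exposing a `name` attribute.

```python
def get_preference(self, identifier, resolutions, candidates, information, backtrack_causes):
    """Prefer working on packages involved in the current conflict.

    `backtrack_causes` holds (requirement, parent) pairs; we collect
    the names on both sides and sort those identifiers first.
    """
    conflict_names = set()
    for cause in backtrack_causes:
        conflict_names.add(cause.requirement.name)
        if cause.parent is not None:
            conflict_names.add(cause.parent.name)
    # False sorts before True, so conflict causes come first;
    # the identifier itself is an arbitrary tie-breaker.
    return (identifier not in conflict_names, identifier)
```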
@notatallshaw thanks for the note; indeed, for the one problematic case we already found a significant speed-up by implementing the same preference you proposed in pypa/pip#10479. It went from 50 seconds to 19 seconds (which is still unacceptable, since libsolv only takes <1 sec). We'll do further research. Ideally I think the preference should not be necessary? Let's see. We'll also double-check which heuristics libsolv is using :)
Just to confirm, you are aware that a key aspect of resolvelib is that it doesn't need to know the full dependency graph in advance? Because it can't make that assumption, comparing it with libsolv (which does require the full graph up front to construct the SAT problem, I believe) needs to be done with caution, as you're not comparing like with like.
Well, Pip is dealing with graphs that are not known ahead of time; there is no giant file of every single dependency you can download from PyPI (unlike, say, conda).

If you are in a situation where you do know the entire graph ahead of time, may I suggest that you investigate whether a solver that takes the full graph up front suits your use case better.

That said, I would love to see work on a real CDCL implementation for resolvelib, so if you go that route it would be great to see if the work can be backported.
I wasn't aware :) thanks for the clarification, and that makes sense. I am not sure that "the unknown part" is in the way of CDCL (since the backtrackable part of the graph is known for both pip and conda). I also think that libsolv "explores" the graph in a similar way to resolvelib (libsolv does indeed preload all the repository data). I am not an expert on these matters and don't want to make anyone mad! We've been super happy with the design of resolvelib, and building the first prototype in Python took less than ~half a day (which motivated us to attempt a Rust port). If we make any progress with CDCL I'll definitely let you know, and I hope that my messages aren't coming across as hostile :)
Not to me at least; the more places resolvelib is used, the better! This conversation did trigger a thought on a performance improvement, from an issue I remembered from a while back, and I filed an issue: #131
Also, regarding using a more formal SAT library: I am not sure they would work easily. We're not only solving a SAT problem, but more like a MaxSAT problem, as far as I understand (since we also try to maximize version numbers). I looked at a couple of solvers today (e.g. the CP-SAT solver from or-tools, https://developers.google.com/optimization/cp/cp_solver, or https://github.com/shnarazk/splr). I think the or-tools solver could work (also for the maximization problem), but ideally we'd have a smaller thing that we can fully understand for the problem we're solving (package management).
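As a toy illustration of that MaxSAT framing (using the python-sat package, which is my choice for the example, not something the thread uses): hard clauses encode installability, and soft clauses weighted by version preference encode "maximize version numbers".

```python
# Variables: 1 = foo 1.0, 2 = foo 2.0, 3 = bar 1.0.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

wcnf = WCNF()
wcnf.append([1, 2])         # hard: some version of foo is installed
wcnf.append([-1, -2])       # hard: at most one version of foo
wcnf.append([3])            # hard: bar 1.0 is required
wcnf.append([-2, -3])       # hard: foo 2.0 conflicts with bar 1.0
wcnf.append([2], weight=2)  # soft: prefer foo 2.0 ...
wcnf.append([1], weight=1)  # soft: ... over foo 1.0

with RC2(wcnf) as solver:
    # The conflict forces foo 1.0 despite the preference for 2.0.
    print(solver.compute())  # -> [1, -2, 3]
```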
Far from it! It's great to see other uses of resolvelib, and it's also nice to get feedback from other implementations. There's a lot of difficult problems in this area, and extra perspectives always help 🙂
If you have any time at all, I would be curious whether this change improves your performance: #132. It moves the responsibility of preferring backtrack causes to
Update: we ended up using CDCL after all (we ported libsolv to Rust, and I wrote an introductory blog post explaining the CDCL approach to dependency resolution). Maybe @wolfv can tell you how he plans to make it work for PyPI and not only conda channels.
I think the main question here will be: how do you handle the cost of potentially having to download and build a package each time the resolver wants to discover the dependencies of a node in the dependency graph? This is why resolvelib builds the graph iteratively rather than requiring the whole graph ahead of time.

Reading your blog and the linked paper to confirm I wasn't obviously missing anything, my understanding is that traditional SAT solvers work well with conda because all the dependency information can be known ahead of time (I understand that in practice, for any given resolution, conda might download up to 4 files, perhaps streaming the JSON lazily to save memory and/or CPU cost). So individual steps of checking boolean satisfiability don't need to worry about the cost of getting that information from the dependency graph. Or perhaps I'm missing something and you can build the graph iteratively in a natural way using your approach.
Hi @notatallshaw, we're definitely not sure yet how to do it. Obviously the fact that many Python packages ship static metadata in wheel files is a big plus for us, so the worst-case solution would be to build some sort of static index that only works with wheel files.

However, as the name of CDCL implies, clauses can be (and are) added on the fly already (learnt clauses). Potentially this can be generalized to support adding dependency clauses on the go as well. PubGrub seems to have this figured out, and conceptually that solver is not so far away from ours (the key difference is that PubGrub quite strongly suggests using ranges for version constraints, while we don't really care about the shape of the constraint).

I've also asked ChatGPT about this problem, and it came up with a bunch of existing "iterative" SAT solvers (ISAT, CryptoMiniSAT, or Glucose). Whether this is true I can't judge yet; I looked at CryptoMiniSAT for a second, and it seemed to indeed support adding clauses on the fly. This makes me hopeful that we could also find some theoretical background for iterative SAT solving :)

Current status of this project: working on the parts to retrieve metadata from PyPI's API. Once we've wired things up we'll let you know about the progress!
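For what it's worth, "adding clauses on the fly" is exactly what incremental SAT interfaces offer; a tiny sketch using python-sat's Glucose3 binding (my choice of library for illustration, under the same toy variable encoding as before):

```python
# Variables: 1 = foo 1.0, 2 = foo 2.0, 3 = bar 1.0.
from pysat.solvers import Glucose3

with Glucose3() as solver:
    solver.add_clause([1, 2])   # initial knowledge: foo 1.0 or foo 2.0
    assert solver.solve()
    # ... fetch metadata for a chosen candidate, discover new facts ...
    solver.add_clause([-2, 3])  # learnt on the fly: foo 2.0 needs bar 1.0
    solver.add_clause([-3])     # learnt on the fly: bar 1.0 is unavailable
    # The solver keeps its learnt clauses between solve() calls.
    print(solver.solve(), solver.get_model())  # True, [1, -2, -3]: foo 1.0
```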
Well, in terms of static metadata from wheels, I will point your attention to PEP 658 / PEP 714, which is live for new packages on PyPI and, my understanding is, will be backfilled: https://discuss.python.org/t/pep-658-714-are-now-live-on-pypi/26693

FYI, my understanding is that Poetry's resolution algorithm is based on PubGrub, and while it works well for most cases, at least the way Poetry uses it there are lots of real-world situations where it gets stuck resolving for hours. But I don't keep a super close eye on Poetry; maybe things have got a lot better. Either way, they do have a CDCL SAT solver that works with PyPI, so if you haven't already, it's perhaps worth looking at their implementation.
Yep, those PEPs are good news, and I've been testing them out :)
How would it work? The comment lacks a lot of detail.
Hey @uranusjr - we are now very close to sharing the complete tooling with the world (probably end of this week). Our new thing (called `resolvo`)

Hopefully the talk at PackagingCon can shed some additional light on how we achieved this. We'll also have some blog posts etc. But the high level is that you can provide your own packages and dependency-matching mechanism to resolvo to resolve packages. We'll come back asking for more feedback once the
If you want to have a look: https://github.com/prefix-dev/rip – would love to hear your feedback & thoughts!
I haven't been able to test its performance yet, as it seems to be getting tripped up easily by non-compliant requirements (or at least things it thinks are non-compliant) of real-world packages. I filed some issues:
Hey @notatallshaw! Thanks for giving rip a try, I appreciate the effort! We'll take a look at these issues and fix them soon!
Looking forward to this getting sdist support -- once that's implemented, I expect we'd have a bunch of things to either directly reuse or borrow from it!
I'm starting to feel confident about my approach of favoring conflicting causes, which, I believe, will ultimately complete resolvelib + pip's backtracking search as a truly CDCL-style algorithm. I'm fairly sure resolvelib cannot implement this on its side, because it can't guarantee enough about the requirement objects. While it could be implemented via

Therefore, I propose an additional step in resolvelib that allows the provider to filter unsatisfied names before getting a preference: #145. I've created an experimental branch of pip: pip:prefer-conflicting-causes. So far, I've observed significant improvements:
I plan to conduct more testing with additional pathological requirements, both manually and, if possible, by converting some of the requirements into scenarios using pip-resolver-benchmarks.
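For reference, here is a sketch of what the filtering step could look like on the provider side. The hook name and signature follow my reading of the #145 proposal and may not match the final API.

```python
class ConflictFocusedProvider:
    """Sketch of the extra step proposed in #145: before resolvelib
    asks for a preference, let the provider narrow the set of
    unsatisfied names down to the current conflict causes.
    """

    def narrow_requirement_selection(
        self, identifiers, resolutions, candidates, information, backtrack_causes
    ):
        unsatisfied = set(identifiers)
        # Assumes requirement objects expose a `name` attribute.
        causes = {cause.requirement.name for cause in backtrack_causes}
        causes &= unsatisfied
        # If any conflict causes are still unsatisfied, work on those
        # alone; otherwise fall back to the full set of identifiers.
        return causes or unsatisfied
```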
This (run against pypa/pip@fd8ddb6) would result in the resolver backtracking on setuptools, but these backtracks are obviously useless since setuptools does not have dependencies.
The resolver should be able to "look ahead" and see that trying other versions of setuptools would not change the conflicting graph, and skip working on them altogether. This applies not only to packages without dependencies (although they are the easiest target), but also to versions that have the same dependencies. The idea of "the same dependencies", however, requires provider support (a new hook `are_dependencies_equal` or something), so those are more difficult to implement.
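A sketch of how the proposed hook might be used; `are_dependencies_equal` is the hook floated above, while the surrounding loop and its names are purely illustrative, not existing resolvelib API.

```python
def candidates_worth_trying(candidates, failed, provider):
    """Yield candidates, dropping any whose dependencies match a
    candidate that already failed: a version whose dependency graph is
    identical to a failed one is destined to fail the same way.
    """
    for candidate in candidates:
        if any(
            provider.are_dependencies_equal(candidate, bad)
            for bad in failed
        ):
            continue  # same dependencies as a known failure; skip it
        yield candidate
```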