
Tracking issue: package without source distribution gets an "incomplete" resolution #9711

charliermarsh opened this issue Dec 7, 2024 · 7 comments

charliermarsh commented Dec 7, 2024

Using PyTorch as an example: if you lock against the CPU index, we choose 2.5.2+cpu. However, 2.5.2+cpu doesn't have any macOS wheels. So the resulting resolution doesn't work on macOS at all -- you get something like:

error: distribution torch==2.5.2+cpu @ registry+https://download.pytorch.org/whl/cpu can't be installed because it doesn't have a source distribution or wheel for the current platform

(If a package-version doesn't have any compatible wheels for the Python requirement, then we skip it; but as long as it has at least one compatible wheel, we "accept" it for all Python versions and platforms.)

As a second example: for markupsafe==3.0.2, if you use the PyTorch CPU index, they only ship MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl. So we might pick that version, but then we'd be lacking wheels for anything other than Python 3.13 on Linux.

Solving this is quite difficult, but as evidenced by the linked issues, it's a common problem.

A solution might involve something like: determine the set of Python versions and platforms (coarsely defined: Linux, macOS, Windows) covered by the wheels. If some Python versions or platforms aren't satisfied, we have to fork and look for older / other versions.
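
As a rough illustration of that coverage check, here's a hedged sketch in Python using the packaging library. The helper names and the coarse Linux/macOS/Windows buckets are assumptions for illustration, not uv's implementation (uv is written in Rust):

```python
from __future__ import annotations

# Sketch: compute the coarse platforms covered by a version's wheels.
# classify_platform and the bucket names are hypothetical, not uv's code.
from packaging.utils import parse_wheel_filename

ALL_PLATFORMS = {"linux", "macos", "windows"}

def classify_platform(platform_tag: str) -> str | None:
    """Map a wheel platform tag onto a coarse platform bucket."""
    if platform_tag.startswith(("manylinux", "musllinux", "linux")):
        return "linux"
    if platform_tag.startswith("macosx"):
        return "macos"
    if platform_tag.startswith("win"):
        return "windows"
    return None

def covered_platforms(wheel_filenames: list[str]) -> set[str]:
    """Collect the coarse platforms covered by a version's wheels."""
    covered: set[str] = set()
    for filename in wheel_filenames:
        _name, _version, _build, tags = parse_wheel_filename(filename)
        for tag in tags:
            if tag.platform == "any":  # pure-Python wheel covers everything
                return set(ALL_PLATFORMS)
            bucket = classify_platform(tag.platform)
            if bucket is not None:
                covered.add(bucket)
    return covered

# torch==2.5.2+cpu on the CPU index ships only Linux and Windows wheels, so
# ALL_PLATFORMS - covered_platforms(...) == {"macos"} would trigger a fork.
```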

charliermarsh commented:

One hard thing here is that it requires us to define a set of "required" environments... Like, if we don't see any macOS wheels, then we need to look for a different version of the package that does have macOS wheels. But do we need to find both macOS ARM and x86 wheels? Or is one of the two sufficient?

(In the case of PyTorch, if we require macOS x86 wheels, we'll always fail!)

charliermarsh commented:

For example, we could say:

  • We need a wheel for each supported Python version...
  • For each platform...

But ignore architecture? (This would be a heuristic, but so would anything.)

Even that first condition seems a little tricky, since we don't know what the "max" Python version is. So maybe we'd do something like: look at the latest supported version (in the wheels), and then make sure there are wheels for all prior versions? Or we just ignore Python versions entirely and focus on platform.
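
A sketch of that "latest seen, then all prior" heuristic might look like the following (the floor of Python 3.9 and the treatment of non-CPython tags are arbitrary assumptions; this is illustrative Python, not uv's code):

```python
# Sketch of the "max seen, then all prior" heuristic over CPython wheel tags.
import re

CPYTHON_TAG = re.compile(r"^cp3(\d+)$")

def python_coverage_ok(interpreter_tags: set[str], floor_minor: int = 9) -> bool:
    """Check that wheels exist for every CPython minor from the floor up to
    the newest minor the wheels mention."""
    minors = {
        int(m.group(1))
        for tag in interpreter_tags
        if (m := CPYTHON_TAG.match(tag))
    }
    if not minors:
        # py3 / abi3-style tags: treat as covering all versions (assumption).
        return True
    newest = max(minors)
    return all(minor in minors for minor in range(floor_minor, newest + 1))

# torch_scatter's +pt25cpu wheels tag cp39 through cp312, so with a floor of
# 3.9 this passes; a gap (say, a missing cp311) would fail and force a fork.
```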

konstin commented Dec 18, 2024

I've tried to categorize the linked issues:

old mac torch #9988 #7557 #8536 #9228

The expectation here is that the resolver should pick PyTorch version 2.2.2 for Intel macOS, and a different version (e.g., 2.5.1) for other platforms. However, the current behavior enforces a single version (2.2.2) across all platforms.

Similarly, odrive #8536, where a user would like older macOS support. #8536 (comment) proposes a syntax for solving this, but there's no clear connection between the TOML and the problem/solution. For solving this overall, there are two possible intents: I want a separate resolution for platform_system == 'Darwin' and platform_release <= '20.6.0', and I want (ideally) one resolution whose wheels support platform_system == 'Darwin' and platform_release <= '20.6.0'.

torch_scatter #9646

From https://data.pyg.org/whl/torch-2.5.1+cpu.html:

torch_scatter-2.1.2+pt25cpu-cp310-cp310-linux_x86_64.whl
torch_scatter-2.1.2+pt25cpu-cp310-cp310-win_amd64.whl
torch_scatter-2.1.2+pt25cpu-cp311-cp311-linux_x86_64.whl
torch_scatter-2.1.2+pt25cpu-cp311-cp311-win_amd64.whl
torch_scatter-2.1.2+pt25cpu-cp312-cp312-linux_x86_64.whl
torch_scatter-2.1.2+pt25cpu-cp312-cp312-win_amd64.whl
torch_scatter-2.1.2+pt25cpu-cp39-cp39-linux_x86_64.whl
torch_scatter-2.1.2+pt25cpu-cp39-cp39-win_amd64.whl
torch_scatter-2.1.2-cp310-cp310-macosx_10_9_universal2.whl
torch_scatter-2.1.2-cp311-cp311-macosx_10_9_universal2.whl
torch_scatter-2.1.2-cp312-cp312-macosx_10_13_universal2.whl
torch_scatter-2.1.2-cp39-cp39-macosx_10_9_universal2.whl
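
To make the gap concrete, grouping the list above by local version segment shows the disjoint coverage (a hedged Python sketch using packaging, abbreviated to the cp310 wheels; the helper names are mine):

```python
from __future__ import annotations

# Group torch_scatter wheels by local version segment to expose the split.
from collections import defaultdict
from packaging.utils import parse_wheel_filename

wheels = [
    "torch_scatter-2.1.2+pt25cpu-cp310-cp310-linux_x86_64.whl",
    "torch_scatter-2.1.2+pt25cpu-cp310-cp310-win_amd64.whl",
    "torch_scatter-2.1.2-cp310-cp310-macosx_10_9_universal2.whl",
]

platforms_by_variant: defaultdict[str | None, set[str]] = defaultdict(set)
for filename in wheels:
    _name, version, _build, tags = parse_wheel_filename(filename)
    for tag in tags:
        platforms_by_variant[version.local].add(tag.platform)

# platforms_by_variant == {
#     "pt25cpu": {"linux_x86_64", "win_amd64"},
#     None: {"macosx_10_9_universal2"},
# }
# No single version covers all three platforms; only a platform fork that
# mixes 2.1.2+pt25cpu (Linux/Windows) with 2.1.2 (macOS) can.
```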

torch #5182

Similar to torch_scatter, except that the wheel tags overlap. Python 3.10 only for simplicity:

torch-2.3.1+cpu-cp310-cp310-linux_x86_64.whl
torch-2.3.1+cpu-cp310-cp310-win_amd64.whl
torch-2.3.1+cpu.cxx11.abi-cp310-cp310-linux_x86_64.whl
torch-2.3.1+cu118-cp310-cp310-linux_x86_64.whl
torch-2.3.1+cu118-cp310-cp310-win_amd64.whl
torch-2.3.1+cu121-cp310-cp310-linux_x86_64.whl
torch-2.3.1+cu121-cp310-cp310-win_amd64.whl
torch-2.3.1+rocm5.7-cp310-cp310-linux_x86_64.whl
torch-2.3.1+rocm6.0-cp310-cp310-linux_x86_64.whl
torch-2.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
torch-2.3.1-cp310-none-macosx_11_0_arm64.whl
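
Here, unlike torch_scatter, several local variants ship a wheel for the same tag, so coverage alone can't choose among them. A sketch of detecting that overlap (hypothetical helper in Python, not uv's code):

```python
# Sketch: find (interpreter, platform) tags that multiple local variants of a
# package version provide, e.g. the torch 2.3.1 list above.
from collections import defaultdict
from packaging.utils import parse_wheel_filename

def overlapping_variants(wheel_filenames: list[str]) -> dict[tuple[str, str], set[str]]:
    """Map each (interpreter, platform) tag to the local variants that ship it,
    keeping only tags served by more than one variant."""
    variants: dict[tuple[str, str], set[str]] = defaultdict(set)
    for filename in wheel_filenames:
        _name, version, _build, tags = parse_wheel_filename(filename)
        for tag in tags:
            variants[(tag.interpreter, tag.platform)].add(version.local or "<base>")
    return {key: v for key, v in variants.items() if len(v) > 1}

# For the torch 2.3.1 list above, ("cp310", "linux_x86_64") maps to
# {"cpu", "cpu.cxx11.abi", "cu118", "cu121", "rocm5.7", "rocm6.0"}: any of
# those variants satisfies Linux, so a preference order among them is needed.
```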

pyqt5 #7005 #8603

I did some digging and found that pyqt5 depends on a package called pyqt5-qt5 which does not support Windows in versions 5.15.11 - 5.15.14, but does support Windows in 5.15.2.

5.15.2 has binaries for every platform except ARM macOS; those only exist in more recent releases.

markupsafe #9647

The torch index's mirroring is incomplete: https://pypi.org/project/MarkupSafe/#files has all versions and a source distribution, while https://download.pytorch.org/whl/markupsafe/ does not.

charliermarsh commented:

I care the most about solving the torch and torch_scatter issues, since they're by far the most common.

The markupsafe issue I feel less strongly about... Even the current PR doesn't really solve it, since it doesn't add other Linux wheels beyond the one that's on the PyTorch index.

The odrive and "old" torch issue I also feel less strongly about. It'd be nice to have better error messages for this... But I think it's probably "correct" to require some user intervention.

charliermarsh commented:

The idea that @konstin and I are discussing is to only apply this heuristic to local versions (e.g., 2.5.2+cpu).
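
A minimal sketch of what "only for local versions" would gate on, using the packaging.version API (the gating policy itself is the part under discussion; only the check is sketched):

```python
# Gate the heuristic on a local version segment (e.g. "+cpu", "+cu121").
from packaging.version import Version

def has_local_segment(version: str) -> bool:
    """True if the version carries a PEP 440 local segment."""
    return Version(version).local is not None

assert has_local_segment("2.5.2+cpu")
assert not has_local_segment("3.0.2")  # markupsafe: heuristic wouldn't apply
```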

charliermarsh commented:

Sorry, the other example to mention here is #9928 (comment). Our current transformers[all] resolution is subtly "wrong" because it chooses a tensorflow-text version that doesn't have any Windows wheels. With forking, we select an older version for the entire Windows subtree (not just for tensorflow-text, but for tensorflow too, since they're coupled). It's not clear if that's desirable.

charliermarsh commented:

We shipped #10046, which fixes this for PyTorch and packages in the PyTorch ecosystem (i.e., it's limited to inspecting local versions vs. base versions).

We may return to this to make it more general in the future.
