torch._check_is_size is not being recognized by CoreML partitioner #9213

Open
Ray-Luo opened this issue Mar 13, 2025 · 2 comments
Assignees
Labels
- enhancement: Not as big of a feature, but technically not a bug. Should be easy to fix
- module: coreml: Issues related to Apple's Core ML delegation and code under backends/apple/coreml/
- triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments


Ray-Luo commented Mar 13, 2025

🐛 Describe the bug

As titled.
torch._check_is_size(pos_x) is not working with the CoreML partitioner; the workaround is to use torch._check(pos_x >= 0) instead.
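For reference, a minimal sketch of the failing pattern and the workaround. The issue does not include the original model, so the module, the extra upper-bound check, and the lowering call below are assumptions based on the standard ExecuTorch Core ML flow:

```python
import torch
from executorch.backends.apple.coreml.partition import CoreMLPartitioner
from executorch.exir import to_edge_transform_and_lower

class IndexSelect(torch.nn.Module):
    def forward(self, x, index):
        pos_x = index.item()  # unbacked symint from a data-dependent value
        # torch._check_is_size(pos_x)   # not recognized by the CoreML partitioner
        torch._check(pos_x >= 0)        # workaround: state the bound directly
        torch._check(pos_x < x.shape[0])  # assumed extra check to keep export happy
        return x[pos_x]

example_inputs = (torch.randn(8, 4), torch.tensor(3))
exported = torch.export.export(IndexSelect(), example_inputs)
lowered = to_edge_transform_and_lower(
    exported, partitioner=[CoreMLPartitioner()]
).to_executorch()
```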

Versions

PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 15.3.1 (arm64)
GCC version: Could not collect
Clang version: 11.1.0
CMake version: version 3.31.4
Libc version: N/A

Python version: 3.10.0 (default, Mar 3 2022, 03:54:28) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-15.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M1 Max

Versions of relevant libraries:
[pip3] executorch==0.5.0a0+6df7779
[pip3] executorchcoreml==0.0.1
[pip3] numpy==2.0.0
[pip3] torch==2.6.0
[pip3] torchao==0.8.0+gitebc43034
[pip3] torchaudio==2.6.0
[pip3] torchsr==1.0.4
[pip3] torchvision==0.21.0
[conda] executorch 0.5.0a0+6df7779 pypi_0 pypi
[conda] executorchcoreml 0.0.1 pypi_0 pypi
[conda] numpy 2.0.0 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchao 0.8.0+gitebc43034 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi

cc @kimishpatel @YifanShenSZ @cymbalrush @metascroy

@metascroy added the module: coreml, triaged, and enhancement labels on Mar 13, 2025
@metascroy self-assigned this on Mar 13, 2025
@metascroy
Contributor

@YifanShenSZ can you have a look?

@YifanShenSZ
Collaborator

I don't think we have a conclusion on such dynamic shape assertions yet 🤦 Currently, Core ML simply ignores those assertions, so you can try adding the assertion from your case to the set it ignores.

A long-term solution still needs to be developed, since this treatment will draw complaints from ExecuTorch AOT: simply removing the assertions makes the torch.export shape checks unhappy.
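For context, "simply removing the assertions" would look roughly like the FX pass below. This is only an illustration of the approach being discussed, not the Core ML frontend's actual code, and the op targets are assumptions that may differ across PyTorch versions:

```python
import torch

# Assumed assertion ops that torch.export may emit for torch._check /
# torch._check_is_size; exact targets can vary by PyTorch version.
ASSERTION_OPS = {
    torch.ops.aten._assert_scalar.default,
    torch.ops.aten.sym_constrain_range_for_size.default,
}

def drop_assertion_nodes(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    # Erase assertion-only nodes (they have no users) from the FX graph.
    for node in list(gm.graph.nodes):
        if node.op == "call_function" and node.target in ASSERTION_OPS:
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm
```

Dropping these nodes silences the backend, but it also discards the range information torch.export relied on, which is why the AOT shape checks become unhappy.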
