
Wrong version of torch in l4t-pytorch:r35.4.1 #747

Open

aslafy-z opened this issue Dec 16, 2024 · 3 comments

aslafy-z commented Dec 16, 2024

According to

https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform-release-notes/pytorch-jetson-rel.html#pytorch-jetson-rel
https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048
https://developer.download.nvidia.com/compute/redist/jp/v512/pytorch/

NVIDIA recommends the torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl package for L4T 35.4.1 / JetPack 5.1.2.

However, the image dustynv/l4t-pytorch:r35.4.1 ships:

torch-2.0.0+nv23.5
torchvision-0.15.1a0+42759b1

I am unable to run YOLOv11 inference with these versions; it fails with:

  File "/ultralytics/ultralytics/engine/model.py", line 558, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "/ultralytics/ultralytics/engine/predictor.py", line 173, in __call__
    return list(self.stream_inference(source, model, *args, **kwargs))  # merge list of Result into one
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 50, in generator_context
    response = gen.send(None)
  File "/ultralytics/ultralytics/engine/predictor.py", line 266, in stream_inference
    self.results = self.postprocess(preds, im, im0s)
  File "/ultralytics/ultralytics/models/yolo/detect/predict.py", line 25, in postprocess
    preds = ops.non_max_suppression(
  File "/ultralytics/ultralytics/utils/ops.py", line 291, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "/usr/local/lib/python3.8/dist-packages/torchvision/ops/boxes.py", line 40, in nms
    _assert_has_ops()
  File "/usr/local/lib/python3.8/dist-packages/torchvision/extension.py", line 48, in _assert_has_ops
    raise RuntimeError(
RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.
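
For reference, the failure can be reproduced without Ultralytics by calling torchvision.ops.nms directly (a minimal sketch; the dummy boxes, scores, and IoU threshold below are arbitrary):

    # Minimal reproduction sketch: print the installed versions and exercise the
    # custom C++ op that the traceback above fails on.
    import torch
    import torchvision

    print("torch:", torch.__version__)              # 2.0.0+nv23.5 in this image
    print("torchvision:", torchvision.__version__)  # 0.15.1a0+42759b1 in this image

    boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                          [1.0, 1.0, 11.0, 11.0]])
    scores = torch.tensor([0.9, 0.8])

    # Raises the same RuntimeError when torchvision's C++ ops were compiled
    # against a different torch build than the one installed.
    print(torchvision.ops.nms(boxes, scores, 0.5))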

Does it work for other people? Are there workarounds? Should the torch version be updated for this L4T version?

dusty-nv (Owner) commented:

@aslafy-z thank you, I will rebuild this for JetPack 5 with PyTorch 2.2 👍

dusty-nv (Owner) commented:

OK, this is built and pushed now to dustynv/l4t-pytorch:2.2-r35.4.1
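
A quick sanity check inside the new container (a sketch; the dummy boxes and threshold are arbitrary) is to confirm that the torch/torchvision pair loads its C++ ops:

    # Post-rebuild sanity check sketch, run inside dustynv/l4t-pytorch:2.2-r35.4.1.
    import torch
    import torchvision
    from torchvision.ops import nms

    print(torch.__version__, torchvision.__version__, torch.cuda.is_available())

    device = "cuda" if torch.cuda.is_available() else "cpu"
    boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                          [1.0, 1.0, 11.0, 11.0]], device=device)
    scores = torch.tensor([0.9, 0.8], device=device)
    print(nms(boxes, scores, 0.5))  # should return kept indices instead of raising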

aslafy-z commented Jan 8, 2025

@dusty-nv shouldn't it be torch 2.1.0a0, matching the published wheel torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl?
