I am unable to run YOLO11 inference with these versions; it fails with:
File "/ultralytics/ultralytics/engine/model.py", line 558, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
File "/ultralytics/ultralytics/engine/predictor.py", line 173, in __call__
return list(self.stream_inference(source, model, *args, **kwargs)) # merge list of Result into one
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 50, in generator_context
response = gen.send(None)
File "/ultralytics/ultralytics/engine/predictor.py", line 266, in stream_inference
self.results = self.postprocess(preds, im, im0s)
File "/ultralytics/ultralytics/models/yolo/detect/predict.py", line 25, in postprocess
preds = ops.non_max_suppression(
File "/ultralytics/ultralytics/utils/ops.py", line 291, in non_max_suppression
i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
File "/usr/local/lib/python3.8/dist-packages/torchvision/ops/boxes.py", line 40, in nms
_assert_has_ops()
File "/usr/local/lib/python3.8/dist-packages/torchvision/extension.py", line 48, in _assert_has_ops
raise RuntimeError(
RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.
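For context, a minimal reproduction sketch of the failing call, assuming the standard Ultralytics Python API and the yolo11n.pt checkpoint (the image path is a placeholder):

```python
from ultralytics import YOLO

# Load a YOLO11 detection checkpoint (yolo11n.pt is downloaded if not cached).
model = YOLO("yolo11n.pt")

# Any predict() call reaches torchvision.ops.nms() during postprocessing,
# which is where the "Couldn't load custom C++ ops" RuntimeError is raised.
results = model.predict(source="bus.jpg")
```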
Does it work for other people? Are there workarounds? Should the torch version be updated for this L4T version?
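A quick way to confirm the mismatch independently of Ultralytics is to call torchvision.ops.nms directly, as the error message suggests. A minimal diagnostic sketch (the box and score values are arbitrary):

```python
import torch
import torchvision

print("torch:          ", torch.__version__)
print("torchvision:    ", torchvision.__version__)
print("CUDA available: ", torch.cuda.is_available())

# nms is one of torchvision's compiled C++ ops; this call raises the same
# "Couldn't load custom C++ ops" RuntimeError when the torchvision build
# does not match the installed torch.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0]])
scores = torch.tensor([0.9, 0.8])
keep = torchvision.ops.nms(boxes, scores, iou_threshold=0.5)
print("nms keep indices:", keep)
```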
According to:

- https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform-release-notes/pytorch-jetson-rel.html#pytorch-jetson-rel
- https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048
- https://developer.download.nvidia.com/compute/redist/jp/v512/pytorch/
NVIDIA recommends the
torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
package for L4T 35.4.1 / JetPack 5.1.2. However, the image
dustynv/l4t-pytorch:r35.4.1
has …
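A rough compatibility check that can be run inside the container to see what the image actually ships. This is a sketch assuming the upstream pairing of torch 2.1.x with torchvision 0.16.x; the ".nv" build-tag check is only illustrative, not an official API:

```python
import torch
import torchvision

torch_ver = torch.__version__   # e.g. "2.1.0a0+41361538.nv23.06" for the NVIDIA wheel
tv_ver = torchvision.__version__

print("torch:      ", torch_ver)
print("torchvision:", tv_ver)

# NVIDIA's Jetson wheels carry an ".nv" build tag (visible in the wheel name above);
# its absence usually means a stock PyPI build ended up in the environment.
if ".nv" not in torch_ver:
    print("Warning: this does not look like an NVIDIA Jetson torch wheel.")

# Upstream pairs torch 2.1.x with torchvision 0.16.x; any other combination
# typically triggers the "Couldn't load custom C++ ops" RuntimeError above.
if torch_ver.startswith("2.1") and not tv_ver.startswith("0.16"):
    print(f"Warning: torchvision {tv_ver} is not the 0.16.x series expected for torch 2.1.x.")
```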