Is jetpack 6.0 for jetson agx orin supported? #3049
Comments
IIRC JetPack 6.0 is still on TensorRT 8.6, so you may be able to build an older version of Torch-TensorRT (like 2.2) or the NGC iGPU branch shipped in the NGC containers: https://github.com/pytorch/TensorRT/tree/release/ngc/24.07_igpu. You can also use the containers for Jetson directly, which already have Torch-TensorRT installed: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
It would appear that the PyTorch NGC containers do not work on Jetson. Using Dusty's docker image
IIRC you can use this container: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags. I believe the iGPU tag is targeted at Jetson. cc: @apbose
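For reference, pulling and running one of those containers could be sketched as below. The exact tag name here is an assumption; confirm it against the NGC tags page linked above before pulling.

```shell
# Hypothetical iGPU tag name -- verify on the NGC tags page first
IMAGE="nvcr.io/nvidia/pytorch:24.07-py3-igpu"
echo "Using image: $IMAGE"
# On the Jetson itself (commands shown but commented out here):
# docker pull "$IMAGE"
# docker run --rm --runtime nvidia -it "$IMAGE" \
#   python -c "import torch_tensorrt; print(torch_tensorrt.__version__)"
```

The `--runtime nvidia` flag assumes the NVIDIA container runtime is configured, which it is by default on JetPack installs.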
EDIT: When attempting to run a model from the HuggingFace Transformers library (OpenVLA), this error occurs. It appears to be a flash-attention-related issue, even though I also compiled flash attention from source. Here is the entire stack trace:
Does flash attention have any standalone tests you can run to verify your build?
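For what it's worth, flash-attn ships its own pytest suite in its repository (e.g. `tests/test_flash_attn.py`), which needs a CUDA GPU to run. The general idea of those tests, checking an attention implementation against a naive reference, can be sketched in plain NumPy; the function names below are illustrative, not part of flash-attn:

```python
# Sketch: verify a vectorized attention implementation against a naive
# per-query reference, the same pattern flash-attn's own tests use.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_vectorized(q, k, v):
    # Scaled dot-product attention: softmax(q k^T / sqrt(d)) v
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores) @ v

def attention_loop(q, k, v):
    # Naive one-query-at-a-time reference used as ground truth
    d = q.shape[-1]
    out = np.empty_like(q)
    for i in range(q.shape[0]):
        weights = softmax(q[i] @ k.T / np.sqrt(d))
        out[i] = weights @ v
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 16)) for _ in range(3))
assert np.allclose(attention_vectorized(q, k, v),
                   attention_loop(q, k, v), atol=1e-6)
print("attention implementations agree")
```

Running flash-attn's real test suite against your source build would tell you whether the build itself is broken, separately from the OpenVLA model code.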
I tried installing torch_tensorrt using the JetPack 5.0 WORKSPACE script, but it did not work on my system, which is currently running JetPack 6.0 on a Jetson AGX Orin.