No installation available of torch_tensorrt for Jet Pack 6.1 #737
I have been stuck with this as well. Using it, I get this error (it seems to try to find CUDA 12.4, although JetPack 6.1 comes with 12.6):
Next, I uncommented all the torch_tensorrt build-related code from the Dockerfile you posted. If I check out the main branch of torch_tensorrt, I get the following errors:
@urbste @alexoterno torch_tensorrt typically requires updates to the build scripts; you may be able to find them in that repo if they have been updated for JetPack 6.1. torch2trt is lighter-weight to install but functionally similar - I end up using it more frequently (like in github.com/dusty-nv/clip_trt), and it still works on the latest release.
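For reference, the torch2trt workflow mentioned above is a single conversion call that traces a PyTorch module against example inputs and builds a TensorRT engine. The sketch below is illustrative, not code from this thread: it assumes torchvision's resnet18 as a stand-in model and an arbitrary 224x224 input shape, and it needs a CUDA + TensorRT environment (such as a Jetson), so it falls back gracefully when that stack is unavailable.

```python
# Minimal torch2trt conversion sketch (illustrative; resnet18 and the input
# shape are placeholder choices, not from the thread above).
try:
    import torch
    from torch2trt import torch2trt
    from torchvision.models import resnet18

    model = resnet18().eval().cuda()
    x = torch.ones((1, 3, 224, 224)).cuda()            # example input
    model_trt = torch2trt(model, [x], fp16_mode=True)  # build TensorRT engine
    err = torch.max(torch.abs(model(x) - model_trt(x))).item()
    status = f"converted (max abs diff vs PyTorch: {err:.4f})"
except Exception as e:  # torch/torch2trt not installed, or no CUDA device
    status = f"torch2trt stack not available: {e}"

print(status)
```

The converted `model_trt` is called like the original module, which is why torch2trt tends to be the lower-friction option on Jetson when the model's ops are supported.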
@dusty-nv thanks for your response. Yeah, I thought torch_tensorrt was more actively developed, and in addition I was using the ... But then I might give torch2trt a try :)
Would it maybe work to run the build script with ... ?
I remember trying torch_tensorrt previously with JP6.1; it had not yet been updated upstream, and I just went back to using torch2trt. I would say torch_tensorrt is good for production, as it has broader support. torch2trt originated out of the Jetson group (jetbot, nanoOWL, etc.) as a precursor to torch_tensorrt. Libraries that require custom update steps have been difficult to keep updated, though.
Hi,
I want to create a Docker image with PyTorch, Torchvision, ONNX Runtime GPU, and torch_tensorrt to use TensorRT on the NVIDIA Jetson Orin Nano with JetPack 6.1.
From your latest Docker image, dustynv/l4t-pytorch:r36.4.0, I have managed to install:
for CUDA 12.6 and Python 3.10.
I ran some tests, and I'm able to run both torch models and ONNX models, on the CPU and on the GPU.
However, I was not able to install torch_tensorrt by following and adapting your code from this Dockerfile for JetPack 4.6.
Do you have any insights on installing torch_tensorrt on JetPack 6.1?
Moreover, I saw that torch2trt is already installed in the dustynv/l4t-pytorch:r36.4.0 Docker image. I did not really understand the difference between torch2trt and torch_tensorrt. It looks like torch2trt is for edge devices, but it has not been updated in 8 months. Which one is better to use on the NVIDIA Jetson Orin Nano with JetPack 6.1, and which one has the lowest inference latency?
Thanks for sharing your Docker images.