[FEATURE] Support ONNX and TensorRT export #4
TensorRT export logs and statistics - trt.log
TensorRT produces sparser matches for nearly identical poses, but overall performance seems the same, if not better. Device: Nvidia Jetson AGX Xavier - Jetpack 5.1
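For reference, here is a minimal sketch of how an ONNX model can be converted to a serialized TensorRT engine with the TensorRT Python API (TensorRT 8.x, as shipped with Jetpack 5.1). The file names are illustrative, not the PR's actual paths:

```python
import tensorrt as trt

# Build a serialized TensorRT engine from an ONNX file (TensorRT 8.x API).
logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("xfeat.onnx", "rb") as f:  # illustrative file name
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # FP16 usually helps on Jetson devices

engine_bytes = builder.build_serialized_network(network, config)
with open("xfeat.engine", "wb") as f:
    f.write(engine_bytes)
```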
Hello @IamShubhamGupto and @acai66, thank you so much for your contributions! I notice that both of you are providing an ONNX export, and it probably doesn't make sense to merge both since they serve the same purpose. Do you have any suggestions on how to proceed? In my opinion, it makes sense to choose the one that supports the earliest operator set, as that would maximize the model's compatibility with the greatest number of devices and libraries.
Sure, maybe @acai66 and I can collaborate on merging our work. In summary, there is no specific operator set defined in this PR; it is up to the user to provide one. By default it is the latest supported, which is 17. Users can choose an older opset, such as 13, by passing a command-line argument; a rough sketch of how that can look follows.
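This is a hypothetical sketch, not the PR's actual code; the flag names, input shape, and stand-in module are assumptions:

```python
import argparse
import torch

def export_onnx(model: torch.nn.Module, path: str, opset: int) -> None:
    """Export a model to ONNX with a user-selectable opset version."""
    model.eval()
    dummy = torch.randn(1, 1, 480, 640)  # assumed input shape
    torch.onnx.export(
        model, dummy, path,
        opset_version=opset,          # e.g. 17 (default) or an older set like 13
        input_names=["image"],
        output_names=["output"],
    )

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--opset", type=int, default=17,
                        help="ONNX opset; older sets such as 13 widen compatibility")
    parser.add_argument("--output", type=str, default="model.onnx")
    args = parser.parse_args()
    model = torch.nn.Conv2d(1, 8, kernel_size=3)  # stand-in for the real model
    export_onnx(model, args.output, args.opset)
```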
Another key difference between our PRs is that @acai66's can also quantize the exported model. Let me know what you all think.
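For context, post-training dynamic quantization of an ONNX model can look like this; a sketch using ONNX Runtime's quantization tooling, with illustrative file names, and not necessarily how @acai66's PR does it:

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Quantize the exported model's weights to 8-bit integers;
# activations are quantized dynamically at runtime.
quantize_dynamic(
    model_input="xfeat.onnx",        # illustrative file name
    model_output="xfeat_int8.onnx",
    weight_type=QuantType.QUInt8,
)
```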
Thank you for your suggestion. I agree, and I believe merging our work would be a beneficial course of action.
My PR is aimed at improving the convenience and compatibility of deployment, especially in environments without PyTorch. I've uploaded an example of using the ONNX Runtime Python API, which can be easily reimplemented using the ONNX Runtime C++ API.
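For anyone following along, a minimal ONNX Runtime Python inference call looks roughly like this; the file name, input shape, and provider are illustrative assumptions, not the uploaded example's exact code:

```python
import numpy as np
import onnxruntime as ort

# Create an inference session; swap in CUDAExecutionProvider where available.
session = ort.InferenceSession("xfeat.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
image = np.random.rand(1, 1, 480, 640).astype(np.float32)  # placeholder input

outputs = session.run(None, {input_name: image})  # None -> return all outputs
print([o.shape for o in outputs])
```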
That's amazing! We can find some time this week to start merging the examples. I'll give you editor access to my fork.
Fantastic, @IamShubhamGupto and @acai66! While I'm not experienced in ONNX, I'm here to help with anything related to the original code — just let me know. |
Thank you very much for the invitation. I am concerned that I have little experience with TensorRT deployment. I have invited you to be a collaborator on the forked repository.
I apologize for my poor English; I am using translation software to communicate.
@acai66 for now I am pausing development on my branch. We will merge the ONNX export from your PR, and then I will continue TensorRT development here. I will review and commit changes on your PR in the next couple of hours.
Hello @guipotje and team,

In this PR, I am adding support for exporting ONNX and TensorRT models. On running the TensorRT engine, we can observe an improvement in FPS. Since the evaluation code is currently unavailable, I cannot quantify the accuracy, but it feels similar to the performance of the provided `xfeat.pt` file. The following is currently supported and changed:

- `realtime_demo.py` runs with the TensorRT engine
- `realtime_demo.py` is backwards compatible with `cv2` version `4.5.0`
- `README.md` documents the TensorRT export

We can possibly further improve FPS by simplifying the ONNX export using `onnx-simplifier`, but that can be an iterative PR; a rough sketch of that idea follows below. I will be attaching logs and performance observations in comments below. Let me know if there are any changes required. Thank you again for the amazing work!
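As a note on the `onnx-simplifier` idea, usage would look roughly like this; a sketch with illustrative file names, not code from this PR:

```python
import onnx
from onnxsim import simplify

# Load the exported graph, fold constants / remove redundant ops, and
# verify that the simplified model still matches the original.
model = onnx.load("xfeat.onnx")                 # illustrative file name
model_simplified, check = simplify(model)
assert check, "simplified ONNX model failed the validation check"
onnx.save(model_simplified, "xfeat_simplified.onnx")
```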