Replies: 2 comments
-
@OceanPang can you please clarify the usage of the inference API?
-
I'm not involved in the development of this project. However, I believe the …
-
Hi,
I'm able to run the test script with the trained model and obtain inference results on BDD100K. However, I can't figure out how to run inference on a single image from the dataset.
I see there are inference APIs in `qdtrack/apis/inference.py`. I'm able to initialize the model by calling `init_model`; however, I'm unsure what to pass as `frame_id` when calling `inference_model(model, imgs, frame_id)`.
Any help would be greatly appreciated. Thank you.
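For anyone landing here with the same question, below is a minimal sketch of how I would expect these functions to be used, not an official answer. I'm assuming that `frame_id` is the index of the frame within its video sequence (so `frame_id=0` starts a fresh sequence and a single image can be treated as a one-frame sequence), that `init_model` accepts a config path, a checkpoint path, and a device, and that the config/checkpoint paths shown are placeholders to be replaced with your own.

```python
# Minimal sketch (untested). Assumptions: frame_id is the frame index within
# a video sequence, with frame_id=0 resetting the tracker for a new sequence;
# the config and checkpoint paths are placeholders.
from qdtrack.apis.inference import init_model, inference_model

config_file = 'configs/my_qdtrack_bdd100k_config.py'   # placeholder: your config
checkpoint_file = 'checkpoints/my_qdtrack_bdd100k.pth'  # placeholder: your trained model

model = init_model(config_file, checkpoint_file, device='cuda:0')

# Single image: treat it as a one-frame sequence.
result = inference_model(model, 'demo/sample.jpg', frame_id=0)

# Consecutive frames of the same video: increment frame_id so the tracker
# can associate instances across frames.
# for i, img_path in enumerate(sorted_frame_paths):
#     result = inference_model(model, img_path, frame_id=i)
```

If that reading of `frame_id` is off, comparing against how the test script iterates over a BDD100K video sequence should make the intended semantics clear.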