
NVGesture Processing #30

Open

aamzhas opened this issue Jul 17, 2023 · 2 comments

aamzhas commented Jul 17, 2023

Hi, I was going through your processing for NVGesture and wanted some clarification regarding a function call.

I noticed that in nvidia_process.py:31 the uvd2xyz_sherc() function is called rather than uvd2xyz_nvidia(). The difference between the two functions is that the focal length f and the image-center parameters change. Would this mean that the processing was incorrect? Am I misunderstanding something? Any help would be appreciated. Thank you!
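For reference, both functions presumably implement the standard pinhole back-projection and differ only in their constants, so mismatched intrinsics would skew x and y in proportion to depth. A minimal sketch, with the intrinsics left as parameters (the values themselves are dataset-specific and not reproduced here):

```python
# Minimal sketch of a pinhole uvd -> xyz back-projection, assuming this is
# the shared form of both functions; fx, fy, cx, cy are illustrative
# parameters, not the repository's actual constants.
import numpy as np

def uvd2xyz(uvd, fx, fy, cx, cy):
    """Back-project pixel coordinates + depth (u, v, d), shape (N, 3),
    into camera-space (x, y, z)."""
    u, v, d = uvd[:, 0], uvd[:, 1], uvd[:, 2]
    x = (u - cx) * d / fx  # horizontal offset from image center, scaled by depth
    y = (v - cy) * d / fy  # vertical offset from image center, scaled by depth
    return np.stack([x, y, d], axis=1)  # z is the depth itself
```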

ycmin95 (Collaborator) commented Jul 22, 2023

Yes, it seems there is a typo here; the function called should be uvd2xyz_nvidia(). Thanks for the correction!

During our experiments, we didn't notice a significant performance difference between the image and camera coordinate systems (we used the camera's default intrinsic matrix), and most experiments were conducted in the image coordinate system, as shown in the dataloader. However, I believe the camera coordinate system should be more robust in real-world applications, and improving the robustness of point-cloud-based methods would be a valuable research direction.
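To make the distinction concrete, here is a hedged sketch of the two point representations; the image size, depth range, and intrinsics below are assumed example values, not the repository's defaults:

```python
# Illustration of the two point representations compared above; W, H, D_MAX,
# and the intrinsics are made-up example values, not the repo's defaults.
import numpy as np

W, H, D_MAX = 320.0, 240.0, 1500.0           # hypothetical image size and depth range
FX, FY, CX, CY = 460.0, 460.0, W / 2, H / 2  # hypothetical default intrinsics

def image_coord(uvd):
    """Image coordinate system: normalize (u, v, d) directly, no intrinsics."""
    return uvd / np.array([W, H, D_MAX])

def camera_coord(uvd):
    """Camera coordinate system: back-project with the (assumed) intrinsics."""
    u, v, d = uvd[:, 0], uvd[:, 1], uvd[:, 2]
    return np.stack([(u - CX) * d / FX, (v - CY) * d / FY, d], axis=1)
```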

NOTGOOOOD commented

Hi, I am confused about converting the point cloud to 3D space using uvd2xyz_***().
I couldn't find at what stage the 3D information comes into play.
It seems that the point cloud in both the training and testing stages is [batch_size, T, N, 4], per here.
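For what it's worth, the layout alone can't reveal which coordinate system the points are in, because the uvd-to-xyz conversion is pointwise and shape-preserving; a small sketch with illustrative sizes:

```python
# Minimal shape check, assuming the [batch_size, T, N, 4] layout described
# above; all names and sizes are illustrative.
import torch

batch_size, T, N = 2, 8, 512           # clips, frames per clip, points per frame
pc = torch.randn(batch_size, T, N, 4)  # last dim: e.g. (x, y, z, feature)

# A uvd -> xyz conversion acts per point, so applying it to pc[..., :3]
# during preprocessing leaves the (batch_size, T, N, 4) layout unchanged.
print(pc.shape)  # torch.Size([2, 8, 512, 4])
```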
