I have successfully used the \psi documentation to capture RGB frame data from the HoloLens 2 and send it to a Python server over TCP. The approach works well: I convert the RGB frame data into bytes and transmit it.
Now, I’m attempting to do the same for hand pose data. Specifically, I am trying to extract hand joint coordinates from \psi and send them over TCP to my Python server, but I’m running into difficulty. I’m looking for an equivalent of the following method, which I used for extracting RGB frames:
N.B.: To make the RGB frame capture work, I modified the code at L449 in HoloLensCaptureServer.cs, and it successfully started sending RGB frames to the server.
This works for capturing the RGB frames. However, I am unable to figure out how to extract the actual hand pose data (specifically the joint coordinates) from \psi. I’ve tried the Microsoft.Psi.MixedReality.StereoKit.HandsSensor class, but I couldn’t find a way to access the joint coordinates through it.
Could you please guide me on how to extract the hand pose data from PSI, similar to how I am extracting RGB frames? Any pointers or code examples would be greatly appreciated.
Thank you!
You could look at CaptureTcpStream&lt;T&gt; on the server side, which is the generic method for capturing all streams from the HoloLens. One of those streams will have the name "Hands", and T will be (Microsoft.Psi.MixedReality.OpenXR.Hand, Microsoft.Psi.MixedReality.OpenXR.Hand).
Once you have the stream, you can do whatever you want with it. For example, if you name the stream handsStream, you could do something like:
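As a rough illustration, here is a minimal sketch of serializing the joint positions to bytes for your TCP connection. This is an assumption-laden example, not code from the repo: it assumes `handsStream` is the captured `(Hand, Hand)` stream, and that the OpenXR `Hand` type exposes a `Joints` array of MathNet.Spatial `CoordinateSystem` poses (check the class definition in your \psi version, as member names may differ):

```csharp
// Sketch only — assumes Microsoft.Psi.MixedReality.OpenXR.Hand exposes a Joints
// array of CoordinateSystem poses; verify against your version of the class.
using System.IO;
using MathNet.Spatial.Euclidean;
using Microsoft.Psi;

handsStream.Do(hands =>
{
    using var buffer = new MemoryStream();
    using var writer = new BinaryWriter(buffer);

    // hands.Item1 is the left hand, hands.Item2 the right hand.
    foreach (var hand in new[] { hands.Item1, hands.Item2 })
    {
        // Write a presence flag so the Python side knows whether joints follow.
        bool hasJoints = hand?.Joints != null;
        writer.Write(hasJoints);
        if (hasJoints)
        {
            foreach (var joint in hand.Joints)
            {
                // Each joint pose is a CoordinateSystem; its Origin is the
                // joint position in the world coordinate frame.
                var p = joint?.Origin ?? default(Point3D);
                writer.Write((float)p.X);
                writer.Write((float)p.Y);
                writer.Write((float)p.Z);
            }
        }
    }

    // buffer.ToArray() can now be sent over your existing TCP socket,
    // using the same mechanism you already use for RGB frames.
});
```

On the Python side you could then read the presence flag and unpack the little-endian floats with `struct.unpack`, three per joint, mirroring the write order above.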