training new data #7
Hi, I like your work. I want to create new data, but I don't understand the structure of your skeletal data. Would you be willing to teach me?
I believe that those are the Y-axis positions of the middlePIP_left keypoint through the whole video (take a look at this to see the codification of the hands' keypoints). Take into account that some other variables, like labels and height, appear as independent variables, not inside lists. However, I don't understand the order of the variables in the header section.
I think so, but these should be the features extracted from the coordinates. Now I don't understand why their dimensions are different.
Let's make a minimal example. For each frame we have estimations for each keypoint, so for each (keypoint, axis) pair we have, let's say:

$$ \text{left.wrist}_x = [\, x \ : \ \exists\, (x, y, \text{confidence})_{i,\ \text{left.wrist}},\ i \in [1, n_{\text{frames}}] \,] $$

$$ \text{left.wrist}_y = [\, y \ : \ \exists\, (x, y, \text{confidence})_{i,\ \text{left.wrist}},\ i \in [1, n_{\text{frames}}] \,] $$

$$ \text{right.wrist}_x = [\, x \ : \ \exists\, (x, y, \text{confidence})_{i,\ \text{right.wrist}},\ i \in [1, n_{\text{frames}}] \,] $$

$$ \text{right.wrist}_y = [\, y \ : \ \exists\, (x, y, \text{confidence})_{i,\ \text{right.wrist}},\ i \in [1, n_{\text{frames}}] \,] $$

As you can see, each of these objects has a length of $n_{\text{frames}}$. In your particular example, however, the list of zeros is the codification of an incorrect estimation, i.e., if the confidence is lower than a given threshold, the pose estimation method will return 0 for that keypoint. Hope this helps!
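To make the construction above concrete, here is a minimal sketch in Python of how such per-keypoint lists can be assembled from per-frame (x, y, confidence) triples. The function name, the `frames` structure, and the 0.5 threshold are hypothetical illustrations, not the repository's actual code:

```python
CONF_THRESHOLD = 0.5  # hypothetical cutoff; below this, the estimation is treated as incorrect

def keypoint_series(frames, keypoint):
    """Collect the x and y positions of one keypoint across all frames.

    `frames` is a list of dicts mapping keypoint names to
    (x, y, confidence) triples, one dict per video frame.
    Low-confidence estimations are encoded as 0, which is why
    runs of zeros appear in the exported lists.
    """
    xs, ys = [], []
    for frame in frames:
        x, y, conf = frame[keypoint]
        if conf < CONF_THRESHOLD:
            x, y = 0.0, 0.0  # codification of an incorrect estimation
        xs.append(x)
        ys.append(y)
    return xs, ys  # each list has length n_frames

# Example: two frames, one with a confident estimate, one without.
frames = [
    {"left.wrist": (0.42, 0.63, 0.97)},
    {"left.wrist": (0.41, 0.60, 0.10)},  # low confidence -> encoded as zeros
]
left_wrist_x, left_wrist_y = keypoint_series(frames, "left.wrist")
print(left_wrist_x)  # [0.42, 0.0]
print(left_wrist_y)  # [0.63, 0.0]
```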
Thank you, that's a great explanation.
You're welcome, thanks! |
Thanks, both — especially @RodGal-2020 for the explanation for other members of the thread during my absence. We will be providing support for more pose estimator formats, such as OpenPose, MMPose, and MediaPipe — including example code — very soon. This will include code for conversion from standardized formats. Please stay tuned; I will update you here. For now, you can convert the data yourself (as suggested above, and sketched below) or obtain it directly in the supported format using the Vision API, which was used in our original work (the WACV'22 paper). To do so, you can use our Pose Data Annotator app (Mac App Store, GitHub with Swift code).
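As a rough illustration of the kind of conversion meant here, the sketch below flattens OpenPose per-frame JSON files (which store each person's keypoints as a flat `[x, y, confidence, ...]` list) into per-keypoint coordinate lists like those discussed above. The function name and the target column layout are assumptions for illustration; the repository's actual supported format may differ:

```python
import json
from pathlib import Path

NUM_HAND_KEYPOINTS = 21  # the OpenPose hand model has 21 keypoints; index 0 is the wrist

def openpose_to_columns(json_dir, field="hand_left_keypoints_2d"):
    """Read OpenPose per-frame JSON files (sorted by filename) and return
    {keypoint_index: (xs, ys)} lists spanning the whole video."""
    columns = {k: ([], []) for k in range(NUM_HAND_KEYPOINTS)}
    for path in sorted(Path(json_dir).glob("*.json")):
        data = json.loads(path.read_text())
        people = data.get("people", [])
        # If no person was detected in this frame, pad with zeros so that
        # every list keeps length n_frames.
        flat = people[0][field] if people else [0.0] * (3 * NUM_HAND_KEYPOINTS)
        for k in range(NUM_HAND_KEYPOINTS):
            x, y, conf = flat[3 * k : 3 * k + 3]
            xs, ys = columns[k]
            # OpenPose already reports undetected keypoints as 0.
            xs.append(x)
            ys.append(y)
    return columns

# Usage (hypothetical directory of per-frame OpenPose output):
# cols = openpose_to_columns("video_001/openpose_json")
# left_wrist_x, left_wrist_y = cols[0]
```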
May I ask, is the application Mac-only, or is there a version that can be used on Windows?
|