Currently, our search method calculates the Euclidean distance between the live hand pose data and every recorded hand pose for that handedness. In one respect this is excellent: it yields a full list of search results that can be sorted by distance, and every single pose gets a calculated (potentially useful) result. However, as the pose library grows, the search routine becomes more cumbersome and time consuming. Array-clutching, coupled with the fact that we don’t require a full search results list per frame for a good user experience, has mostly ameliorated the problem so far. But I think I’m starting to feel the detection lag, and I think we can do better.
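For reference, here’s a minimal sketch of that linear scan (in TypeScript; `Pose`, `distance`, and `search` are hypothetical stand-ins, not Handy’s actual internals):

```ts
// Hypothetical sketch of the current approach: a full linear scan of the library.

interface Pose {
  name: string
  joints: Float32Array // flattened joint positions for one hand
}

// Plain Euclidean distance between two flattened joint arrays.
function distance(a: Float32Array, b: Float32Array): number {
  let sum = 0
  for (let i = 0; i < a.length; i++) {
    const diff = a[i] - b[i]
    sum += diff * diff
  }
  return Math.sqrt(sum)
}

// Compare the live pose against every recorded pose, then sort by distance.
function search(live: Float32Array, library: Pose[]) {
  return library
    .map(pose => ({ pose, distance: distance(live, pose.joints) }))
    .sort((a, b) => a.distance - b.distance)
}
```

Every frame this performs `library.length` distance calculations, which is exactly the cost that grows with the library.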
Proposal:
Let’s say Handy requires that our pose library contain at least one specific pose (for example, the “at rest” pose), and that all other pose objects in the library store the Euclidean distance between themselves and this “at rest” pose. We then take the live hand pose data and get the distance, d, between it and the “at rest” pose. We can now immediately see (without calculating further distances) which poses in the library might be similar to the live pose just by looking at each recorded pose’s d property. If the live pose is very distant from the “at rest” pose, then we can eliminate poses that are similar to the “at rest” pose from the actual search; we only need to search through recorded poses whose d is similar to the live pose’s d. (This is safe thanks to the triangle inequality: the distance between the live pose and any recorded pose is at least the difference between their two d values, so a recorded pose whose d differs from the live pose’s d by more than our match threshold can be skipped without ever computing its actual distance.)
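Here’s a rough sketch of how that pruning might look (again hypothetical TypeScript, reusing the `distance` helper from the sketch above, and assuming each recorded pose carries a pre-computed `distanceFromRest` property and we tolerate a match band of `threshold`):

```ts
interface IndexedPose {
  name: string
  joints: Float32Array
  distanceFromRest: number // pre-computed: distance(joints, restPose.joints)
}

// Search only the recorded poses whose pre-computed distance from the
// "at rest" pose is close to the live pose's own distance from it.
// Triangle inequality: |d(live, rest) - d(pose, rest)| <= d(live, pose),
// so anything outside the band cannot be within `threshold` of the live pose.
function prunedSearch(
  live: Float32Array,
  restPose: Float32Array,
  library: IndexedPose[],
  threshold: number
) {
  const d = distance(live, restPose) // one distance calculation up front
  return library
    .filter(pose => Math.abs(pose.distanceFromRest - d) <= threshold)
    .map(pose => ({ pose, distance: distance(live, pose.joints) }))
    .sort((a, b) => a.distance - b.distance)
}
```

The pre-computation happens once, when a pose is recorded (or at load time), so the per-frame cost drops to one distance against the “at rest” pose plus however many candidates survive the filter.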
Perhaps we can further compound this efficiency by requiring two poses in the library instead of one, and pre-calculating the distances between these required poses and all the others? (There must be diminishing returns here, however.)
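Extending the same idea, each required pose would act as an extra reference: every recorded pose stores one pre-computed distance per reference, and a candidate must pass the band test against all of them before we compute its real distance. Something like this (hypothetical, same caveats as above):

```ts
interface MultiIndexedPose {
  name: string
  joints: Float32Array
  distancesFromRefs: number[] // one pre-computed distance per required reference pose
}

// Keep only the poses that survive the triangle-inequality test for every reference.
function prunedSearchMulti(
  live: Float32Array,
  refPoses: Float32Array[],
  library: MultiIndexedPose[],
  threshold: number
) {
  const liveDistances = refPoses.map(ref => distance(live, ref))
  return library
    .filter(pose =>
      pose.distancesFromRefs.every(
        (dRef, i) => Math.abs(dRef - liveDistances[i]) <= threshold
      )
    )
    .map(pose => ({ pose, distance: distance(live, pose.joints) }))
    .sort((a, b) => a.distance - b.distance)
}
```

Each extra reference costs one more live distance calculation per frame and can only shrink the candidate set, which is presumably where the diminishing returns kick in: once the surviving set is already small, additional references buy very little.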