[iOS] video vs livestream model coordinates #5780
Labels: platform:ios, task:hand landmarker, type:support
Hi team!
I have a question about the HandLandmarker running modes on iOS and their outputs.
I am currently experimenting with two ways to initialise HandLandmarker on iOS, along the lines of the sketch below.
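The original snippet is not reproduced here; as a hedged reconstruction, the two options presumably differ only in the `runningMode` (and the delegate required by live stream mode), using the MediaPipe Tasks Vision Swift API. The model path is a placeholder:

```swift
import MediaPipeTasksVision

// Option 1: video mode — run on decoded video frames.
let videoOptions = HandLandmarkerOptions()
videoOptions.baseOptions.modelAssetPath = "hand_landmarker.task" // placeholder path
videoOptions.runningMode = .video
let videoLandmarker = try HandLandmarker(options: videoOptions)

// Option 2: live stream mode — run on a live camera feed; results
// are delivered asynchronously via a delegate instead of returned.
let liveOptions = HandLandmarkerOptions()
liveOptions.baseOptions.modelAssetPath = "hand_landmarker.task"
liveOptions.runningMode = .liveStream
liveOptions.handLandmarkerLiveStreamDelegate = resultReceiver // conforms to HandLandmarkerLiveStreamDelegate
let liveLandmarker = try HandLandmarker(options: liveOptions)
```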
The landmark outputs seem to differ between these two modes.
The documentation at https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker mentions different types of output for decoded video frames versus a live video feed.
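If the passage in question is the running-mode difference, it concerns how results are delivered rather than a different result type: video mode returns a `HandLandmarkerResult` synchronously, while live stream mode delivers it through the delegate. A minimal sketch, assuming `mpImage` (an `MPImage` built from the frame) and `timestampMs` already exist:

```swift
import MediaPipeTasksVision

// Live stream results arrive on a delegate rather than as a return value.
final class LandmarkReceiver: NSObject, HandLandmarkerLiveStreamDelegate {
    func handLandmarker(_ handLandmarker: HandLandmarker,
                        didFinishDetection result: HandLandmarkerResult?,
                        timestampInMilliseconds: Int,
                        error: Error?) {
        guard let landmarks = result?.landmarks.first else { return }
        // `landmarks` is [NormalizedLandmark]; x/y are normalized to [0, 1].
    }
}

// Video mode: synchronous — the result is the return value.
let result = try videoLandmarker.detect(
    videoFrame: mpImage, timestampInMilliseconds: timestampMs)

// Live stream mode: detectAsync returns immediately and the result
// is delivered to the delegate set on the options.
try liveLandmarker.detectAsync(
    image: mpImage, timestampInMilliseconds: timestampMs)
```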
What is the difference between them, and is there a way to convert one to the other?
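For context on why the coordinates might look different: in both modes the `landmarks` field is normalized to [0, 1] relative to the input image, so one possible cause of a mismatch (an assumption, not confirmed here) is the orientation of the `MPImage` built from camera frames, which rotates the normalized axes along with the image. A sketch of mapping a normalized landmark to pixel coordinates, where `imageWidth`/`imageHeight` are the dimensions of your frame:

```swift
import CoreGraphics
import MediaPipeTasksVision

// Map a normalized landmark to pixel coordinates of the input frame.
// Assumes the MPImage was created with the correct orientation; if the
// camera feed is rotated, the normalized axes rotate with it.
func pixelPoint(from landmark: NormalizedLandmark,
                imageWidth: CGFloat, imageHeight: CGFloat) -> CGPoint {
    CGPoint(x: CGFloat(landmark.x) * imageWidth,
            y: CGFloat(landmark.y) * imageHeight)
}
```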