First-person perspective, right-hand dataset and model for recognizing static hand poses. #1

Open
dev-td7 opened this issue Jan 7, 2019 · 0 comments
Labels
- enhancement (New feature or request)
- good first issue (Good for newcomers)
- help wanted (Extra attention is needed)

Comments

dev-td7 (Owner) commented on Jan 7, 2019

Currently, the pre-trained model in this repository recognizes hand poses from a second-person perspective. That is, the ideal input is an image you take of another person making a hand pose.

We need to create a new dataset and model for the same hand poses taken from a first-person perspective, i.e. images of your right hand taken with a camera held in your left hand. Refer to the reference image provided in the repository for the Indian Sign Language hand poses.

You need to create a dataset first; 300 images per label is the ideal number (see the sketch below for one way to collect them). Then use the training scripts in the train/ folder to create a new model.
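
A minimal sketch of how such a dataset could be collected with OpenCV, not part of this repository: it grabs frames from the default camera and saves 300 images per label into `dataset/<label>/` folders. The `LABELS` list, the `dataset/` path, and the file naming are assumptions for illustration; adapt them to however the training scripts in train/ expect the data to be laid out.

```python
# Hypothetical capture script (not from this repo): save 300 frames per label.
import os
import cv2

LABELS = ["5"]          # placeholder; replace with the actual hand-pose labels
IMAGES_PER_LABEL = 300  # target count suggested in this issue

cap = cv2.VideoCapture(0)  # default camera, held in the left hand
try:
    for label in LABELS:
        out_dir = os.path.join("dataset", label)
        os.makedirs(out_dir, exist_ok=True)
        print(f"Make the '{label}' pose with your right hand; press 'q' to skip this label.")
        count = 0
        while count < IMAGES_PER_LABEL:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imshow("capture", frame)
            cv2.imwrite(os.path.join(out_dir, f"{count:04d}.jpg"), frame)
            count += 1
            if cv2.waitKey(30) & 0xFF == ord("q"):
                break
finally:
    cap.release()
    cv2.destroyAllWindows()
```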

What are hand poses?
Static signs made with your hand. For example, holding up your hand to show the number '5' is a hand pose.
