I have trained a model that classifies hand gestures captured by the camera into ASL (American Sign Language) letters.
- The image is captured via the system's built-in webcam.
- A frame is drawn around the hand.
- The image is converted from BGR to HSV color space.
- The hand is extracted from the image.
- The extracted gesture is passed to the model.
- The model predicts the hand gesture and prints the predicted letter along with a confidence percentage.
The dataset used here was taken from Kaggle; you can access it via the link.
You may need to adjust the HSV conversion thresholds and other values according to your setup (camera, lighting, skin tone).
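The final step, turning the model's output into a printed letter and confidence percentage, can be sketched as below. The 26-letter label set is an assumption (some ASL datasets omit J and Z, which require motion), and the probability vector here is a dummy standing in for a real model's softmax output:

```python
import string
import numpy as np

# Hypothetical label set: one class per ASL letter (assumption).
LETTERS = list(string.ascii_uppercase)

def decode_prediction(probs):
    """Map a softmax probability vector to (letter, confidence %)."""
    probs = np.asarray(probs, dtype=np.float64)
    idx = int(np.argmax(probs))
    return LETTERS[idx], 100.0 * probs[idx]

# Dummy probabilities in place of a real model's output:
probs = np.full(26, 0.1 / 25)
probs[0] = 0.9  # pretend the model is 90% confident it sees "A"
letter, conf = decode_prediction(probs)
print(f"Predicted: {letter} ({conf:.1f}%)")  # Predicted: A (90.0%)
```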