HandSpeak: Visual Gesture Interpreter

Overview

This project aims to develop a sign language detection system using computer vision and machine learning techniques. The system interprets hand gestures captured through a webcam in real time and classifies them into predefined sign language categories.

Features

  • Real-time hand detection and tracking using webcam input.
  • Classification of detected hand gestures into predefined sign language categories.
  • User-friendly interface for collecting training data through webcam feed.
  • Support for customization and extension with additional sign language categories.

Installation

  1. Clone the repository to your local machine:
    git clone git@github.com:Priyanshuparth/HandSpeak-Visual-Gesture-Interpreter.git
  2. Install the required dependencies (typical contents of the requirements file are sketched after these steps):
    pip install -r requirements.txt
  3. Download the pre-trained model and label files (if available) and place them in the appropriate directories.
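
The authoritative dependency list is the repository's own requirements.txt. As a rough guide only, a cvzone-based detector like this one typically needs at least the following packages; this set is an assumption, not confirmed by the repository, and versions are left unpinned:

    opencv-python   # webcam capture, drawing, image I/O
    cvzone          # hand detection/tracking and classification wrappers
    mediapipe       # backend used by cvzone's HandDetector
    numpy           # image arrays
    tensorflow      # loads the Keras model used by cvzone's Classifier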

Usage

Run the main Python script to start the sign language detection system:

python main.py

Follow the on-screen instructions to interact with the system:

  • Use webcam input to capture hand gestures in real time.
  • Press q to exit the application.
  • Press s to save the current frame for data collection (a sketch of the main loop follows this list).
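
main.py itself is not reproduced here, but a real-time loop built on cvzone (see Acknowledgements) typically looks like the minimal sketch below. The model and label file names (keras_model.h5, labels.txt) and the window title are assumptions, not confirmed by the repository:

    import time
    import cv2
    from cvzone.HandTrackingModule import HandDetector
    from cvzone.ClassificationModule import Classifier

    # Hypothetical paths -- point these at the downloaded model and label files.
    classifier = Classifier("keras_model.h5", "labels.txt")
    detector = HandDetector(maxHands=1)

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Detect the hand; cvzone returns the annotated frame alongside the detections.
        hands, frame = detector.findHands(frame)
        if hands:
            # A full script would crop to hands[0]["bbox"] before classifying;
            # classifying the whole frame keeps this sketch short.
            predictions, index = classifier.getPrediction(frame, draw=False)
            cv2.putText(frame, f"class {index}", (30, 50),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255, 0, 255), 2)
        cv2.imshow("HandSpeak", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("q"):            # q exits the screen
            break
        if key == ord("s"):            # s saves the current frame
            cv2.imwrite(f"Image_{time.time()}.jpg", frame)

    cap.release()
    cv2.destroyAllWindows()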

Data Collection

To collect training data for the sign language detection model, follow these steps:

  1. Run the data collection script:
    python data_collection.py
  2. Follow the instructions to capture and save hand gesture images using the webcam (the capture loop is sketched below).
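
data_collection.py is likewise not reproduced here. The usual cvzone approach is to crop the detected hand with a small margin, center it on a fixed-size white canvas so every sample shares one shape, and save on the s key. A minimal sketch, assuming a 300x300 canvas, a 20-pixel margin, and an output folder such as Data/Hello (all assumptions):

    import time
    import cv2
    import numpy as np
    from cvzone.HandTrackingModule import HandDetector

    detector = HandDetector(maxHands=1)
    cap = cv2.VideoCapture(0)

    offset, size = 20, 300        # margin and canvas size: assumptions
    folder = "Data/Hello"         # hypothetical folder, one per gesture

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hands, frame = detector.findHands(frame)
        canvas = None
        if hands:
            x, y, w, h = hands[0]["bbox"]
            # Crop the hand with a margin, clamped to the frame borders.
            crop = frame[max(0, y - offset):y + h + offset,
                         max(0, x - offset):x + w + offset]
            if crop.size:
                # Scale the crop to fit, then center it on a white square.
                canvas = np.full((size, size, 3), 255, np.uint8)
                ch, cw = crop.shape[:2]
                scale = size / max(ch, cw)
                resized = cv2.resize(crop, (int(cw * scale), int(ch * scale)))
                rh, rw = resized.shape[:2]
                top, left = (size - rh) // 2, (size - rw) // 2
                canvas[top:top + rh, left:left + rw] = resized
                cv2.imshow("Sample", canvas)
        cv2.imshow("Data Collection", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("s") and canvas is not None:
            # time.time() yields timestamped names like those of the sample
            # images shown under Various Symbols below.
            cv2.imwrite(f"{folder}/Image_{time.time()}.jpg", canvas)
        if key == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()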

Contributing

Contributions to this project are welcome! If you have ideas for improvements or new features, please submit a pull request or open an issue on GitHub.

License

This project is licensed under the MIT License.

Acknowledgements

  • The hand detection and tracking functionalities in this project are powered by the cvzone library.

Various Symbols

Sample images in the repository cover the following gestures: Hello, I love you, No, Okay, Please, Sorry, Thank you, Yes, and Bye.

Output

Screenshots of the running system are included in the repository.
