ibhoomii/sign-language-recognition


# 🤟 Sign Language Recognition & Voice Translator

A real-time Sign Language Recognition System that detects hand signs and translates them into text and speech. Built using Python, Keras, and OpenCV, this application helps bridge the communication gap between the deaf and hearing communities.

## 📌 Features

### 📷 Real-Time Sign Detection

- Uses a webcam to detect and recognize American Sign Language (ASL) gestures
- Recognizes signs such as "Hi" and "Thank You", and is extendable to new gestures
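A minimal sketch of the capture-and-preprocess step described above. The ROI coordinates, crop size, and function names are illustrative assumptions, not the repository's actual code; the resize is done in plain NumPy so the helper stays testable without OpenCV:

```python
import numpy as np

# Assumption: the model expects 64x64 grayscale crops, and the hand is
# shown inside a fixed region of interest (ROI) of the camera frame.
ROI = (100, 100, 300, 300)  # x1, y1, x2, y2 (assumed)

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Crop the ROI from a BGR frame, convert to grayscale, and normalize."""
    x1, y1, x2, y2 = ROI
    roi = frame[y1:y2, x1:x2]
    # Grayscale via the standard BGR luminance weights.
    gray = roi @ np.array([0.114, 0.587, 0.299])
    # Nearest-neighbour downsample to 64x64 (cv2.resize would be used in practice).
    ys = np.linspace(0, gray.shape[0] - 1, 64).astype(int)
    xs = np.linspace(0, gray.shape[1] - 1, 64).astype(int)
    small = gray[np.ix_(ys, xs)]
    # Scale pixel values to [0, 1] and add batch/channel axes for the CNN.
    return (small / 255.0).astype("float32").reshape(1, 64, 64, 1)

def run_camera_loop(model=None):
    """Webcam loop (requires OpenCV and a camera; defined but not run here)."""
    import cv2  # imported locally so the preprocessing above stays dependency-free
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        batch = preprocess_frame(frame)
        # `model` would be the trained Keras classifier; prediction omitted here.
        cv2.imshow("Sign Language Recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Each preprocessed batch is then fed to the trained classifier once per frame.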

### 🔤 Live Text Translation

- Translates each detected sign into readable English
- Displays the translated words on screen instantly
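One way the on-screen translation step can work is decoding the classifier's softmax output into a display label, showing text only for confident predictions. The label list and threshold below are assumptions for illustration:

```python
import numpy as np

# Assumed label set; the real project's classes may differ.
LABELS = ["Hi", "Thank You"]

def decode_prediction(probs, threshold: float = 0.8) -> str:
    """Map a softmax probability vector to display text.

    Returns the label only when the top probability clears the confidence
    threshold; otherwise returns an empty string so uncertain frames
    display nothing rather than a wrong word.
    """
    probs = np.asarray(probs).ravel()
    idx = int(np.argmax(probs))
    return LABELS[idx] if probs[idx] >= threshold else ""

print(decode_prediction([0.05, 0.95]))        # -> Thank You
print(repr(decode_prediction([0.55, 0.45])))  # -> '' (below threshold)
```

The returned string can then be drawn onto the video frame (e.g. with OpenCV's `cv2.putText`) so the translation appears live on screen.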

### 🔊 Voice Output

- Uses a text-to-speech engine to speak the translated text aloud
- Helps hearing people understand each gesture through audio
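A sketch of the speech step, assuming the pyttsx3 engine mentioned in the tech stack. The deduplication wrapper is an illustrative addition: because the same sign is detected on many consecutive video frames, it avoids re-speaking the same word over and over:

```python
class SignSpeaker:
    """Speaks each newly detected sign once, skipping repeats of the sign
    spoken on the previous call (signs persist across many video frames)."""

    def __init__(self):
        self._last = None

    def update(self, text: str) -> bool:
        """Return True (and speak) only when `text` is a new, non-empty sign."""
        if not text or text == self._last:
            return False
        self._last = text
        self._speak(text)
        return True

    def _speak(self, text: str) -> None:
        # Requires `pip install pyttsx3`; guarded broadly so the detection
        # logic still works on machines with no speech backend installed.
        try:
            import pyttsx3
            engine = pyttsx3.init()
            engine.say(text)
            engine.runAndWait()
        except Exception:
            pass  # no TTS backend available; on-screen text output still works
```

In the main loop, `speaker.update(decoded_label)` would be called once per frame with whatever label the classifier produced.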

### 🤖 Machine Learning Powered

- Trained with Keras using a Convolutional Neural Network (CNN)
- Image preprocessing and gesture detection handled with OpenCV
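A minimal Keras CNN of the kind described above. The layer sizes, 64×64 grayscale input shape, and two-class output are illustrative assumptions, not the project's actual architecture:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(num_classes: int) -> keras.Model:
    """Small CNN for 64x64 grayscale gesture crops (sizes are assumptions)."""
    return keras.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(16, 3, activation="relu"),  # learn local edge/shape features
        layers.MaxPooling2D(),                    # downsample feature maps
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # one score per sign
    ])

model = build_model(num_classes=2)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training would then call `model.fit` on a labeled dataset of preprocessed gesture images, and the saved model is loaded by the real-time detection loop.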

## 🛠 Tech Stack

- Programming Language: Python
- Libraries:
  - OpenCV (real-time image processing)
  - Keras / TensorFlow (gesture recognition model)
  - pyttsx3 or gTTS (text-to-speech)
  - NumPy, Pillow (image manipulation and matrix operations)
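If the stack above is installed from PyPI, the dependencies would look roughly like this (package names inferred from the library list; the repository may pin specific versions):

```shell
# Assumed PyPI package names for the libraries listed above
pip install opencv-python tensorflow pyttsx3 gTTS numpy Pillow
```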

## About

A machine learning-based project for real-time sign language recognition.
