An AI/ML-powered sign language detection and translation system developed as part of my MCA final year project. The system uses computer vision (OpenCV's `cv2` module) and machine learning to learn and recognize sign language gestures, enabling communication between individuals who use sign language and those who don't.
Key Features:
- Training: Train a gesture-recognition model on sign language gestures captured from a webcam with OpenCV.
- Detection: Real-time detection of sign language gestures using a trained machine learning model.
- Translation: Translate detected sign language gestures into text for further processing and analysis.
- Accessibility: Facilitate communication by bridging the gap between sign language users and non-signers.
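The detection-and-translation steps above can be sketched as a simple classifier over gesture feature vectors. This is a minimal illustration, not the repository's actual model: the `GESTURE_TEMPLATES` labels and coordinates are hypothetical, and in practice the features would come from frames captured and processed with OpenCV before being fed to the trained model.

```python
import math

# Hypothetical reference gestures: each sign is represented by a flat list of
# normalized (x, y) landmark coordinates. In the real pipeline these features
# would be extracted from webcam frames via OpenCV.
GESTURE_TEMPLATES = {
    "hello": [0.0, 0.0, 0.5, 0.1, 1.0, 0.2],
    "thanks": [0.0, 0.0, 0.2, 0.6, 0.4, 1.0],
}

def euclidean(a, b):
    """Distance between two feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features):
    """Translate an observed feature vector into the text label of the
    closest known gesture (nearest-neighbor matching)."""
    return min(GESTURE_TEMPLATES,
               key=lambda label: euclidean(features, GESTURE_TEMPLATES[label]))

# An observed frame whose features lie close to the "hello" template:
print(classify([0.05, 0.0, 0.48, 0.12, 0.98, 0.2]))  # hello
```

A real system would replace the nearest-neighbor lookup with the trained machine learning model, but the flow (capture features, classify, emit text) is the same.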
Future Scope:
- Text-to-Speech: Enhance the system with a text-to-speech feature, converting the translated text into spoken words.
- IoT Integration: Implement the system as an Internet of Things (IoT) device for portable and on-the-go sign language translation.
- Expansion of Gesture Library: Extend the system's capabilities by adding more sign language gestures to the training dataset, enabling recognition of a wider range of signs.
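Expanding the gesture library amounts to appending newly captured, labeled samples to the training dataset. A minimal sketch, assuming a CSV dataset layout of `label, f1, f2, ...` (the `"please"` label and feature values below are hypothetical examples, not part of the repository):

```python
import csv
import io

def append_samples(fp, label, samples):
    """Append labeled feature rows (label, f1, f2, ...) to a CSV dataset
    so the model can later be retrained on the extended gesture library."""
    writer = csv.writer(fp)
    for features in samples:
        writer.writerow([label] + list(features))

# Demonstrated with an in-memory buffer; a real run would open the
# dataset file in append mode instead.
buf = io.StringIO()
append_samples(buf, "please", [[0.1, 0.2, 0.3], [0.11, 0.19, 0.31]])
print(buf.getvalue())
```

After adding samples for a new sign, retraining the model on the extended dataset lets the system recognize it alongside the existing gestures.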
This repository contains the source code, Jupyter notebook, and resources required to train and deploy the sign language detection system.