This is a BTech Final Year Project focused on Sign Language Recognition.
The project uses Python, OpenCV, and Deep Learning to capture, process, and classify hand gestures.
Key functionality includes:
- Hand detection and histogram creation
- Gesture image collection and preprocessing
- Training and evaluating a CNN model for gesture recognition
- Real-time display of recognized gestures
Features:
- Capture hand gestures using a webcam
- Preprocess images and extract features using HSV color histograms
- Train a Convolutional Neural Network (CNN) for gesture classification
- Recognize and display gestures in real time
Technologies Used:
- Python
- OpenCV
- NumPy
- TensorFlow / Keras (for the CNN model)
- Pickle (for saving hand histograms)
- Jupyter Notebooks
Hand Histogram Setup:
- Run `set_hand_histogram.ipynb` to capture your hand and generate a histogram.
- Press `C` to capture the histogram and `S` to save and exit.
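For orientation, the sketch below shows how such a hand histogram is typically built with OpenCV: a 2D hue-saturation histogram over a hand-sized region, normalized and pickled to disk. The webcam index, region coordinates, and variable names are illustrative assumptions, not the notebook's exact code.

```python
import pickle

import cv2

# Minimal sketch: build a hue-saturation histogram from a fixed hand region
# and pickle it, as the project does with its "hist" file.
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()

if ret:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    roi = hsv[100:300, 100:300]  # assume the hand fills this square
    # 2D histogram over the hue (0-179) and saturation (0-255) channels
    hist = cv2.calcHist([roi], [0, 1], None, [180, 256], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    with open("hist", "wb") as f:
        pickle.dump(hist, f)
```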
Create Gestures:
- Run `Create_gesture.ipynb` to capture gesture images for the dataset.
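The saved histogram lets the capture step isolate the hand via back-projection before saving images. A rough sketch of that pipeline, with paths, frame count, and kernel sizes chosen purely for illustration:

```python
import os
import pickle

import cv2

# Rough sketch: segment the hand by back-projecting the saved histogram,
# threshold the result, and save the binary images as dataset samples.
with open("hist", "rb") as f:
    hist = pickle.load(f)

os.makedirs("gestures", exist_ok=True)
cap = cv2.VideoCapture(0)
for count in range(10):
    ret, frame = cap.read()
    if not ret:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the hand histogram: bright pixels match hand colors.
    dst = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    blur = cv2.GaussianBlur(dst, (11, 11), 0)
    _, thresh = cv2.threshold(blur, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite(f"gestures/gesture_{count}.jpg", thresh)
cap.release()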
Preprocessing:
- Use `Rotate_images.ipynb` and `load_images.ipynb` to augment and load images.
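One common augmentation of this kind is mirroring each captured image to enlarge the dataset. A minimal sketch, assuming grayscale JPEGs under a gestures/ folder (the folder layout is an assumption):

```python
import glob

import cv2

# Minimal sketch: double the dataset by mirroring each image horizontally.
for path in glob.glob("gestures/*.jpg"):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    flipped = cv2.flip(img, 1)  # 1 = flip around the vertical axis
    cv2.imwrite(path.replace(".jpg", "_flipped.jpg"), flipped)
```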
CNN Training:
- Train the model using `cnn_modle_train.ipynb`.
- The trained model is saved as `cnn_model_keras2.h5`.
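For reference, a small Keras CNN of the kind used for this sort of gesture classification is sketched below. The 50x50 grayscale input shape and the class count are assumptions, not necessarily the notebook's exact architecture.

```python
from tensorflow.keras.layers import (Conv2D, Dense, Dropout, Flatten,
                                     MaxPooling2D)
from tensorflow.keras.models import Sequential

num_classes = 26  # assumed number of gesture classes

# Two convolution/pooling stages followed by a dense classifier head.
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(50, 50, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# With the train/val arrays loaded as NumPy tensors:
# model.fit(train_images, train_labels, epochs=10,
#           validation_data=(val_images, val_labels))
# model.save("cnn_model_keras2.h5")
```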
Display Gestures:
- Run `display_gesturers.ipynb` or `final.ipynb` for real-time gesture recognition.
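The real-time loop reads webcam frames, resizes them to the model's input shape, and overlays the prediction. A sketch under the same assumed 50x50 grayscale input; mapping the class index to a gesture name is left out:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Sketch of the real-time loop: grab a frame, preprocess it to the assumed
# model input, and overlay the predicted class index. Press Q to quit.
model = load_model("cnn_model_keras2.h5")
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    img = cv2.resize(gray, (50, 50)).astype("float32") / 255.0
    pred = model.predict(img.reshape(1, 50, 50, 1), verbose=0)
    label = int(np.argmax(pred))  # map to a gesture name via your label list
    cv2.putText(frame, f"Gesture: {label}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Sign Language Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```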
Installation:
- Install the dependencies: `pip install opencv-python numpy tensorflow keras`
- Launch Jupyter Notebook in the gestures/ folder: `jupyter notebook`
- Run the notebooks in this order:
  - `set_hand_histogram.ipynb`
  - `Create_gesture.ipynb`
  - `Rotate_images.ipynb`
  - `cnn_modle_train.ipynb`
  - `display_gesturers.ipynb` or `final.ipynb`
Output Files:
- `hist` → Hand histogram file
- `cnn_model_keras2.h5` → Trained CNN model
- Gesture images → stored in `train_images`, `val_images`, `test_images`
- Labels → `train_labels`, `val_labels`, `test_labels`
Future Scope:
- Enhance the CNN model for higher accuracy
- Real-time sentence recognition using multiple gestures
- Integrate text-to-speech for sign language output
- Deploy as a web or mobile application
Contributors:
- Riteeka Purnekar
- Richa Patil
- Nilesh Mahajan