Software that recognizes the American Sign Language alphabet in real time from a webcam feed using OpenCV and a CNN model
• Support for 25 alphabet letters (Z is excluded because its gesture involves motion)
• Support for 3 extra gestures (delete, space, nothing)
• Real-time display of the prediction confidence
• Selection of a Region of Interest (ROI) in the live camera feed to detect letters from
Results.mp4
The model was trained on the ASL Alphabet dataset from Kaggle.
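The dataset is organized as one folder of images per class. A minimal sketch of how it could be loaded for training with Keras is shown below; the directory path, 64×64 input size, batch size, and validation split are illustrative assumptions, not necessarily what the Driver Notebook uses.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

DATA_DIR = "asl_alphabet_train"   # hypothetical path to the extracted Kaggle training folder
IMG_SIZE = (64, 64)               # assumed model input size; the notebook may differ

# Rescale pixel values to [0, 1] and hold out 10% of the images for validation
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.1)

train_gen = datagen.flow_from_directory(
    DATA_DIR, target_size=IMG_SIZE, batch_size=64,
    class_mode="categorical", subset="training")
val_gen = datagen.flow_from_directory(
    DATA_DIR, target_size=IMG_SIZE, batch_size=64,
    class_mode="categorical", subset="validation")

# Class-name -> index mapping, needed later to decode predictions into letters
print(train_gen.class_indices)
```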
Download the Driver Notebook¹ and the model weights
Run the notebook cells and wait for the live camera feed to start
Select the ROI
Start trying different gestures
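Under the hood, the notebook's loop is roughly equivalent to the following sketch of ROI selection, per-frame prediction, and confidence overlay. The weights filename, label order, and 64×64 input size here are assumptions for illustration; match them to the files you downloaded.

```python
import cv2
import numpy as np
import tensorflow as tf

# Hypothetical weights file and label order; adjust to the downloaded model
model = tf.keras.models.load_model("asl_model.h5")
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1) if chr(c) != "Z"]
LABELS += ["del", "nothing", "space"]    # assumed extra classes
IMG_SIZE = (64, 64)                      # assumed model input size

cap = cv2.VideoCapture(0)                # open the default webcam
ok, frame = cap.read()
# Let the user drag a Region of Interest on the first frame
x, y, w, h = cv2.selectROI("Select ROI", frame, showCrosshair=True)
cv2.destroyWindow("Select ROI")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[y:y + h, x:x + w]
    # Preprocess the ROI to match the model's training input
    inp = cv2.resize(roi, IMG_SIZE).astype("float32") / 255.0
    probs = model.predict(inp[None, ...], verbose=0)[0]
    idx = int(np.argmax(probs))
    text = f"{LABELS[idx]}  {probs[idx]:.0%}"   # predicted gesture + confidence

    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, text, (x, y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("ASL recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```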
¹ Any IPython notebook environment will work; however, Jupyter Notebook from Anaconda is recommended.