Sign language is used by people with hearing or speech impairments to communicate among themselves or with hearing people.
Our project aims to improve on existing models and optimize them for higher efficiency and accuracy.
Image detection and classification are performed by a deep learning model called VGG-16. The input image is first converted to grayscale; the model then recognizes the patterns formed by its pixels and uses them to classify the image into one of 29 classes: the 26 letters of the alphabet, delete, space, and nothing (no input).
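As a rough sketch of this pipeline, the following Keras code builds a VGG-16 classifier over grayscale inputs. The 64×64 resolution, the helper names (`build_vgg16_classifier`, `preprocess`), and the training configuration are illustrative assumptions, not details taken from the project.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 29  # 26 letters + delete + space + nothing (no input)
IMG_SIZE = 64     # assumed input resolution; not specified in the text


def build_vgg16_classifier():
    """VGG-16 convolutional base with a small dense head for sign classification."""
    # weights=None because the pretrained ImageNet weights expect 3-channel RGB,
    # while the pipeline described above feeds single-channel grayscale images.
    base = tf.keras.applications.VGG16(
        weights=None,
        include_top=False,
        input_shape=(IMG_SIZE, IMG_SIZE, 1),
    )
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model


def preprocess(image_bgr):
    """Convert a BGR camera frame to the grayscale tensor the model expects."""
    import cv2  # assumed dependency for capturing and converting frames

    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (IMG_SIZE, IMG_SIZE))
    # Scale to [0, 1] and add batch and channel axes: shape (1, 64, 64, 1).
    return gray.astype("float32")[None, ..., None] / 255.0
```

Disabling the pretrained weights is one way to accommodate the grayscale input described above; an alternative is to replicate the single channel three times and keep the ImageNet initialization.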