
Using Computer Vision and Deep Learning to Aid the Deaf

EasyChair Preprint no. 6816

5 pages · Date: October 9, 2021


This paper discusses the use of computer vision and machine learning to create a sign language translator that helps the deaf convey their message to the general public. The majority of the world does not understand sign language, which makes it harder for the deaf to have everyday interactions. By applying these technologies, we can bridge this gap and make life easier for both the less abled and the abled. Detecting the gestures made by a person using sign language makes it possible to translate them into a spoken language, making sign language translatable like any other language. Giving a voice to people who were not born with one is the primary goal of this technology. We propose a solution that requires no bulky equipment or any new modifications for the translation.

Keyphrases: computer vision, contour detection, convolutional neural networks, deep learning, gesture detection, gesture recognition, machine learning, OpenCV, Python, sign language, sign language translator

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:6816,
  author = {V Vivek and L V Vishak and Vipin R Bharadwaj and H S Gururaja},
  title = {Using Computer Vision and Deep Learning to Aid the Deaf},
  howpublished = {EasyChair Preprint no. 6816},
  year = {EasyChair, 2021}}