Abstract: Communication is an integral part of human life, but for people who are mute or hearing impaired it is a challenge. To understand them, one has to learn their language, i.e., sign language or fingerspelling. The system proposed in this project aims to tackle this problem to some extent. The motivation behind this work is two-fold: to create an object-tracking application for interacting with the computer and to develop a virtual human-computer interaction device. The system offers two modes of operation, Teach and Learn. It uses a webcam to recognize hand positions and the sign being made through contour recognition [3], and displays the corresponding text on the PC for the gesture made. The captured gesture is then converted into audio output, so that people who do not know sign language can understand exactly what is being conveyed. Thus, our project, Sign Language to Speech Converter, aims to convert sign language into text and audio.


Keywords: gesture recognition, image processing, edge detection, grey-scale image
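
The abstract outlines a contour-based pipeline from webcam frames to spoken output. The sketch below illustrates one way such a pipeline can be wired together, assuming OpenCV for capture and contour extraction and pyttsx3 for offline text-to-speech; the region of interest, the defect-depth threshold, and the finger-count-to-word mapping are illustrative assumptions, not the paper's actual recognition method.

    # Minimal sketch of a contour-based gesture-to-speech loop.
    # Assumptions: OpenCV (cv2) for image processing, pyttsx3 for speech output.
    import cv2
    import pyttsx3

    engine = pyttsx3.init()          # offline text-to-speech engine
    cap = cv2.VideoCapture(0)        # webcam input

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[100:400, 100:400]                    # assumed hand region
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)     # grey-scale image
        blur = cv2.GaussianBlur(gray, (5, 5), 0)
        _, thresh = cv2.threshold(blur, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            hand = max(contours, key=cv2.contourArea)    # largest contour = hand
            hull = cv2.convexHull(hand, returnPoints=False)
            defects = cv2.convexityDefects(hand, hull)
            # Count deep convexity defects (gaps between extended fingers);
            # depth is fixed-point, so 10000 is roughly 39 pixels.
            gaps = 0 if defects is None else sum(
                1 for i in range(defects.shape[0]) if defects[i, 0, 3] > 10000)
            # Hypothetical mapping from defect count to a spoken word.
            words = {0: "fist", 1: "two", 2: "three", 3: "four", 4: "five"}
            if gaps in words:
                engine.say(words[gaps])
                engine.runAndWait()
        cv2.imshow("hand", thresh)
        if cv2.waitKey(30) & 0xFF == 27:                 # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()

In practice, the mapping from hand contour to sign would be replaced by the project's own contour-recognition and classification step; the sketch only shows how webcam capture, grey-scale thresholding, contour analysis, and audio output fit together.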