A Machine-Vision Technique for Automated American Sign-Language Alphabets Recognition


Authors:
Aaron R. Rababaah
Abstract
Exploiting recent advances in man-machine interfaces and machine intelligence has become a challenge in many fields. In particular, gesture-based interaction, both between humans and between humans and machines, is growing rapidly, especially in the area of sign language interpretation. Statistics in the United States of America strongly suggest that the population of deaf and mute people is rising and that more people need to be trained in American Sign Language (ASL) to bridge the gap. Furthermore, electronic devices such as TVs, PCs, PDAs, robots, and cameras are increasingly built to read users' gestures and respond to their commands. It is therefore of great interest to conduct research in this area and to propose efficient and effective solutions for gesture-based man-machine interaction. In this study, a system for automatic recognition of the American Sign Language alphabet was proposed and developed. The adopted approach differs from previous work in that it uses a multi-color encoding scheme to establish distinct signatures, or patterns, for the different hand signs. A series of image and vector processing operations transforms a visual hand gesture into a spoken letter and displayed text. The domain and scope of this study is the standard American Sign Language alphabet. Experimental results indicate that the developed system is effective, with an accuracy greater than 93%.
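The abstract only outlines the recognition pipeline. As a rough illustration of the two steps it names, color-based segmentation of a multi-color encoded hand and matching of the resulting signature vector, the following minimal Python/OpenCV sketch may help; the HSV ranges, the centroid-based signature, and the nearest-neighbor matcher are all illustrative assumptions, not details taken from the paper.

import cv2
import numpy as np

# Hypothetical HSV ranges for a multi-color encoded glove; the paper's
# actual color scheme and thresholds are not given in the abstract.
COLOR_RANGES = {
    "red":   ((0, 120, 70),   (10, 255, 255)),
    "green": ((40, 70, 70),   (80, 255, 255)),
    "blue":  ((100, 120, 70), (130, 255, 255)),
}

def color_signature(bgr_image):
    """Segment each marker color and return the centroids of the
    colored regions as one flat feature vector (illustrative only)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    features = []
    for name, (lo, hi) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        m = cv2.moments(mask)
        if m["m00"] > 0:   # marker visible: record its centroid
            features.extend([m["m10"] / m["m00"], m["m01"] / m["m00"]])
        else:              # marker occluded for this hand sign
            features.extend([-1.0, -1.0])
    return np.array(features)

def classify(signature, templates):
    """Nearest-neighbor match of a signature against per-letter template
    vectors (a stand-in for the paper's vector processing stage)."""
    return min(templates, key=lambda letter: np.linalg.norm(signature - templates[letter]))

Given a dictionary of per-letter template vectors built from training images, classify(color_signature(frame), templates) would return the closest ASL letter for a captured frame under these assumptions.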
Keywords
Machine Vision; Machine Intelligence; American Sign Language; Auto Recognition; Image Processing; Color-based Segmentation
Pages
159–168