J. Today’s Ideas - Tomorrow’s Technol.

Indian Sign Language Recognition System for Differently-able People

Er. Kanika Goyal, Amitoj Singh

DOI https://doi.org/10.15415/jotitt.2014.22011
KEYWORDS

Indian Sign Language, OpenCV, Image Processing

PUBLISHED DATE December 2014
PUBLISHER The author(s) 2014. This article is published with open access at www.chitkara.edu.in/publications
ABSTRACT

Sign languages commonly develop in deaf communities, which can include interpreters, friends and families of deaf people, as well as people who are deaf or hard of hearing themselves. Sign language recognition is one of the fastest-growing fields of research today, and many new techniques have been developed in it recently. In this paper, we propose a system for converting Indian Sign Language to text using OpenCV. OpenCV can generate motion template images that can be used to rapidly determine where motion occurred, how it occurred, and in which direction. OpenCV also supports static gesture recognition: it can locate the hand position, determine its orientation (right or left) in the image, and create a hand mask image. We use image processing, in which images captured in digital form are processed by a digital computer; this lets us enhance the quality of a picture so that it looks better. Our aim is to design a human-computer interface system that can accurately recognize the language of the deaf and dumb.
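
The motion-template facility referred to above is a standard part of OpenCV (the cv2.motempl module shipped with opencv-contrib-python). The sketch below is a minimal, illustrative version of that pipeline rather than the paper's implementation: the camera index, frame-difference threshold and history duration are assumptions chosen for the example.

```python
# Minimal motion-template sketch (assumes opencv-contrib-python for cv2.motempl).
import cv2
import numpy as np

MHI_DURATION = 0.5      # seconds a motion trace persists in the history image
DIFF_THRESHOLD = 32     # frame-difference threshold for the motion silhouette

cap = cv2.VideoCapture(0)                      # illustrative source: default camera
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
mhi = np.zeros(prev_gray.shape, np.float32)    # motion history image

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # "Where" motion occurred: pixels that changed since the previous frame.
    silhouette = cv2.threshold(cv2.absdiff(gray, prev_gray),
                               DIFF_THRESHOLD, 1, cv2.THRESH_BINARY)[1]
    timestamp = cv2.getTickCount() / cv2.getTickFrequency()

    # "How/when" it occurred: stamp the silhouette into the motion history image.
    mhi = cv2.motempl.updateMotionHistory(silhouette, mhi, timestamp, MHI_DURATION)

    # "In which direction": gradient of the history image and its global angle.
    mask, orientation = cv2.motempl.calcMotionGradient(mhi, 0.25, 0.05, apertureSize=3)
    angle = cv2.motempl.calcGlobalOrientation(orientation, mask, mhi,
                                              timestamp, MHI_DURATION)
    print("global motion direction: %.0f degrees" % angle)

    prev_gray = gray
```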

INTRODUCTION

One of the most precious gifts of nature to human beings is the ability to express themselves by responding to the events occurring in their surroundings.

Every normal person sees, listens and then reacts to situations by speaking out in the environment. But there are some less fortunate people who are deprived of these valuable gifts. Such impaired people rely on some form of sign language to communicate their feelings to others. The deaf, the dumb and the blind face similar problems and issues when it comes to the use of computers.

In the era of advanced technology, where computers, laptops and other processor-based devices are an integral part of day-to-day life, efforts are required to make the disabled more independent in life. In spite of that, there are people who are less fortunate than us and are physically impaired, be it through deafness or being aphonic. Such people lag behind their non-handicapped peers in using these technologies. These people have some expectations from researchers, and mostly from computer scientists: that we can provide some machine or model which helps them to communicate and express their feelings to others.

Very few researchers have kept them in mind and devoted their continuous work to such people. Communication is the most important part of life. Around 1% of the total population of the world suffers from hearing impairment, and their life is not as simple and easy as it is for humans without such limitations.

Finding an experienced and qualified interpreter every time is difficult and also unaffordable. Moreover, people who are not impaired never try to learn sign language for interacting with impaired people. This becomes a cause of isolation of the impaired. But if a system can be programmed in such a way that it can translate sign language to text, the gap between normal people and impaired people can be reduced.

Closely related to image processing are computer graphics and computer vision. In computer graphics, images are generated from physical models of objects and lighting instead of being acquired (via imaging devices such as video cameras) from natural scenes, as in most animated movies. Computer vision, on the other hand, is considered high-level image processing, in which a machine, computer or piece of software intends to decipher the physical contents of an image or a sequence of images (e.g., videos or 3D full-body magnetic resonance scans).

CONCLUSION

The system will provide an interface through which one can easily communicate with deaf people by means of sign language recognition. The system can be applied not only in the family environment but also in public. For social use, this system is very helpful for deaf and dumb people.

We will build a simple gesture recognizer based on the OpenCV toolkit and integrate it into the Visionary framework. As a "yes" gesture we will mark up-and-down hand motions, no matter which hand is used; a brief sketch of this idea follows below.
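
As a rough, standalone sketch of such a hand-agnostic "yes" gesture (not the Visionary integration itself), the example below segments a crude skin-colour hand mask, tracks the vertical position of its centroid, and reports "yes" when the direction of vertical movement reverses repeatedly. The HSV skin range, thresholds and window length are illustrative assumptions, not values from the paper, and the hand-mask step merely stands in for OpenCV's static gesture support mentioned in the abstract.

```python
# Illustrative "yes" (up-and-down) gesture sketch; assumes OpenCV 4.x.
import cv2
import numpy as np
from collections import deque

SKIN_LOW = np.array([0, 40, 60], np.uint8)     # rough HSV skin range (assumption)
SKIN_HIGH = np.array([25, 180, 255], np.uint8)
history = deque(maxlen=20)                     # recent centroid y-coordinates

def hand_centroid_y(frame):
    """Return the y-coordinate of the largest skin-coloured blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)            # hand mask image
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    return m["m01"] / m["m00"] if m["m00"] else None

def is_yes_gesture(ys, min_swing=40, min_reversals=2):
    """Up-and-down motion: the centroid's vertical direction reverses repeatedly."""
    diffs = np.diff(ys)
    reversals = np.sum(np.diff(np.sign(diffs[np.abs(diffs) > 2])) != 0)
    return (max(ys) - min(ys)) > min_swing and reversals >= min_reversals

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    y = hand_centroid_y(frame)
    if y is not None:
        history.append(y)
    if len(history) == history.maxlen and is_yes_gesture(list(history)):
        print("yes gesture detected")   # either hand works: only vertical motion is used
        history.clear()
```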
