Keywords: Sign language, Video sequence, Key frame, Interval-valued features
In this paper, we propose a model for recognizing the sign language used by communication-impaired people. A novel method of extracting features from a video sequence of signs is proposed. Key frames are selected from a given video shot of a sign to reduce computational complexity while retaining the information significant for recognition. A set of features is then extracted from each key frame to capture the trajectory of the hand movements made by the signer. The same sign made by different signers, or by the same signer at different instances, may exhibit variations. The concept of symbolic data, particularly interval-valued data, is used to capture such variations and to represent signs efficiently in the knowledge base. A suitable similarity measure is explored for matching and recognizing signs. A database of signs made by communication-impaired people of the Mysore region is created, and extensive experiments are conducted on this database to demonstrate the performance of the proposed approach.
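The interval-valued representation described above can be illustrated with a minimal sketch. The paper's actual features (hand-trajectory measurements per key frame) and its exact similarity measure are not reproduced here; the [min, max] aggregation over multiple instances of a sign and the containment-based similarity below are illustrative assumptions, with all function names hypothetical.

```python
def to_interval_representation(samples):
    """Aggregate several feature vectors of the same sign (one per
    signer/instance) into one vector of [min, max] intervals, so that
    inter-signer variation is captured by the interval width."""
    return [(min(vals), max(vals)) for vals in zip(*samples)]

def similarity(test_vector, reference_intervals):
    """Average per-feature similarity: 1.0 when the test value falls
    inside the reference interval, otherwise a score that decays with
    the distance to the nearest interval boundary (an assumed choice,
    not the paper's measure)."""
    scores = []
    for x, (lo, hi) in zip(test_vector, reference_intervals):
        if lo <= x <= hi:
            scores.append(1.0)
        else:
            gap = lo - x if x < lo else x - hi
            scores.append(1.0 / (1.0 + gap))
    return sum(scores) / len(scores)

# Three instances of the same sign, each described by three features:
instances = [[0.9, 2.1, 5.0], [1.1, 2.0, 5.2], [1.0, 2.3, 4.9]]
ref = to_interval_representation(instances)
# ref == [(0.9, 1.1), (2.0, 2.3), (4.9, 5.2)]
print(similarity([1.0, 2.2, 5.1], ref))  # every value inside -> 1.0
```

At recognition time, a test sign would be compared against the stored interval representation of every sign class in the knowledge base, and the class with the highest similarity would be reported.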
Nagendraswamy H. S., Chethana Kumara B. M., Guru D. S., Naresh Y. G., "Symbolic Representation of Sign Language at Sentence Level", IJIGSP, vol. 7, no. 9, pp. 49-60, 2015. DOI: 10.5815/ijigsp.2015.09.07
Bahare Jalilian, Abdolah Chalechale: Persian Sign Language Recognition Using Radial Distance and Fourier Transform. IJIGSP, vol. 6, no. 1, pp. 40-46, 2014. DOI: 10.5815/ijigsp.2014.01.06
Ding L., Martinez A. M.: Modeling and Recognition of the Linguistic Components in American Sign Language. Image and Vision Computing, vol. 27, no. 12, pp. 1826-1844, Nov. 2009.
Dorner: Hand Shape Identification and Tracking for Sign Language Interpretation. IJCAI Workshop on Looking at People, 1993.
Guru D. S., Nagendraswamy H. S.: Symbolic Representation of Two-Dimensional Shapes. Pattern Recognition Letters, Jan. 2007.
Helen Cooper, Eng-Jon Ong, Nicolas Pugeault, Richard Bowden: Sign Language Recognition Using Sub-Units. Journal of Machine Learning Research, vol. 13, pp. 2205-2231, 2012.
Joyeeta Singha, Karen Das: Recognition of Indian Sign Language in Live Video. International Journal of Computer Applications, vol. 70, no. 19, May 2013.
Justus Piater, Thomas Hoyoux, Wei Du: Video Analysis for Continuous Sign Language Recognition. 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies (workshop at the 7th International Conference on Language Resources and Evaluation, LREC), Malta, 2010.
Mohan Kumar H. P., Nagendraswamy H. S.: Change Energy Image for Gait Recognition: An Approach Based on Symbolic Representation. IJIGSP, vol. 6, no. 4, pp. 1-8, 2014. DOI: 10.5815/ijigsp.2014.04.01
Nagendraswamy H. S., Guru D. S.: A New Method of Representing and Matching Two-Dimensional Shapes. International Journal of Image and Graphics, vol. 7, no. 2, pp. 377-405, 2007.
Naresh Y. G., Nagendraswamy H. S.: Representation and Classification of Medicinal Plant Leaves: A Symbolic Approach. Multimedia Processing, Communication and Computing Applications, Lecture Notes in Electrical Engineering, vol. 213, pp. 91-102, 2013.
Nayak S., Sarkar S., Loeding B.: Automated Extraction of Signs from Continuous Sign Language Sentences Using Iterated Conditional Modes. Computer Vision and Pattern Recognition (CVPR), pp. 2583-2590, 2009.
Richard Bowden, David Windridge, Timor Kadir, Andrew Zisserman, Michael Brady: A Linguistic Feature Vector for the Visual Interpretation of Sign Language.
Ruiduo Yang, Sudeep Sarkar: Handling Movement Epenthesis and Hand Segmentation Ambiguities in Continuous Sign Language Recognition Using Nested Dynamic Programming. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 3, pp. 462-477, 2010.
Suraj M. G., Guru D. S.: Appearance Based Recognition Methodology for Recognizing Fingerspelling Alphabets. IJCAI 2007, pp. 605-610, 2007.
Suraj M. G., Guru D. S.: Secondary Diagonal FLD for Fingerspelling Recognition. ICCTA 2007, pp. 693-697, 2007.
Sylvie C. W. Ong, Surendra Ranganath: Automatic Sign Language Analysis: A Survey and the Future Beyond Lexical Meaning. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 6, 2005.
Tsai D. M., Chen M. F.: Object Recognition by Linear Weight Classifier. Pattern Recognition Letters, vol. 16, pp. 591-600, 1995.
Tan Dat Nguyen, Surendra Ranganath: Recognizing Continuous Grammatical Marker Facial Gestures in Sign Language Video. Proceedings of the 10th Asian Conference on Computer Vision (ACCV'10), Part IV, pp. 665-676, Springer-Verlag, Berlin, Heidelberg, 2011.
Thad Starner, Joshua Weaver, Alex Pentland: Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, pp. 1371-1375, 1998.
Pingale Prerna Rambhau: Recognition of Two Hand Gestures of Word in British Sign Language (BSL). IJSRP, vol. 3, issue 10, October 2013.
Wu Jiangqin, Gao Wen, Song Yibo, Liu Wei, Pang Bo: A Simple Sign Language Recognition System Based on Data Glove. Fourth International Conference on Signal Processing Proceedings, 1998.
Jiangqin Wu, Wen Gao: The Recognition of Finger-Spelling for Chinese Sign Language. International Gesture Workshop on Gesture and Sign Languages in Human-Computer Interaction (GW '01), Springer-Verlag, 2001.
Saengsri S., Niennattrakul V., Ratanamahatana C. A.: TFRS: Thai Finger-Spelling Sign Language Recognition System. Second International Conference on Digital Information and Communication Technology and its Applications (DICTAP), pp. 457-462, 2012.
Starner T.: Visual Recognition of American Sign Language Using Hidden Markov Models. Master's Thesis, MIT Media Laboratory, Feb. 1995.