Detecting Happiness in Human Face using Unsupervised Twin-Support Vector Machines

Full Text (PDF, 2907KB), pp. 85-98



Manoj Prabhakaran Kumar 1,*, Manoj Kumar Rajagopal 1

1. School of Electronics Engineering, VIT University, Chennai, Tamil Nadu, India

* Corresponding author.


Received: 25 Sep. 2017 / Revised: 28 Oct. 2017 / Accepted: 27 Nov. 2017 / Published: 8 Aug. 2018

Index Terms

Constrained Local Model (CLM), Facial Animation Parameters (FAPs), Minimal Feature Vector Displacement, Twin-Support Vector Machines (TWSVM)


This paper aims to detect happiness in the human face using minimal feature vectors. In this system, face detection and tracking are carried out with a Constrained Local Model (CLM). From the CLM grid nodes, both the entire and the minimal feature-vector displacements are obtained from the extracted features. The feature-vector displacements are fed to a multi-class Twin-Support Vector Machine (TWSVM) classifier to evaluate happiness. For the training and testing phases, the MMI, Cohn-Kanade (CK), Extended Cohn-Kanade, and MAHNOB-Laughter databases are used, along with real-time data. This paper also compares the supervised Support Vector Machine and the unsupervised Twin-Support Vector Machine classifiers under cross-data validation. Using Min-Max and Z-norm normalization, the overall accuracy of detecting happiness is 86.29% and 83.79%, respectively.
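The two normalization techniques named in the abstract could be sketched as below. This is an illustrative sketch only, not the paper's implementation; the function names and the assumption that displacements arrive as a flat numeric array are ours.

```python
import numpy as np

def min_max_norm(x):
    """Rescale feature-vector displacements to the [0, 1] range."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def z_norm(x):
    """Standardize feature-vector displacements to zero mean, unit variance."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Hypothetical displacement values from tracked CLM grid nodes
displacements = np.array([0.0, 2.5, 5.0, 10.0])
scaled = min_max_norm(displacements)       # values in [0, 1]
standardized = z_norm(displacements)       # zero mean, unit variance
```

Either transform is applied to the displacement features before they are passed to the classifier, so that features on different scales contribute comparably.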

Cite This Paper

Manoj Prabhakaran Kumar, Manoj Kumar Rajagopal, "Detecting Happiness in Human Face using Unsupervised Twin-Support Vector Machines", International Journal of Intelligent Systems and Applications (IJISA), Vol. 10, No. 8, pp. 85-98, 2018. DOI: 10.5815/ijisa.2018.08.08

