International Journal of Image, Graphics and Signal Processing(IJIGSP)

ISSN: 2074-9074 (Print), ISSN: 2074-9082 (Online)

Published By: MECS Press

IJIGSP Vol.8, No.4, Apr. 2016

A Study of the Effect of Emotions and Software on Prosodic Features on Spoken Utterances in Urdu Language



Syed Abbas Ali, Maria Andleeb, Danish ur Rehman

Index Terms

Prosodic Features; Speech Emotion; Urdu Language; Two-way ANOVA Testing


Speech emotions are a potentially valuable source of information about human perception and decision-making processes. This paper analyzes the variation in and effect on the prosodic features (formant and pitch) of female and male speakers across two emotions (angry and neutral) and two software packages (PRAAT and MATLAB) in the Urdu language, using two-way ANOVA testing. The objective is to determine whether emotion and software have a significant effect on the prosodic features (pitch and formant), using speech recordings of male and female speakers of the same age group in Urdu. The two-way ANOVA results show that emotion has a significant effect on both pitch and formant in male and female voices, whereas the choice of software does not.
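The analysis described above can be sketched as a balanced two-way ANOVA with replication, where factor A is emotion (angry/neutral) and factor B is software (PRAAT/MATLAB). The sketch below is illustrative only: the pitch values are fabricated (not the paper's data), and the baseline means and noise level are assumptions chosen to mirror the reported finding.

```python
import numpy as np

def two_way_anova(data):
    """Two-way ANOVA with replication for a balanced design.

    data: array of shape (a, b, n) -- a levels of factor A (emotion),
    b levels of factor B (software), n replicate measurements per cell
    (e.g. pitch values in Hz). Returns F statistics for factor A,
    factor B, and the A x B interaction.
    """
    a, b, n = data.shape
    grand = data.mean()
    mean_A = data.mean(axis=(1, 2))   # per-emotion means
    mean_B = data.mean(axis=(0, 2))   # per-software means
    mean_AB = data.mean(axis=2)       # cell means

    ss_A = b * n * np.sum((mean_A - grand) ** 2)
    ss_B = a * n * np.sum((mean_B - grand) ** 2)
    ss_AB = n * np.sum(
        (mean_AB - mean_A[:, None] - mean_B[None, :] + grand) ** 2
    )
    ss_err = np.sum((data - mean_AB[:, :, None]) ** 2)

    ms_err = ss_err / (a * b * (n - 1))
    F_A = (ss_A / (a - 1)) / ms_err
    F_B = (ss_B / (b - 1)) / ms_err
    F_AB = (ss_AB / ((a - 1) * (b - 1))) / ms_err
    return F_A, F_B, F_AB

# Fabricated example: angry speech raised in mean pitch, with the two
# tools producing near-identical measurements of the same utterances.
rng = np.random.default_rng(0)
pitch = np.empty((2, 2, 10))
base = {0: 220.0, 1: 180.0}           # 0 = angry, 1 = neutral (assumed Hz)
for emo in (0, 1):
    for sw in (0, 1):                 # 0 = PRAAT, 1 = MATLAB
        pitch[emo, sw] = base[emo] + rng.normal(0.0, 5.0, 10)

F_emotion, F_software, F_inter = two_way_anova(pitch)
print(F_emotion > F_software)  # emotion factor dominates in this sketch
```

Comparing each F statistic against the critical value of the F distribution (at the chosen significance level and corresponding degrees of freedom) then decides which factors have a significant effect, which is the decision rule behind the paper's conclusion.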

Cite This Paper

Syed Abbas Ali, Maria Andleeb, Danish ur Rehman, "A Study of the Effect of Emotions and Software on Prosodic Features on Spoken Utterances in Urdu Language", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol.8, No.4, pp.46-53, 2016. DOI: 10.5815/ijigsp.2016.04.06

