Comparing Performance of Supervised Learning Classifiers by Tuning the Hyperparameter on Face Recognition



Author(s)

M. Ilham Rizqyawan 1,*, Ulfah Nadiya 1, Aris Munandar 1, Jony Winaryo Wibowo 1, Oka Mahendra 1, Irfan Asfy Fakhry Anto 1, Rian Putra Pratama 1, Muhammad Arifin 1, Hanif Fakhrurroja 1

1. Technical Implementation Unit for Instrumentation Development, Indonesian Institute of Sciences, Bandung, 40135, Indonesia

* Corresponding author.

DOI: https://doi.org/10.5815/ijisa.2021.05.01

Received: 12 Jul. 2021 / Revised: 10 Aug. 2021 / Accepted: 29 Aug. 2021 / Published: 8 Oct. 2021

Index Terms

Classifier, Hyperparameter, Face Recognition

Abstract

In this era, face recognition technology is an important component widely used in many aspects of life, mostly for biometric personal identification. A face recognition system has three main steps: face detection, face embedding, and classification. Classification plays a vital role in enabling the system to recognize a face accurately. With the growing need for face recognition applications, the need for machine learning methods that classify images accurately is also increasing. One way to increase the performance of a classifier is to tune its hyperparameters. In this study, classification performance is evaluated to obtain the best classifier among four algorithms (decision tree, SVM, random forest, and AdaBoost) for a specific dataset by tuning the hyperparameters. The best classifier is obtained by evaluating each classifier in terms of training time, accuracy, precision, recall, and F1-score. The study used a dataset of 2267 facial data points (128-D vector space) derived from the face embedding process. The results showed that SVM is the best classifier, with a training time of 0.5 s and accuracy, precision, recall, and F1-scores of about 98%.
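The comparison described in the abstract can be sketched with scikit-learn. This is not the authors' code: the synthetic 128-D vectors stand in for the paper's 2267 face embeddings, and the hyperparameter grids below are illustrative assumptions, not the study's actual search spaces.

```python
# Hedged sketch of the study's setup: grid-search hyperparameters for four
# classifiers and compare them on held-out data. Dataset and grids are
# placeholders, not the paper's.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import f1_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

# Synthetic stand-in for 2267 face embeddings (128-D), 10 identities
X, y = make_classification(n_samples=2267, n_features=128,
                           n_informative=64, n_classes=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Each entry: (estimator, illustrative hyperparameter grid)
models = {
    "decision_tree": (DecisionTreeClassifier(random_state=0),
                      {"max_depth": [5, 10, None]}),
    "svm": (SVC(), {"C": [1, 10], "kernel": ["linear", "rbf"]}),
    "random_forest": (RandomForestClassifier(random_state=0),
                      {"n_estimators": [50, 100]}),
    "adaboost": (AdaBoostClassifier(random_state=0),
                 {"n_estimators": [50, 100]}),
}

results = {}
for name, (clf, grid) in models.items():
    search = GridSearchCV(clf, grid, cv=3, n_jobs=-1)  # tune on training set
    search.fit(X_tr, y_tr)
    pred = search.predict(X_te)
    results[name] = f1_score(y_te, pred, average="weighted")

best = max(results, key=results.get)
print(best, round(results[best], 3))
```

On real embeddings one would also record training time per classifier (e.g. with `time.perf_counter()` around `fit`) and report accuracy, precision, and recall alongside F1, as the study does.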

Cite This Paper

M. Ilham Rizqyawan, Ulfah Nadiya, Aris Munandar, Jony Winaryo Wibowo, Oka Mahendra, Irfan Asfy Fakhry Anto, Rian Putra Pratama, Muhammad Arifin, Hanif Fakhrurroja, "Comparing Performance of Supervised Learning Classifiers by Tuning the Hyperparameter on Face Recognition", International Journal of Intelligent Systems and Applications (IJISA), Vol.13, No.5, pp.1-13, 2021. DOI: 10.5815/ijisa.2021.05.01
