Convolutional Neural Network Approach for Identity Verification in Computer-Based Testing Exams in Nigeria

Author(s)

Ogochukwu C. Okeke 1, Anthony T. Umerah 2,*, Ike J. Mgbeafulike 1, Osita M. Nwakeze 1

1. Department of Computer Science, Chukwuemeka Odumegwu Ojukwu University, Uli, Anambra State, Nigeria

2. Department of Computer Engineering, Federal University of Technology, Owerri, Imo State, Nigeria

* Corresponding author.

DOI: https://doi.org/10.5815/ijmsc.2025.04.05

Received: 11 Aug. 2025 / Revised: 22 Oct. 2025 / Accepted: 24 Nov. 2025 / Published: 8 Dec. 2025

Index Terms

Computer-Based Testing (CBT), Exam Cheating, Convolutional Neural Networks (CNNs), Facial Biometric, Exam Integrity, Nigeria

Abstract

Computer-Based Testing (CBT) has gained prominence in Nigeria because of its efficiency and scalability in evaluating students across educational institutions. However, forms of exam cheating such as candidate swapping and unauthorised assistance threaten its integrity. This research explores the application of Convolutional Neural Networks (CNNs) to identity verification in Nigerian CBT environments and presents a CNN-driven facial biometric model based on the findings. The model extracts examinees' facial features from real-time video of CBT exam sessions and compares them with pre-registered data to verify test takers' identities and to detect and report candidate swapping and unauthorised assistance while the exam is in progress. The model is trained on diverse datasets, including VGGFace2 and the CASIA African Face Dataset, to improve fairness and accuracy for African demographics, ensuring its effectiveness in Nigerian CBT and other local contexts. The model was evaluated and compared with existing systems and other biometric methods. The assessment involved 2,000 genuine and 3,000 impostor samples, and the model achieved 99.52% accuracy with precision of 0.998 and recall of 0.99. These results demonstrate the model's high accuracy, low false acceptance rate, and minimal false rejection rate, and highlight its viability for maintaining exam integrity and accessibility.
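As a rough illustration of the verification and evaluation steps summarised above, the sketch below matches a live face embedding against the template captured at registration and derives false acceptance and false rejection rates from genuine and impostor similarity scores. It is a minimal sketch, not the paper's implementation: the cosine-similarity matcher, the 0.6 decision threshold, and the synthetic score distributions are assumptions made for demonstration, and the CNN that produces the embeddings is assumed to exist upstream.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two face embeddings (e.g. CNN feature vectors).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    # Accept the examinee only if the live frame matches the enrolled template.
    # The 0.6 threshold is an assumed placeholder, not the paper's operating point.
    return cosine_similarity(live, enrolled) >= threshold

def far_frr(genuine_scores, impostor_scores, threshold: float):
    # False Acceptance Rate: share of impostor comparisons wrongly accepted.
    # False Rejection Rate: share of genuine comparisons wrongly rejected.
    genuine_scores = np.asarray(genuine_scores)
    impostor_scores = np.asarray(impostor_scores)
    far = float(np.mean(impostor_scores >= threshold))
    frr = float(np.mean(genuine_scores < threshold))
    return far, frr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic similarity scores standing in for real embedding comparisons,
    # sized like the evaluation (2,000 genuine and 3,000 impostor trials).
    genuine = rng.normal(0.85, 0.05, 2000)
    impostor = rng.normal(0.35, 0.10, 3000)
    far, frr = far_frr(genuine, impostor, threshold=0.6)
    print(f"FAR={far:.4f}, FRR={frr:.4f}")
```

In practice the decision threshold would be tuned on a validation set to trade off false acceptances against false rejections before deployment in an exam hall.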

Cite This Paper

Ogochukwu C. Okeke, Anthony T. Umerah, Ike J. Mgbeafulike, Osita M. Nwakeze, "Convolutional Neural Network Approach for Identity Verification in Computer-Based Testing Exams in Nigeria", International Journal of Mathematical Sciences and Computing (IJMSC), Vol. 11, No. 4, pp. 50-65, 2025. DOI: 10.5815/ijmsc.2025.04.05
