Work place: Department of Mathematics and Computer Science, Benue State University, Nigeria
E-mail: sotor@bsum.edu.ng
ORCID: http://orcid.org/0000-0001-6943-7137
Research Interests: network security, intrusion detection models, mobile networks, artificial intelligence, data science, and the Internet of Things (IoT)
Biography
Samera Uga Otor is a distinguished academic and researcher, currently serving as a Senior Lecturer at Benue State University, Makurdi, where she has been involved in teaching and research since 2010. She holds a Ph.D. in Computer Science from Obafemi Awolowo University, Ile-Ife, specializing in cybersecurity and optimization. Dr. Otor has authored several peer-reviewed publications focusing on network security, intrusion detection models, mobile networks, artificial intelligence, data science, and the Internet of Things (IoT). Additionally, she contributes actively to academic service through roles in program development, student supervision, and committee memberships. She is also a member of professional organizations including Nigeria Women in Information Technology, the Nigeria Computer Society, and the Computer Professionals Registration Council of Nigeria.
By Isaac Terngu Adom, Christiana O. Julius, Stephen Akuma, Samera U. Otor
DOI: https://doi.org/10.5815/ijieeb.2025.06.05, Pub. Date: 8 Dec. 2025
Machine learning models that lack transparency can lead to biased conclusions and decisions in automated systems across various domains. To address this issue, explainable AI (XAI) frameworks such as Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) have emerged, offering interpretable insights into machine learning model decisions. This study presents a thorough comparison of LIME and SHAP applied to a Random Forest model trained on a loan dataset, which achieved an accuracy of 85%, precision of 84%, recall of 97%, and an F1 score of 90%. The study's primary contributions are as follows: (1) using Shapley values, which represent the contribution of each feature, to show that SHAP provides deeper and more reliable feature attributions than LIME; (2) demonstrating that LIME lacks the sophisticated interpretability of SHAP, despite offering faster and more generalizable explanations across various model types; (3) quantitatively comparing computational efficiency, where LIME achieves a faster runtime of 0.1486 seconds using 9.14 MB of memory, compared to SHAP's 0.3784 seconds using 1.2 MB of memory. By highlighting the trade-offs between LIME and SHAP in terms of interpretability, computational complexity, and applicability to various computer systems, this study contributes to the field of XAI. The outcome helps stakeholders better understand and trust AI-driven loan decisions, advancing the development of transparent and responsible AI systems in finance.
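The Shapley values underpinning SHAP average each feature's marginal contribution over all possible feature subsets. As a minimal sketch of that idea (not the SHAP library itself), the exact computation can be shown on a hypothetical additive loan-scoring function with three illustrative features; the `score` weights below are invented for demonstration only:

```python
from itertools import combinations
from math import factorial

def score(present):
    """Toy loan-approval score: sum of hypothetical weights for the features present."""
    weights = {"income": 0.4, "credit_history": 0.5, "debt": -0.2}
    return sum(weights[f] for f in present)

def shapley(features, value_fn):
    """Exact Shapley values: weighted average of each feature's marginal contribution
    over every subset of the remaining features."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley kernel weight for a subset of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value_fn(set(subset) | {f}) - value_fn(set(subset)))
        phi[f] = total
    return phi

features = ["income", "credit_history", "debt"]
attributions = shapley(features, score)
```

For an additive scoring function like this one, each feature's Shapley value equals its weight, and the attributions sum to the full model's output (the efficiency property); for a real Random Forest, the SHAP library approximates the same quantity efficiently rather than enumerating all subsets.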