Work place: Department of Mathematics and Computer Science, Benue State University, Nigeria
E-mail: iadom@bsum.edu.ng
ORCID: https://orcid.org/0000-0003-1389-1255
Research Interests: Data Science, Data Mining, Artificial Intelligence, Knowledge Representation, Explainable Artificial Intelligence
Biography
Isaac Terngu Adom holds an M.Sc. in Computer Science from the University of Bonn, Germany, and a B.Sc. in Computer Science with First Class Honors from Ahmadu Bello University, Zaria, Nigeria. He is currently a University Lecturer and Researcher at the Department of Mathematics and Computer Science, Benue State University, Makurdi, Nigeria. His research interests are in Data Science, Data Mining, Artificial Intelligence, Knowledge Representation, and Explainable Artificial Intelligence. A member of the Nigeria Computer Society (NCS), Isaac has many years of industry, teaching, and research experience, during which he has worked on many projects and taught Computer Science courses; he currently supervises undergraduate research projects. An assiduous academic committed to imparting and creating knowledge, Isaac has published papers in reputable journals and presented at conferences. He has received six (6) accolades from different affiliations.
By Isaac Terngu Adom, Christiana O. Julius, Stephen Akuma, Samera U. Otor
DOI: https://doi.org/10.5815/ijieeb.2025.06.05, Pub. Date: 8 Dec. 2025
Machine learning models that lack transparency can lead to biased conclusions and decisions in automated systems across various domains. To address this issue, explainable AI (XAI) frameworks such as Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) have emerged, offering interpretable insights into machine learning model decisions. This study presents a thorough comparison of LIME and SHAP applied to a Random Forest model trained on a loan dataset; the model achieved an accuracy of 85%, a precision of 84%, a recall of 97%, and an F1 score of 90%. This study's primary contributions are as follows: (1) using Shapley values, which represent the contribution of each feature, to show that SHAP provides deeper and more reliable feature attributions than LIME; (2) demonstrating that LIME lacks the sophisticated interpretability of SHAP, despite offering faster and more generalizable explanations across various model types; (3) quantitatively comparing computational efficiency, where LIME shows a faster runtime of 0.1486 seconds using 9.14 MB of memory, compared to SHAP's computational time of 0.3784 seconds using 1.2 MB of memory. By highlighting the trade-offs between LIME and SHAP in terms of interpretability, computational complexity, and applicability to various computer systems, this study contributes to the field of XAI. The outcome helps stakeholders better understand and trust AI-driven loan decisions, which advances the development of transparent and responsible AI systems in finance.
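The sketch below illustrates the kind of comparison the abstract describes: explaining a Random Forest classifier's prediction with both LIME and SHAP and timing each explainer. It is a minimal sketch only; the synthetic data, feature names (income, loan_amount, credit_history, age), and hyperparameters are illustrative assumptions, not the paper's actual dataset or configuration.

```python
# Minimal LIME-vs-SHAP sketch on a Random Forest, assuming generic tabular
# loan data. All data, feature names, and settings here are hypothetical.
import time

import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "loan_amount", "credit_history", "age"]  # hypothetical
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] - 0.5 * X[:, 1] + X[:, 2] > 0).astype(int)  # synthetic approve/deny label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# LIME: perturbs a single instance and fits a local interpretable surrogate.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, class_names=["deny", "approve"]
)
t0 = time.perf_counter()
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba)
print(f"LIME: {time.perf_counter() - t0:.4f}s", lime_exp.as_list())

# SHAP: Shapley-value feature attributions for tree ensembles via TreeExplainer.
shap_explainer = shap.TreeExplainer(model)
t0 = time.perf_counter()
shap_values = shap_explainer.shap_values(X_test[:1])
print(f"SHAP: {time.perf_counter() - t0:.4f}s", shap_values)
```

In a full experiment, memory usage could additionally be tracked (for example with Python's tracemalloc) and the SHAP values aggregated over the whole test set to obtain global feature importance alongside LIME's per-instance weights.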