Comparative Analysis of Explainable AI Frameworks (LIME and SHAP) in Loan Approval Systems



Author(s)

Isaac Terngu Adom 1,*, Christiana O. Julius 1, Stephen Akuma 1, Samera U. Otor 1

1. Department of Mathematics and Computer Science, Benue State University, Nigeria

* Corresponding author.

DOI: https://doi.org/10.5815/ijieeb.2025.06.05

Received: 22 Nov. 2024 / Revised: 5 Feb. 2025 / Accepted: 23 May 2025 / Published: 8 Dec. 2025

Index Terms

Explainable AI, Interpretability, LIME, Loan Approval, Machine Learning, SHAP

Abstract

Machine learning models that lack transparency can lead to biased conclusions and decisions in automated systems across various domains. To address this issue, explainable AI (XAI) frameworks such as Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) have emerged, offering interpretable insights into machine learning model decisions. This study presents a thorough comparison of LIME and SHAP applied to a Random Forest model trained on a loan dataset; the model achieved an accuracy of 85%, a precision of 84%, a recall of 97%, and an F1 score of 90%. The primary contributions of this study are as follows: (1) using Shapley values, which quantify the contribution of each feature to a prediction, to show that SHAP provides deeper and more reliable feature attributions than LIME; (2) demonstrating that LIME, while offering faster and more generalizable explanations across various model types, lacks the sophisticated interpretability of SHAP; (3) quantitatively comparing computational efficiency, where LIME ran in 0.1486 seconds using 9.14 MB of memory, whereas SHAP required 0.3784 seconds using 1.2 MB of memory. By highlighting the trade-offs between LIME and SHAP in terms of interpretability, computational cost, and applicability to various systems, this study contributes to the field of XAI. The outcome helps stakeholders better understand and trust AI-driven loan decisions, advancing the development of transparent and responsible AI systems in finance.
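The following is a minimal, self-contained sketch of the kind of comparison the abstract describes: a Random Forest classifier is explained with both LIME and SHAP while runtime and peak memory are recorded. The paper's actual loan dataset, feature set, hyperparameters, and measurement method are not reproduced here, so the synthetic stand-in data, illustrative feature names, and tracemalloc-based memory tracking below are assumptions, and the resulting numbers will differ from those reported in the paper.

import time
import tracemalloc

import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in loan data: 5 numeric features, binary approve/deny label.
# (Hypothetical feature names; the paper's dataset is not available here.)
feature_names = ["income", "loan_amount", "credit_history",
                 "dependents", "loan_term"]
X, y = make_classification(n_samples=1000, n_features=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

instance = X_test[0]  # one loan application to explain

# --- LIME: fit a local surrogate model around a single prediction ---
tracemalloc.start()
t0 = time.perf_counter()
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["deny", "approve"], mode="classification")
lime_exp = lime_explainer.explain_instance(
    instance, model.predict_proba, num_features=5)
lime_time = time.perf_counter() - t0
_, lime_peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print("LIME feature weights:", lime_exp.as_list())
print(f"LIME: {lime_time:.4f} s, peak memory {lime_peak / 1e6:.2f} MB")

# --- SHAP: Shapley-value attributions via TreeExplainer ---
tracemalloc.start()
t0 = time.perf_counter()
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(instance.reshape(1, -1))
shap_time = time.perf_counter() - t0
_, shap_peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print("SHAP values:", np.asarray(shap_values).squeeze())
print(f"SHAP: {shap_time:.4f} s, peak memory {shap_peak / 1e6:.2f} MB")

The LIME explanation is a per-instance linear surrogate (feature, weight) list, while the SHAP values are additive attributions that sum to the difference between the model's output for this instance and its expected output, which is what makes direct side-by-side comparison of the two frameworks possible.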

Cite This Paper

Isaac Terngu Adom, Christiana O. Julius, Stephen Akuma, Samera U. Otor, "Comparative Analysis of Explainable AI Frameworks (LIME and SHAP) in Loan Approval Systems", International Journal of Information Engineering and Electronic Business (IJIEEB), Vol. 17, No. 6, pp. 60-70, 2025. DOI: 10.5815/ijieeb.2025.06.05
