Evaluating and Comparing Size, Complexity and Coupling Metrics as Web Applications Vulnerabilities Predictors

Full Text (PDF, 573KB), PP.35-42



Mohammed Zagane 1,*, Mustapha Kamel Abdi 1

1. University of Oran 1 Ahmed Ben Bella, Oran, Algeria

* Corresponding author.

DOI: https://doi.org/10.5815/ijitcs.2019.07.05

Received: 20 Mar. 2019 / Revised: 12 Apr. 2019 / Accepted: 21 Apr. 2019 / Published: 8 Jul. 2019

Index Terms

Software Vulnerability, Web Application Security, Information Privacy, Code Metrics, Prediction Models, Machine Learning, Software Engineering


Abstract

Most security and privacy issues in software stem from exploitable code vulnerabilities. Many studies have sought correlations between software characteristics (complexity, coupling, etc.), quantified by the corresponding code metrics, and software vulnerabilities, and have proposed automatic prediction models that help developers locate vulnerable components and thereby minimize maintenance costs. The results of these studies cannot be applied directly to web applications, because a web application differs from a non-web application in many respects (development, use, etc.), so their conclusions require further evaluation. The purpose of this study is to evaluate and compare the vulnerability prediction power of three types of code metrics in web applications. A few similar studies have targeted non-web applications; to the best of our knowledge, none have targeted web applications. The results show that, unlike in non-web applications, where complexity metrics have better vulnerability prediction power, in web applications the coupling metrics give the better predictions, with high recall (> 75%) at a low inspection cost (< 25%).
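As a hedged illustration of the two evaluation measures the abstract reports — recall and inspection cost — the sketch below computes both from a model's per-component predictions. This is not the authors' actual pipeline (the paper used standard machine-learning tooling); the data and the predictions here are synthetic, purely to show how the numbers are derived:

```python
def recall(y_true, y_pred):
    # Fraction of actually vulnerable components (label 1) that the
    # model flags as vulnerable: true positives / actual positives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    positives = sum(y_true)
    return tp / positives if positives else 0.0

def inspection_cost(y_pred):
    # Fraction of all components the model asks developers to inspect;
    # lower is cheaper for the maintenance team.
    return sum(y_pred) / len(y_pred)

# Synthetic example: 10 components, 4 actually vulnerable,
# and a hypothetical model that flags 3 of them.
y_true = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]

print(recall(y_true, y_pred))           # 0.75
print(inspection_cost(y_pred))          # 0.3
```

A good predictor in this setting pushes recall up while keeping the inspection fraction down; the abstract's "> 75% recall at < 25% inspection" result expresses exactly this trade-off for the coupling metrics.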

Cite This Paper

Mohammed Zagane, Mustapha Kamel Abdi, "Evaluating and Comparing Size, Complexity and Coupling Metrics as Web Applications Vulnerabilities Predictors", International Journal of Information Technology and Computer Science (IJITCS), Vol. 11, No. 7, pp. 35-42, 2019. DOI: 10.5815/ijitcs.2019.07.05


[1]S. Zhang, D. Caragea, and X. Ou, “An empirical study on using the national vulnerability database to predict software vulnerabilities,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 6860 LNCS, no. PART 1, pp. 217–231, 2011.

[2]J. Walden, J. Stuckman, and R. Scandariato, “Predicting vulnerable components: Software metrics vs text mining,” Proc. - Int. Symp. Softw. Reliab. Eng. ISSRE, pp. 23–33, 2014.

[3]M. Alenezi and I. Abunadi, “Evaluating software metrics as predictors of software vulnerabilities,” Int. J. Secur. its Appl., vol. 9, no. 10, pp. 231–240, 2015.

[4]I. Abunadi and M. Alenezi, “Towards Cross Project Vulnerability Prediction in Open Source Web Applications,” in Proceedings of the The International Conference on Engineering & MIS 2015 - ICEMIS ’15, 2015, pp. 1–5.

[5]S. Moshtari and A. Sami, “Evaluating and comparing complexity, coupling and a new proposed set of coupling metrics in cross-project vulnerability prediction,” in Proceedings of the 31st Annual ACM Symposium on Applied Computing - SAC ’16, 2016, pp. 1415–1421.

[6]B. Turhan, A. Bener, and T. Menzies, “Nearest neighbor sampling for cross company defect predictors,” in Proceedings of the 1st International Workshop on Defects in Large Software Systems (DEFECTS’08), 2008, p. 26.

[7]B. Turhan, G. Kocak, and A. Bener, “Data mining source code for locating software bugs: A case study in telecommunication industry,” Expert Syst. Appl., vol. 36, no. 6, pp. 9986–9990, 2009.

[8]K. Gao, T. M. Khoshgoftaar, H. Wang, and N. Seliya, “Choosing software metrics for defect prediction: an investigation on feature selection techniques,” Softw. Pract. Exp., vol. 41, no. 5, pp. 579–606, Apr. 2011.

[9]T. Menzies, J. Greenwald, and A. Frank, “Data Mining Static Code Attributes to Learn Defect Predictors,” IEEE Trans. Softw. Eng., vol. 33, no. 1, pp. 2–14, 2007. 

[10]H. Watson, T. J. McCabe, and D. R. Wallace, “Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric,” NIST Spec. Publ., pp. 1–114, 1996.

[11]V. Y. Shen, S. D. Conte, and H. E. Dunsmore, “Software Science Revisited: A Critical Analysis of the Theory and Its Empirical Support,” IEEE Trans. Softw. Eng., vol. SE-9, no. 2, pp. 155–165, 1983.

[12]“Promise software engineering repository.” [online]http://promise.site.uottawa.ca/SERepository/datasets-page.html (Accessed 20 July 2018).

[13]P. Morrison, K. Herzig, B. Murphy, and L. Williams, “Challenges with applying vulnerability prediction models,” in Proceedings of the 2015 Symposium and Bootcamp on the Science of Security - HotSoS ’15, 2015, vol. 14, no. 2, pp. 1–9.

[14]C. Catal, A. Akbulut, E. Ekenoglu, and M. Alemdaroglu, “Development of a Software Vulnerability Prediction Web Service Based on Artificial Neural Networks,” in Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2017, pp. 59–67.

[15]Y. Shin, A. Meneely, L. Williams, and J. A. Osborne, “Evaluating complexity, code churn, and developer activity metrics as indicators of software vulnerabilities,” IEEE Trans. Softw. Eng., vol. 37, no. 6, pp. 772–787, 2011.

[16]M. Siavvas, E. Gelenbe, D. Kehagias, and D. Tzovaras, “Static analysis-based approaches for secure software development,” Commun. Comput. Inf. Sci., vol. 821, no. April, pp. 142–157, 2018.

[17]Y. Shin and L. Williams, “An empirical model to predict security vulnerabilities using code complexity metrics,” in Proceedings of the Second ACM-IEEE international symposium on Empirical software engineering and measurement - ESEM ’08, 2008, p. 315.

[18]A. Nakra, “Comparative Analysis of Bayes Net Classifier , Naive Bayes Classifier and Combination of both Classifiers using WEKA,” I.J. Inf. Technol. Comput. Sci., vol. 11, no. March, pp. 38–45, 2019.

[19]G. Holmes, A. Donkin, and I. H. Witten, “WEKA: a machine learning workbench,” in Proceedings of ANZIIS ’94 - Australian New Zealnd Intelligent Information Systems Conference, pp. 357–361.

[20]R. Ihaka and R. Gentleman, “R: A Language for Data Analysis and Graphics,” J. Comput. Graph. Stat., vol. 5, no. 3, p. 299, Sep. 1996.

[21]“Vulnerability dataset.” [online] http://seam.cs.umd.edu/webvuldata (Accessed 01 July 2018).