Workplace: Coimbra Institute of Engineering, Polytechnic University of Coimbra, Portugal
E-mail: jorge@isec.pt
Website: https://orcid.org/0000-0001-9660-2011
Research Interests: big data, NoSQL, data warehousing, dependability, the Internet of Things, and software engineering
Biography
Jorge F. R. Bernardino, PhD, is a Professor at the Institute of Engineering of the Polytechnic University of Coimbra, Portugal, where he has been teaching since 1994. He received a PhD degree from the University of Coimbra in 2002. In 2014, he was a Visiting Professor at CMU. He was Director of the Applied Research Institute (i2A) of IPC from 2019 to 2021. He has authored more than 200 publications in refereed conferences and journals and has participated in several research projects. His research interests include big data, NoSQL, data warehousing, dependability, the Internet of Things, and software engineering.
By Joao P. J. Pires, Jorge F. R. Bernardino, Anabela J. Gomes, Ana Rosa P. Borges, Fernanda M. R. Brito R. Correia
DOI: https://doi.org/10.5815/ijmecs.2026.01.01, Pub. Date: 8 Feb. 2026
Analyzing student performance in Introductory Programming courses in Higher Education is crucial for early intervention and improved academic outcomes. This study investigates the predictive potential of a Programming Cognitive Test in assessing student aptitude and forecasting success in an Introductory Programming course. Data were collected from 180 students, both freshmen and repeating students, enrolled in a Computer Engineering program. The dataset includes the Programming Cognitive Test results, background variables, and final course outcomes. To identify latent patterns within the data, the K-means clustering algorithm was applied, focusing particularly on freshmen to avoid bias from prior programming exposure. In parallel, six Machine Learning classification models were developed and evaluated to predict students’ likelihood of passing the Introductory Programming course: Decision Tree, K-Nearest Neighbor, Naïve Bayes, Random Forest, Support Vector Machine, and Deep Neural Network. Among these, the Deep Neural Network model demonstrated superior performance, achieving the highest values across the key metrics (Accuracy, Recall, and F1-score) and effectively identifying students at risk of underperformance. These findings underscore the potential of this model in educational settings, where timely and accurate detection of struggling students can enable proactive, targeted interventions.
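For illustration, the sketch below shows the general shape of such a pipeline using scikit-learn: an unsupervised K-means step followed by a comparison of the six classifier families named in the abstract on Accuracy, Recall, and F1-score. The synthetic placeholder data, feature layout, hyperparameters, and the use of MLPClassifier to stand in for the Deep Neural Network are assumptions for demonstration only, not the authors' actual dataset or configuration.

```python
# Illustrative sketch only: synthetic data and default-ish hyperparameters
# stand in for the study's cognitive-test features and model tuning.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Placeholder features standing in for cognitive-test scores and background
# variables; y is a pass/fail course outcome.
X = rng.normal(size=(180, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=180) > 0).astype(int)

# Unsupervised step: look for latent groups among students.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))

# Supervised step: compare the six model families on held-out students.
models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "K-Nearest Neighbor": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Support Vector Machine": SVC(),
    "Deep Neural Network": MLPClassifier(hidden_layer_sizes=(64, 32),
                                         max_iter=1000, random_state=0),
}

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, y_pred):.2f} "
          f"recall={recall_score(y_te, y_pred):.2f} "
          f"f1={f1_score(y_te, y_pred):.2f}")
```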
This work contributes to the field by combining cognitive assessment with predictive modelling, offering a novel approach to forecasting programming performance. The models and methods described are adaptable for broader educational applications and may assist educators in refining teaching strategies and improving retention and success rates in programming education.