Workplace: Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, Parit Raja, Malaysia
E-mail: ysuhaila@uthm.edu.my
Website: https://orcid.org/0000-0002-5543-8259
Research Interests:
Biography
Suhaila Mohd. Yasin received a B.Sc. degree in science (computing) and an M.Sc. in computer science from Universiti Teknologi Malaysia, Skudai, and a Ph.D. in computer science from The University of Queensland, Australia, in 2020. She is currently a senior lecturer in the Department of Software Engineering at Universiti Tun Hussein Onn Malaysia (UTHM), where she also serves as leader and principal researcher of the Software Testing Focus Group. As a rising researcher, she supervises postgraduate students and has authored or co-authored articles on software engineering, covering software modeling, development, and testing.
By Muhammad Ihsan Zul, Suhaila Mohd. Yasin, Ivan Chatisa, Fikri Muhaffizh Imani, Siti Syahidatul Helma, Dadang Syarif Sihabudin Sahid
DOI: https://doi.org/10.5815/ijmecs.2026.02.05, Pub. Date: 8 Apr. 2026
User stories are essential in agile software development for capturing software requirements, yet concerns over their quality persist globally. While prior studies have evaluated user story quality using practitioners and artificial intelligence, they primarily focus on general settings. This study addresses a gap by evaluating the quality of student-generated user stories in an educational context, specifically in Indonesia. The objective is to compare evaluations by human evaluators and ChatGPT using the Quality User Story (QUS) Framework and to benchmark the quality of the student-generated user stories against findings from global studies. A total of 951 user stories from 103 student software projects were analyzed. Evaluations were conducted by three human evaluators and ChatGPT (GPT-4o). Inter-rater agreement was measured with Percentage Agreement and Cohen’s Kappa, the statistical significance of differences was assessed with the McNemar Test, and effect sizes were examined using Cohen’s g. Results show generally high agreement between human and ChatGPT evaluations, but lower consistency for several criteria, such as Conceptually Sound, Independent, and Unambiguous. Only four of the thirteen criteria (Conflict-Free, Unique, Well-Formed, and Atomic) showed no significant differences. Most criteria showed small to medium effect sizes, whereas Complete exhibited a large practical difference. Common quality issues among students involved the set criteria Uniform, Independent, and Complete, and the individual criteria Atomic, Conceptually Sound, and Unambiguous, overlapping with issues reported in global studies. This study shows that ChatGPT can support user story evaluation in educational settings when guided by clear rubrics and validated by humans. It also offers practical insights for educators by identifying criteria that require stronger emphasis in teaching, particularly in software engineering education in Indonesia.
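As context for the statistics named in the abstract, the following is a minimal, hypothetical Python sketch (not the paper's actual analysis pipeline) of how Percentage Agreement, Cohen's Kappa, the McNemar Test, and Cohen's g could be computed for a single QUS criterion. The rating vectors are invented for illustration, with 1 meaning the criterion is satisfied and 0 meaning it is violated.

# Illustrative sketch only; the rating data below are hypothetical, not study data.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired ratings for 12 user stories on one QUS criterion.
human   = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1])
chatgpt = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1])

# Percentage agreement: share of stories where both raters give the same label.
pct_agreement = np.mean(human == chatgpt)

# Cohen's Kappa: agreement corrected for chance agreement.
kappa = cohen_kappa_score(human, chatgpt)

# 2x2 contingency table of paired ratings for McNemar's test.
table = np.array([
    [np.sum((human == 1) & (chatgpt == 1)), np.sum((human == 1) & (chatgpt == 0))],
    [np.sum((human == 0) & (chatgpt == 1)), np.sum((human == 0) & (chatgpt == 0))],
])
result = mcnemar(table, exact=True)  # exact binomial test, suited to few discordant pairs

# Cohen's g: effect size for McNemar, distance of the discordant-pair proportion from 0.5.
b, c = table[0, 1], table[1, 0]
cohens_g = abs(b / (b + c) - 0.5) if (b + c) > 0 else 0.0

print(f"Percentage agreement: {pct_agreement:.2f}")
print(f"Cohen's kappa:        {kappa:.2f}")
print(f"McNemar p-value:      {result.pvalue:.3f}")
print(f"Cohen's g:            {cohens_g:.2f}")

In such a setup, the same computation would be repeated per criterion and per human evaluator, which is consistent with the per-criterion comparisons reported in the abstract.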