Putu Kerti Nitiasih

Work place: Department of English Education, Ganesha University of Education (Undiksha), Bali, Indonesia

E-mail: kertinitiasih@undiksha.ac.id

ORCID: https://orcid.org/0000-0003-4016-0757

Biography

Putu Kerti Nitiasih received the B.A. degree in English language education, the M.A. degree in linguistics, and the doctoral degree in applied linguistics from Udayana University, Denpasar, Indonesia, in 1985, 2002, and 2007, respectively. Her major field of study is applied linguistics and English language education. She is currently a Professor with the Department of English Education, Ganesha University of Education (Undiksha), Bali, Indonesia. She has published research in national and international journals and authored the book Bilingualism and Bilingual Education. Her research interests include applied linguistics, sociolinguistics, bilingual education, and English language teaching. Prof. Nitiasih has been actively involved in research, teacher training, and community service programs related to English language education.

Author Articles
Multimodal Assessment of Student Engagement by Fusing EEG, Facial Expressions, and Body Posture in an Offline Classroom

By Min Song, I Gusti Putu Sudiarta, Putu Kerti Nitiasih, Putu Nanci Riastini, Zhang Wang, Junyi Chai

DOI: https://doi.org/10.5815/ijmecs.2026.03.12, Pub. Date: 8 Jun. 2026

An accurate and comprehensive assessment of student engagement in classrooms is crucial for enabling data-driven teaching and personalized education. Current approaches rely primarily on teacher observation or student self-reports, which are often subjective, delayed, and unable to capture cognitive engagement. To address these limitations, this study proposes a Multimodal Cognitive-Attention Fusion (MCA Fusion) framework grounded in Fredricks’ three-dimensional engagement model. The framework integrates electroencephalography (EEG), facial expressions, and body posture to simultaneously quantify cognitive, emotional, and behavioral engagement. Built on a Transformer architecture, it employs self-attention to extract temporal features within each modality and introduces a cognition-guided cross-attention mechanism to dynamically integrate the multimodal signals. To validate the framework, experiments were conducted with 36 undergraduate students in real classroom settings. The results demonstrate that our framework significantly outperforms all single-modality baselines, achieving an accuracy of 92% and an F1-score of 94.87%. Compared with the best single-modality model (EEG), the F1-score improves by 34.58 percentage points. Ablation studies further confirm the critical roles of the cognitive modality (EEG) and the MCA Fusion mechanism, whose removal leads to F1-score reductions of 62.58 and 56.16 percentage points, respectively. The proposed approach not only delivers a theoretically informed and technically evaluated framework for engagement recognition but also lays a methodological foundation for future closed-loop “perception–assessment–feedback” systems in intelligent learning environments.
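The core idea of cognition-guided cross-attention can be sketched as follows: EEG-derived features serve as queries that attend over facial-expression and body-posture features before the modalities are fused. This is a minimal illustrative sketch only; the tensor shapes, feature dimensions, and fusion-by-concatenation step are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard attention: softmax(Q K^T / sqrt(d)) V, row-wise softmax."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Hypothetical per-modality feature sequences (time steps x feature dim).
# In practice these would come from modality-specific Transformer encoders.
T, d = 8, 16
rng = np.random.default_rng(0)
eeg  = rng.standard_normal((T, d))   # cognitive modality (guides fusion)
face = rng.standard_normal((T, d))   # emotional modality
pose = rng.standard_normal((T, d))   # behavioral modality

# Cognition-guided cross-attention: EEG queries select the facial and
# postural features most relevant to the current cognitive state.
face_ctx = scaled_dot_product_attention(eeg, face, face)
pose_ctx = scaled_dot_product_attention(eeg, pose, pose)

# Simple fusion by concatenation of EEG with the attended contexts.
fused = np.concatenate([eeg, face_ctx, pose_ctx], axis=-1)
print(fused.shape)  # (8, 48)
```

The fused representation would then feed a downstream classifier that predicts the engagement label; the reported 92% accuracy comes from the paper's full pipeline, not from this simplified sketch.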
