Fatima Farheen Shaik

Work place: Undergraduate Student, Department of Computer Science and Engineering, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada - 520007, Andhra Pradesh, India

E-mail: fatimafarheen2605@gmail.com

ORCID: https://orcid.org/0009-0000-7919-3092

Research Interests: Machine Learning, Deep Learning, Artificial Intelligence

Biography

Shaik Fatima Farheen is a student researcher in the Artificial Intelligence and Data Science program at Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, from which she graduated in 2025. Her research interests lie in Machine Learning, Deep Learning, and Artificial Intelligence. Her work has been published at the IEEE 2024 2nd International Conference on Advancement in Computation & Computer Technologies (InCACCT), where she presented a paper on the Internet of Things.

Author Articles
Multimodal Emotion Recognition Using EEG and Facial Expressions with Potential Applications in Driver Monitoring

By Ch. Raga Madhuri, Anideep Seelam, Fatima Farheen Shaik, Aadi Siva Kartheek Pamarthi, Mohan Kireeti Krovi

DOI: https://doi.org/10.5815/ijigsp.2026.01.10, Pub. Date: 8 Feb. 2026

Mental states such as fatigue, distraction, and cognitive overload are known to contribute significantly to traffic accidents, so accurate recognition of these cognitive and emotional states is important for the development of intelligent monitoring systems. In this study, a multimodal emotion recognition framework using electroencephalography (EEG) signals and facial expression features is proposed, with potential applications in driver monitoring. The approach integrates Long Short-Term Memory (LSTM) networks and Transformer architectures for EEG-based temporal feature extraction, along with Vision Transformers (ViT) for facial feature representation. Feature-level fusion is employed to combine the physiological and visual modalities, yielding improved emotion classification performance compared to unimodal approaches. The model is evaluated using accuracy, precision, recall, and F1-score, achieving an overall accuracy of 96.38% and demonstrating the effectiveness of multimodal learning. Although the experiments are conducted on general-purpose emotion datasets, the results indicate that the proposed framework can serve as a reliable foundation for driver monitoring applications, such as fatigue, distraction, and cognitive state assessment, in intelligent transportation systems.
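The feature-level fusion described in the abstract can be sketched in a few lines: each modality is encoded into a feature vector, the vectors are concatenated, and a classifier maps the fused representation to emotion classes. The sketch below uses NumPy with random linear projections as stand-ins for the paper's LSTM/Transformer EEG encoder and ViT face encoder; all dimensions, shapes, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper)
EEG_DIM, FACE_DIM, NUM_CLASSES = 128, 256, 4

def extract_eeg_features(eeg_window):
    # Stand-in for the LSTM/Transformer temporal encoder:
    # average over time steps, then apply a random linear projection.
    W = rng.standard_normal((eeg_window.shape[-1], EEG_DIM)) * 0.01
    return eeg_window.mean(axis=0) @ W

def extract_face_features(face_image):
    # Stand-in for the ViT image encoder: flatten the image
    # and apply a random linear projection.
    W = rng.standard_normal((face_image.size, FACE_DIM)) * 0.01
    return face_image.ravel() @ W

def fuse_and_classify(eeg_feat, face_feat):
    # Feature-level fusion: concatenate the per-modality feature
    # vectors, then classify the fused vector with softmax.
    fused = np.concatenate([eeg_feat, face_feat])
    W = rng.standard_normal((fused.size, NUM_CLASSES)) * 0.01
    logits = fused @ W
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Dummy inputs: 200 time steps x 32 EEG channels, 48x48 face crop
eeg_window = rng.standard_normal((200, 32))
face_image = rng.standard_normal((48, 48))
probs = fuse_and_classify(extract_eeg_features(eeg_window),
                          extract_face_features(face_image))
print(probs.shape)
```

In a trained system the projections would be learned jointly with the classifier; the point of the sketch is only the fusion step, where the 128- and 256-dimensional modality features become one 384-dimensional input to the classifier.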

Other Articles