Mohan Kireeti Krovi

Workplace: Student, Department of Computer Science and Engineering, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada - 520007, Andhra Pradesh, India

E-mail: kireetikrovi@gmail.com

ORCID: https://orcid.org/0009-0008-2750-0626

Research Interests: Internet of Things, Artificial Intelligence, Deep Learning, Natural Language Processing

Biography

Krovi Mohan Kireeti is currently a student researcher in the Artificial Intelligence and Data Science program at Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, from which he graduated in 2025. His research interests focus on the Internet of Things, Artificial Intelligence, Deep Learning, and Natural Language Processing. During his academic career, he has been involved in projects such as "Enhanced Mental Health Prediction through Emotion Cause Extraction in Client-Therapist Conversations using NLP", which helps therapists assess the mental health of their clients, and "Spy Camera Detection using IR Sensor", which helps detect hidden cameras. His work has been published at the 2024 IEEE 2nd International Conference on Advancement in Computation & Computer Technologies (InCACCT), where he presented a paper on the Internet of Things. He is also a member of the VRS ACM SIGKDD Student Chapter and the GDSC student chapter.

Author Articles
Multimodal Emotion Recognition Using EEG and Facial Expressions with Potential Applications in Driver Monitoring

By Ch. Raga Madhuri, Anideep Seelam, Fatima Farheen Shaik, Aadi Siva Kartheek Pamarthi, Mohan Kireeti Krovi

DOI: https://doi.org/10.5815/ijigsp.2026.01.10, Pub. Date: 8 Feb. 2026

Mental states such as fatigue, distraction, and cognitive overload are known to contribute significantly to traffic accidents. Accurate recognition of these cognitive and emotional states is therefore important for the development of intelligent monitoring systems. In this study, a multimodal emotion recognition framework using electroencephalography (EEG) signals and facial expression features is proposed, with potential applications in driver monitoring. The approach integrates Long Short-Term Memory (LSTM) networks and Transformer architectures for EEG-based temporal feature extraction, along with Vision Transformers (ViT) for facial feature representation. Feature-level fusion is employed to combine physiological and visual modalities, enabling improved emotion classification performance compared to unimodal approaches. The model is evaluated using accuracy, precision, recall, and F1-score metrics, achieving an overall accuracy of 96.38%, demonstrating the effectiveness of multimodal learning. Although the experiments are conducted on general-purpose emotion datasets, the results indicate that the proposed framework can serve as a reliable foundation for driver monitoring applications, such as fatigue, distraction, and cognitive state assessment, in intelligent transportation systems.
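To make the fusion idea in the abstract concrete, below is a minimal PyTorch sketch of feature-level fusion between an LSTM-based EEG branch and a facial branch. It is illustrative only: the layer sizes, the four-class output, and the use of precomputed ViT embeddings (a linear projection standing in for a full Vision Transformer) are assumptions, not the published configuration.

    import torch
    import torch.nn as nn

    class MultimodalEmotionNet(nn.Module):
        """Sketch of feature-level fusion: LSTM over EEG windows plus a
        projection of precomputed ViT face embeddings. All dimensions
        are illustrative assumptions, not the authors' configuration."""

        def __init__(self, eeg_channels=32, eeg_hidden=128,
                     face_dim=768, num_classes=4):
            super().__init__()
            # Temporal EEG branch: LSTM over (batch, time, channels)
            self.eeg_lstm = nn.LSTM(eeg_channels, eeg_hidden, batch_first=True)
            # Facial branch stand-in: project precomputed ViT embeddings
            # so both modalities share the same feature width
            self.face_proj = nn.Linear(face_dim, eeg_hidden)
            # Feature-level fusion: concatenate the two modality vectors,
            # then classify the fused representation
            self.classifier = nn.Sequential(
                nn.Linear(2 * eeg_hidden, 64),
                nn.ReLU(),
                nn.Linear(64, num_classes),
            )

        def forward(self, eeg, face_emb):
            # eeg: (batch, time, channels); keep the final hidden state
            _, (h_n, _) = self.eeg_lstm(eeg)
            eeg_feat = h_n[-1]                    # (batch, eeg_hidden)
            face_feat = self.face_proj(face_emb)  # (batch, eeg_hidden)
            fused = torch.cat([eeg_feat, face_feat], dim=-1)
            return self.classifier(fused)

    # Toy forward pass: 8 samples, 256 EEG time steps, 32 channels,
    # and 768-d face embeddings (e.g., from a pretrained ViT)
    model = MultimodalEmotionNet()
    logits = model(torch.randn(8, 256, 32), torch.randn(8, 768))
    print(logits.shape)  # torch.Size([8, 4])

Concatenation is the simplest form of feature-level fusion; it lets the classifier weight each modality jointly, which is what allows a multimodal model of this kind to outperform either branch used alone.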
