Workplace: Student of Engineering, Department of Computer Science and Engineering, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada - 520007, Andhra Pradesh, India
E-mail: aadipamarthi8055@gmail.com
ORCID: https://orcid.org/0009-0008-6277-3662
Research Interests: deep learning, machine learning, artificial intelligence, natural language processing
Biography
Aadi Siva Kartheek Pamarthi is a student researcher in the Artificial Intelligence and Data Science program at Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, from which he graduated in 2025. His research interests span deep learning, machine learning, artificial intelligence, and natural language processing. Notably, he has worked on projects such as "Sleep Apnea Prediction Using Real-Time ECG Data," which aims to enable early detection through ECG analysis, offering a cost-effective and scalable alternative to traditional polysomnography. His research has been published at the IEEE 2024 Conference on Intelligent Systems and Machine Learning (ISML).
By Ch. Raga Madhuri, Anideep Seelam, Fatima Farheen Shaik, Aadi Siva Kartheek Pamarthi, Mohan Kireeti Krovi
DOI: https://doi.org/10.5815/ijigsp.2026.01.10, Pub. Date: 8 Feb. 2026
Mental conditions such as fatigue, distraction, and cognitive overload are known to contribute significantly to traffic accidents. Accurate recognition of these cognitive and emotional states is therefore important for the development of intelligent monitoring systems. In this study, a multimodal emotion recognition framework using electroencephalography (EEG) signals and facial expression features is proposed, with potential applications in driver monitoring. The approach integrates Long Short-Term Memory (LSTM) networks and Transformer architectures for EEG-based temporal feature extraction, along with Vision Transformers (ViT) for facial feature representation. Feature-level fusion is employed to combine physiological and visual modalities, enabling improved emotion classification performance compared to unimodal approaches. The model is evaluated using accuracy, precision, recall, and F1-score metrics, achieving an overall accuracy of 96.38%, demonstrating the effectiveness of multimodal learning. Although the experiments are conducted on general-purpose emotion datasets, the results indicate that the proposed framework can serve as a reliable foundation for driver monitoring applications, such as fatigue, distraction, and cognitive state assessment, in intelligent transportation systems.
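The feature-level fusion described in the abstract can be sketched in a minimal form: per-modality embeddings (e.g., an LSTM/Transformer summary of the EEG sequence and a ViT-style facial embedding) are concatenated into one vector and passed to a shared classification head. The embedding dimensions, batch size, class count, and the linear classifier below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (hypothetical) dimensions: 128-d EEG temporal embedding
# (e.g., LSTM/Transformer output) and 256-d facial embedding
# (e.g., a ViT [CLS] token); 4 emotion classes.
EEG_DIM, FACE_DIM, N_CLASSES = 128, 256, 4

def fuse_features(eeg_feat, face_feat):
    """Feature-level fusion: concatenate per-modality embeddings."""
    return np.concatenate([eeg_feat, face_feat], axis=-1)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy linear head over the fused vector (a stand-in for the
# paper's classifier, whose exact form is not specified here).
W = rng.normal(scale=0.01, size=(EEG_DIM + FACE_DIM, N_CLASSES))
b = np.zeros(N_CLASSES)

eeg_feat = rng.normal(size=(8, EEG_DIM))    # batch of 8 EEG embeddings
face_feat = rng.normal(size=(8, FACE_DIM))  # matching facial embeddings

fused = fuse_features(eeg_feat, face_feat)  # shape (8, 384)
probs = softmax(fused @ W + b)              # shape (8, 4), rows sum to 1
pred = probs.argmax(axis=-1)                # predicted class per sample
```

The key design point is that fusion happens before classification, so the head can learn cross-modal interactions that unimodal models cannot; decision-level fusion (averaging per-modality predictions) would be the contrasting alternative.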