Work place: Department of CSE, Seshadri Rao Gudlavalleru Engineering College, Gudlavalleru, Andhra Pradesh, India
E-mail: gutta.prasad1@gmail.com
ORCID: https://orcid.org/0000-0003-0679-7324
Research Interests: Machine Learning, Deep Learning, Data Mining
Biography
Dr. GVSNRV. Prasad, M.S., M.Tech., Ph.D., is a Professor of Computer Science and Engineering and Director (PGDCA) at Seshadri Rao Gudlavalleru Engineering College. He earned his Ph.D. in Computer Science and Engineering from JNTUK, Kakinada, in 2012. With 35 years of teaching experience, he has published 39 articles in reputed national and international journals, presented 19 research papers at national and international conferences, published two book chapters, and guided five research scholars. His areas of interest include Machine Learning, Deep Learning, and Data Mining.
By Vineela Krishna Suri and Prasad GVSNRV
DOI: https://doi.org/10.5815/ijmecs.2026.02.08, Pub. Date: 8 Apr. 2026
The widespread distribution of fake news poses a critical societal challenge by influencing public opinion and shaping political discourse. Addressing this problem requires models that capture multimodal cues beyond text alone. This work proposes a lightweight Multimodal Cross-attention Fusion-based Fake News Detection (MCAF-FND) model that combines textual and visual features through a cross-attention strategy. The study evaluates MCAF-FND on the Fakeddit benchmark, a large-scale dataset of 682,996 multimodal samples collected from social media. Textual features are extracted using DistilBERT, while spatially aware image representations are derived from the convolutional layers of VGG-19. The cross-attention module enables semantic alignment between text tokens and image patches, modeling inter-modal dependencies more effectively than conventional fusion strategies. The fused representation is classified by a Multilayer Perceptron (MLP) with softmax, ensuring contributions from both modalities. Experimental results demonstrate that MCAF-FND consistently outperforms unimodal baselines and traditional fusion methods, achieving 93.2% accuracy with strong precision, recall, and F1-score. Cross-attention visualizations illustrate how the model aligns textual cues with salient visual regions, enhancing interpretability. By combining computational efficiency with robust multimodal reasoning, the proposed approach provides a reliable and extensible solution for automated fake news detection.
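The core fusion step the abstract describes — text tokens attending over image patches via scaled dot-product cross-attention — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the feature dimension, the toy vectors, and the function names (`softmax`, `cross_attention`) are all assumptions, and a real MCAF-FND pipeline would use DistilBERT token embeddings and VGG-19 patch features instead of the hand-made vectors below.

```python
# Hedged sketch of cross-attention fusion: text tokens as queries,
# image patches as keys/values. All dimensions and inputs are illustrative.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(text_tokens, image_patches, d):
    """Replace each text token with an attention-weighted mix of patch
    features, using scaled dot-product scores (divide by sqrt(d))."""
    fused = []
    for q in text_tokens:
        # score each image patch against the current text token (query)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in image_patches]
        weights = softmax(scores)
        # weighted sum of patch (value) vectors -> fused representation
        fused.append([sum(w * v[i] for w, v in zip(weights, image_patches))
                      for i in range(d)])
    return fused

# Toy example: 2 text tokens, 3 image patches, feature dimension 4.
text = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
patches = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.5, 0.5, 0.0, 0.0]]
fused = cross_attention(text, patches, 4)
```

In the full model, the fused token representations would then be pooled and passed to the MLP-with-softmax classifier; the sketch only covers the alignment step that distinguishes cross-attention fusion from simple concatenation.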