Workplace: GITAM Deemed to be University, Visakhapatnam, India
E-mail: sjavvadi2@gitam.edu
ORCID: https://orcid.org/0000-0003-2897-2402
Research Interests: Internet of Things (IoT), Vehicular Ad-hoc Networks (VANETs), Cloud Computing, Data Analytics
Biography
J. N. V. R. Swarup Kumar is an Assistant Professor in the Department of Computer Science and Engineering at GITAM University, Visakhapatnam, Andhra Pradesh, India, with research interests spanning the Internet of Things (IoT), Vehicular Ad-hoc Networks (VANETs), Cloud Computing, and Data Analytics. Contributions to the field include publications such as "Improving realism in face swapping using deep learning and K-means clustering," which integrates advanced technologies to enhance digital imaging techniques.
By Hemanth Kumar Tummalapalli, G. Kamal, Y. V. Naga Kumari, J. N. V. R. Swarup Kumar, Y. Chitra Rekha
DOI: https://doi.org/10.5815/ijeme.2025.06.02, Pub. Date: 8 Dec. 2025
This study provides insight into how machine learning methods, in particular the k-means clustering algorithm, can contribute to a greater degree of employee engagement in businesses. Using Work-Life Balance, Environment Satisfaction, and Job Satisfaction scores from employee survey data as an illustrative lens on the engagement phenomenon, patterns are identified that differ from traditional perspectives, with implications for organizational action. Using k-means clustering, the study groups workers into clusters and identifies significant satisfaction gaps among them. Logistic regression analysis is used to predict attrition risk and to identify the factors responsible for employee retention. The findings highlight the importance of understanding these facilitators in order to design targeted interventions and strategies that foster a positive work environment and improve organizational performance. This approach reduces attrition risk and improves job satisfaction, leading to greater overall organizational productivity and well-being.
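As an illustration of the two-stage approach described in the abstract, the sketch below clusters survey responses with k-means and fits a logistic regression for attrition risk using scikit-learn. The column names, the choice of k = 3, and the synthetic data and labels are assumptions made for this example and are not taken from the paper.

```python
# Minimal sketch: k-means segmentation of satisfaction scores followed by
# logistic-regression attrition modelling. Columns and k=3 are illustrative.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "WorkLifeBalance": rng.integers(1, 5, n),          # 1-4 Likert scores
    "EnvironmentSatisfaction": rng.integers(1, 5, n),
    "JobSatisfaction": rng.integers(1, 5, n),
})
# Synthetic attrition label loosely tied to low satisfaction, for illustration only.
df["Attrition"] = (df[["WorkLifeBalance", "JobSatisfaction"]].sum(axis=1)
                   + rng.normal(0, 1, n) < 4).astype(int)

features = ["WorkLifeBalance", "EnvironmentSatisfaction", "JobSatisfaction"]
X = StandardScaler().fit_transform(df[features])

# Stage 1: k-means segments employees into satisfaction clusters.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
df["Cluster"] = kmeans.fit_predict(X)
print(df.groupby("Cluster")[features].mean())  # per-cluster satisfaction profile

# Stage 2: logistic regression estimates attrition risk from the same features.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, df["Attrition"], test_size=0.2, random_state=42, stratify=df["Attrition"])
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print("coefficients:", dict(zip(features, clf.coef_[0].round(2))))
```

In the paper's setting, the per-cluster means and the regression coefficients would be computed on the real employee survey responses rather than on synthetic data.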
By Ravi Teja Gedela, J. N. V. R. Swarup Kumar, Venkateswararao Kuna, Sasibhushana Rao Pappu
DOI: https://doi.org/10.5815/ijmecs.2025.06.08, Pub. Date: 8 Dec. 2025
Sarcasm, a subtle form of expression, is challenging to detect, especially on modern platforms where communication transcends text to encompass videos, images, and audio. Traditional sarcasm detection methods rely solely on textual data and often struggle to capture the nuanced emotional inconsistencies inherent in sarcastic remarks. To overcome these shortcomings, this paper introduces a novel multimodal framework incorporating text, audio, and emoji data for more effective sarcasm detection and emotion classification. A key component of this framework is the Contextualized Semantic Self-Guided BERT (CS-SGBERT) model, which generates efficient word embeddings. First, frequency spectral analysis is performed on the audio data, followed by preprocessing and feature extraction, while the text data undergoes preprocessing to extract lexicon and irony features. Meanwhile, emojis are analyzed for polarity scores, yielding a rich set of multimodal features. The fused features are then optimized using the Camberra-based Dingo Optimization Algorithm (C-DOA). The selected features and the embedded words from the preprocessed texts are fed to the Entropy-based Robust Scaling Gated Recurrent Unit (E-RS-GRU) model for sarcasm detection. Experimental results on the MUStARD dataset show that the proposed E-RS-GRU model achieves an accuracy of 76.65% and an F1-score of 76.9%, a relative improvement of 2.18% over the best-performing baseline and 1.25% over the best-performing state-of-the-art model. Additionally, a KLKI-Fuzzy model is proposed for emotion recognition; it dynamically adjusts membership functions through Kullback-Leibler Kriging Interpolation (KLKI), enhancing emotion classification by processing features from all modalities. The KLKI-Fuzzy model exhibits enhanced emotion recognition performance with reduced fuzzification and defuzzification times.
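As a rough illustration of the multimodal fusion idea, the sketch below concatenates per-utterance text, audio, and emoji features and classifies them with a plain GRU in PyTorch. It is a generic late-fusion baseline under assumed feature dimensions, not the authors' CS-SGBERT, C-DOA, or E-RS-GRU models.

```python
# Generic multimodal late-fusion GRU classifier for sarcasm detection.
# All feature dimensions below are assumptions for illustration.
import torch
import torch.nn as nn

class MultimodalSarcasmGRU(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, emoji_dim=1, hidden=64):
        super().__init__()
        fused_dim = text_dim + audio_dim + emoji_dim
        self.gru = nn.GRU(fused_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # sarcastic vs. non-sarcastic logits

    def forward(self, text_seq, audio_seq, emoji_seq):
        # Each input: (batch, time, dim); fuse the modalities at every time step.
        fused = torch.cat([text_seq, audio_seq, emoji_seq], dim=-1)
        _, h = self.gru(fused)            # h: (1, batch, hidden) final state
        return self.head(h.squeeze(0))    # logits: (batch, 2)

# Dummy batch: 4 utterances, 10 time steps each.
model = MultimodalSarcasmGRU()
text = torch.randn(4, 10, 768)    # e.g. per-token BERT-style embeddings
audio = torch.randn(4, 10, 128)   # e.g. frame-level spectral features
emoji = torch.randn(4, 10, 1)     # e.g. broadcast emoji polarity score
logits = model(text, audio, emoji)
print(logits.shape)  # torch.Size([4, 2])
```

Replacing the random tensors with real contextual word embeddings, audio spectral features, and emoji polarity scores would reproduce the late-fusion setup at a high level; the paper's own models add the optimization and scaling components named in the abstract.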