Neny Sulistianingsih

Work place: Department of Engineering, Universitas Bumigora, Mataram, Indonesia

E-mail: neny.sulistianingsih@universitasbumigora.ac.id

Website: https://orcid.org/0000-0003-0548-5038

Research Interests: Big Data, Data Mining, Machine Learning

Biography

Dr. Neny Sulistianingsih obtained a master's degree in Informatics Engineering (M.Kom) from Universitas Islam Indonesia, Yogyakarta, and recently completed her doctoral studies at Universitas Gadjah Mada, Yogyakarta. Her research interests include data mining, sentiment analysis, big data, text mining, graph mining, and machine learning. She is currently a lecturer at Universitas Bumigora, Mataram.

Author Articles
Leveraging Convolutional Neural Network to Enhance the Performance of Ensemble Learning in Scientific Article Classification

By I Nyoman Switrayana, Neny Sulistianingsih

DOI: https://doi.org/10.5815/ijmecs.2025.06.10, Pub. Date: 8 Dec. 2025

The classification of scientific articles is challenging due to the complexity and diversity of academic content. In response, a new approach is proposed that combines Ensemble Learning, specifically Decision Tree, Random Forest, AdaBoost, and XGBoost, with Convolutional Neural Network (CNN) techniques. This study uses the arXiv dataset and compares the effectiveness of Term Frequency-Inverse Document Frequency (TF-IDF) and Sentence-BERT (SBERT) for text representation. To further refine feature extraction, vectors derived from SBERT are fed into the CNN framework for dimensionality reduction, producing more representative feature maps referred to as latent feature vectors. The study also examines the impact of incorporating both the title and abstract, demonstrating that richer textual information improves model accuracy. The hybrid model (CNN + Ensemble Learning) yields a substantial improvement in classification accuracy over traditional Ensemble Learning. The evaluation shows that CNN + SBERT with XGBoost achieves the highest accuracy of 94.62%, showcasing the benefits of combining advanced feature extraction with powerful models. This research highlights the potential of integrating a CNN within the Ensemble Learning paradigm to enhance scientific article classification and provides insights into the crucial role of the CNN in improving model accuracy. It also highlights the superior performance of SBERT for feature extraction, which contributes beneficially to the overall model.
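
A minimal sketch of this pipeline in Python, assuming sentence-transformers, Keras, and XGBoost; the SBERT variant ("all-MiniLM-L6-v2"), the CNN layer sizes, and the toy documents and labels below are illustrative assumptions, not the exact configuration reported in the article:

```python
# Sketch: SBERT embeds title+abstract text, a small 1D CNN compresses each
# embedding into a "latent feature vector", and XGBoost classifies those vectors.
import numpy as np
from sentence_transformers import SentenceTransformer
from tensorflow import keras
from xgboost import XGBClassifier

# Toy stand-ins for arXiv title+abstract strings and their subject labels.
texts = [
    "Graph neural networks for large-scale citation analysis",
    "Bayesian inference of cosmological parameters from survey data",
]
labels = np.array([0, 1])  # assumed toy labels, e.g. cs vs. astro-ph

# 1. Sentence-BERT turns each document into a dense embedding.
sbert = SentenceTransformer("all-MiniLM-L6-v2")   # assumed SBERT variant
embeddings = sbert.encode(texts)                  # shape: (n_docs, 384)
dim = embeddings.shape[1]

# 2. A small 1D CNN reduces each embedding to a latent feature vector.
inputs = keras.Input(shape=(dim,))
x = keras.layers.Reshape((dim, 1))(inputs)
x = keras.layers.Conv1D(32, kernel_size=5, activation="relu")(x)
x = keras.layers.GlobalMaxPooling1D()(x)
latent = keras.layers.Dense(64, activation="relu", name="latent")(x)
outputs = keras.layers.Dense(2, activation="softmax")(latent)
cnn = keras.Model(inputs, outputs)
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(embeddings, labels, epochs=3, verbose=0)

# 3. Extract the latent feature vectors and train an ensemble learner on them.
extractor = keras.Model(inputs, latent)
latent_vectors = extractor.predict(embeddings, verbose=0)
clf = XGBClassifier(n_estimators=100).fit(latent_vectors, labels)
print(clf.predict(latent_vectors))
```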

Enhancing Sentiment Analysis for the 2024 Indonesia Election Using SMOTE-Tomek Links and Binary Logistic Regression

By Neny Sulistianingsih, I Nyoman Switrayana

DOI: https://doi.org/10.5815/ijeme.2024.03.03, Pub. Date: 8 Jun. 2024

The Indonesian Election is one of the most anticipated political contests among the Indonesian people, since its results determine the country's leaders, from governors and legislative members to the president and vice president, for the next five years. Given the importance of this five-year agenda, information about work programs, the activities of candidates running in the 2024 election, and related news has begun to spread widely on Twitter. This research therefore aims to analyze public sentiment on Twitter regarding the 2024 Indonesian Election. SMOTE-Tomek Links is used to overcome imbalanced data, while sentiment classification is performed with Binary Logistic Regression; the models are evaluated using accuracy and ROC curves. The evaluation results show that SMOTE-Tomek Links is less than optimal for the 2024 election data used in this research, with accuracy values of 0.581 on the training data and 0.406 on the testing data. Undersampling methods such as Tomek Links and random undersampling achieve higher values when combined with Binary Logistic Regression, namely 0.983 and 0.938 for Tomek Links and 0.964 and 0.902 for random undersampling, for training and testing data respectively.
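
A minimal sketch of the resampling-plus-classification setup in Python, assuming imbalanced-learn and scikit-learn; the synthetic feature matrix, split, and hyperparameters are illustrative stand-ins for the paper's tweet corpus and preprocessing:

```python
# Sketch: compare SMOTE-Tomek Links, Tomek Links undersampling, and random
# undersampling, each followed by binary logistic regression, on synthetic
# imbalanced data standing in for the tweet sentiment features.
from imblearn.combine import SMOTETomek
from imblearn.under_sampling import TomekLinks, RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced stand-in for the tweet feature matrix (e.g. TF-IDF vectors).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

samplers = {
    "SMOTE-Tomek Links": SMOTETomek(random_state=0),
    "Tomek Links": TomekLinks(),
    "Random undersampling": RandomUnderSampler(random_state=0),
}

for name, sampler in samplers.items():
    # Rebalance the training split only, then fit binary logistic regression.
    X_res, y_res = sampler.fit_resample(X_tr, y_tr)
    clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
    print(
        f"{name}: train acc={accuracy_score(y_tr, clf.predict(X_tr)):.3f}, "
        f"test acc={accuracy_score(y_te, clf.predict(X_te)):.3f}, "
        f"test AUC={roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}"
    )
```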

Other Articles