ISSN: 2074-904X (Print)
ISSN: 2074-9058 (Online)
DOI: https://doi.org/10.5815/ijisa
Website: https://www.mecs-press.org/ijisa
Published By: MECS Press
Frequency: 6 issues per year
Number(s) Available: 142
IJISA is committed to bridging the theory and practice of intelligent systems. From innovative ideas to specific algorithms and full system implementations, IJISA publishes original, peer-reviewed, high-quality articles in the areas of intelligent systems. IJISA is a well-indexed scholarly journal and is indispensable reading and reference for those working at the cutting edge of intelligent systems and applications.
IJISA has been abstracted or indexed by several world-class databases: Scopus, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, among others.
IJISA Vol. 18, No. 2, Apr. 2026
REGULAR PAPERS
Community engagement is essential to social service delivery, yet traditional community needs assessment remains time-consuming and poorly suited for timely monitoring. This study proposes a semi-supervised learning framework to identify emerging community needs and service gaps from massive, mostly unlabeled, unstructured text. We construct an explicit heterogeneous text graph where each record is a document node linked to keyword and need-category nodes; document–document edges are built using a weighted combination of semantic similarity (BERT cosine), lexical overlap (keyword Jaccard), and temporal proximity. A graph neural network with iterative self-training leverages 3% expert-labeled seed data and the remaining unlabeled corpus to classify records into a 10-category need taxonomy. On 176,602 records, the proposed model achieves F1 = 0.895 and Recall = 0.899, outperforming supervised baselines trained on the same labeled ratio by 23.8% (macro-F1). Post-hoc quarterly aggregation of predictions enables trend monitoring and prioritization of service-gap severity for decision support.
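The three-signal edge weighting described in this abstract can be sketched as a simple convex combination; the weights, the 90-day temporal decay constant, and the function name below are illustrative assumptions, not values taken from the paper.

```python
import math

def edge_weight(emb_a, emb_b, kw_a, kw_b, t_a, t_b,
                w_sem=0.5, w_lex=0.3, w_time=0.2, tau=90.0):
    """Combine semantic, lexical, and temporal affinity into one edge weight.

    emb_a / emb_b : document embedding vectors (e.g. BERT sentence vectors)
    kw_a / kw_b   : sets of extracted keywords
    t_a / t_b     : timestamps in days
    The mixing weights and the 90-day decay constant tau are illustrative
    placeholder values, not the paper's.
    """
    # cosine similarity between the two embeddings
    dot = sum(x * y for x, y in zip(emb_a, emb_b))
    norm = math.sqrt(sum(x * x for x in emb_a)) * math.sqrt(sum(y * y for y in emb_b))
    cos = dot / norm
    # Jaccard overlap of the keyword sets
    union = kw_a | kw_b
    jac = len(kw_a & kw_b) / len(union) if union else 0.0
    # temporal proximity: exponential decay with the gap in days
    prox = math.exp(-abs(t_a - t_b) / tau)
    return w_sem * cos + w_lex * jac + w_time * prox
```

In a real pipeline an edge would be kept only when this score exceeds a threshold, to keep the heterogeneous graph sparse.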
The rapid rise of mobile technology, paired with the steady growth of the internet, has led to a massive increase in the amount of user-generated content, such as online consumer reviews, accessible through the browser. As the volume of user-generated content continues to rise, it becomes increasingly important to develop sophisticated methods for performing sentiment analysis on the texts collected from users, especially those generated in relation to restaurants and similar service establishments. In this paper, we present a new approach to sentiment analysis which incorporates Latent Dirichlet Allocation (LDA) topic models, Term Frequency-Inverse Document Frequency (TF-IDF) vector representations, and XGBoost classifiers into a unified framework. Unlike conventional implementations, this study integrates probabilistic topic distributions from LDA with multi-level n-gram TF-IDF features and evaluates their combined impact using XGBoost for enhanced classification performance. Using three distinct n-gram levels (unigrams, bigrams, and trigrams), we evaluate various aspects of text-based data, including common linguistic patterns and sentiment trends. Higher-order n-grams were included to capture contextual dependencies beyond single-word features. Overall, our results demonstrate that the proposed framework outperforms traditional corpus-based models on multiple evaluation metrics, including classification accuracy of 96.07%, sensitivity of 95.43%, specificity of 97.12%, and an F1-score of 96.16%.
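The multi-level n-gram TF-IDF features this framework feeds into XGBoost can be sketched with the standard formulas alone. This stdlib-only Python sketch illustrates the general technique, not the authors' implementation (which would typically use a library such as scikit-learn and would concatenate these features with LDA topic proportions before classification).

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, joined with spaces."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def tfidf_vectors(docs, n_levels=(1, 2, 3)):
    """Multi-level n-gram TF-IDF over tokenized documents.

    docs: list of token lists. Returns one {feature: weight} dict per doc,
    using term frequency times log(N / document frequency).
    """
    feats = [Counter(g for n in n_levels for g in ngrams(d, n)) for d in docs]
    n_docs = len(docs)
    df = Counter()                       # document frequency per n-gram
    for f in feats:
        df.update(f.keys())
    vectors = []
    for f in feats:
        total = sum(f.values())
        vectors.append({g: (c / total) * math.log(n_docs / df[g])
                        for g, c in f.items()})
    return vectors
```

Terms that appear in every document (here, a shared word) get weight zero, which is exactly the down-weighting of uninformative tokens that TF-IDF is meant to provide.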
Forecasting time series data, especially in volatile sectors like financial markets, poses significant challenges due to non-linearity, non-stationarity, and noise in the data. Traditional forecasting models often fail to generalize effectively across varying tasks without extensive retraining. This study investigates the application of meta-learning techniques, particularly First-Order Model-Agnostic Meta-Learning (FOMAML) and Reptile, to improve adaptability and generalization in time series forecasting tasks. An extensive empirical study was conducted using three neural networks as base models, namely Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Feed-Forward Neural Network (FFNN), applied to four real-world stocks: TCS, TATASTEEL, GRASIM and DJIAHD. The models were evaluated under few-shot learning conditions (defined here as 211-shot learning using sliding-window samples) with varying iteration counts (outer loops or epochs), and their effectiveness was assessed using standard metrics: Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and the Coefficient of Determination (R²). Outcomes show that the meta-learning approaches notably outperform traditional models, with first-order MAML in particular showing quicker task adaptation and stable convergence behavior, especially when used with GRU and LSTM as base models; on the GRASIM dataset, the MAML-with-LSTM configuration attained around an 81.9% reduction in RMSE (dropping the value from 622.94 to 112.60 over the iterations). Across all four stocks, Reptile shows relatively steady performance. The study validates the potential of meta-learning as a powerful framework for time series forecasting problems in dynamic settings, offering a robust algorithmic foundation for future financial modeling applications.
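The sliding-window sampling behind the few-shot setup above can be sketched as follows; the window and horizon sizes are placeholder values, and a meta-learner such as FOMAML or Reptile would treat each stock's sample set as one task when adapting a shared initialization.

```python
def sliding_windows(series, window=10, horizon=1):
    """Build (input, target) pairs for forecasting from one price series.

    Each sample uses `window` past values to predict the value `horizon`
    steps ahead. The parameter values here are illustrative, not the
    paper's; the abstract's "211-shot" setting corresponds to 211 such
    samples per task.
    """
    samples = []
    for i in range(len(series) - window - horizon + 1):
        x = series[i:i + window]              # input window
        y = series[i + window + horizon - 1]  # forecast target
        samples.append((x, y))
    return samples
```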
It is well known that diagnosing Alzheimer's disease (AD) accurately and early is a major clinical challenge, especially when using brain MRI data to differentiate between subtle stages of cognitive decline. This study investigated the efficacy of two deep learning models for the classification of AD stages: Vision Transformer (ViT), a transformer-based architecture, and EfficientNetB7, a convolutional neural network. To enhance classification performance and address class imbalance, extensive data preprocessing and augmentation techniques were employed on the publicly accessible 'Alzheimer’s Dataset (4 class of Images)' from Kaggle. This dataset comprises 6,400 brain MRI images categorized into four AD stages: Non-Demented, Very Mild Demented, Mild Demented, and Moderate Demented. Techniques applied included cropping, horizontal and vertical flipping, 20-degree rotations, histogram equalization, Gaussian noise addition, Gaussian blurring, and thresholding, aimed at improving the representation of underrepresented classes. Hyperparameter optimization followed a two-phase methodology: an initial grid search to determine parameter ranges, followed by Bayesian optimization employing an upper-confidence-bound acquisition function to refine learning rates, batch sizes, momentum, and weight decay values. Experimental results indicated that EfficientNetB7 attained a classification accuracy of 93.5% with F1-scores surpassing 92% for early-stage classes, whereas ViT recorded a lower accuracy of 88.7% and exhibited diminished sensitivity to early-stage instances. This performance disparity is attributed to ViT's dependence on extensive training datasets, which may restrict its generalization on comparatively smaller medical imaging datasets. The results indicate that, in dataset-constrained scenarios, CNN-based architectures such as EfficientNetB7 may provide more consistent and effective performance. Using distinct training, validation, and test datasets, the models' generalization, training stability, and computational efficiency were assessed. The top-performing model, EfficientNetB7, was implemented as a web-based application with an intuitive user interface to facilitate real-time supportive predictions for research demonstration. This comparative analysis demonstrated that the CNN-based EfficientNetB7 was more robust with constrained medical imaging data and computationally economical, whereas the transformer-based ViT displayed greater sensitivity to dataset size and required extended training to attain similar convergence. The development of a validated and deployable AI-based Alzheimer's disease diagnostic solution shows great promise for clinical use.
Advances in deep learning have highlighted the need for models tailored for deployment in resource-constrained environments (RCEs) such as mobile devices, Internet of Things (IoT) devices, and embedded systems, where memory and processing limitations present significant challenges. This paper introduces GRMobiNet, a novel deep neural network (DNN) model designed to address these challenges in image classification tasks by balancing computational complexity with model accuracy in RCE settings. The model focuses on key performance goals inspired by previous state-of-the-art models, aiming to achieve a better balance between complexity and accuracy. These goals include reducing the model's computational complexity to fewer than 4 million parameters, limiting memory usage to under 16 megabytes, and achieving an accuracy greater than 80%. By meeting these objectives, GRMobiNet enhances both the effectiveness and efficiency of deep neural network deployment in RCE settings. GRMobiNet builds upon MobileNet as its baseline, incorporating advanced techniques such as depthwise separable convolutions, compound scaling, global average pooling, and quantization to optimize performance. Trained on ImageNet-10, a subset of ImageNet-1K, the model underwent rigorous performance evaluation. Experimental results demonstrate that GRMobiNet achieves its performance objectives, with a computational complexity of 3.2 million parameters, memory utilization of 12.6 megabytes, and a prediction accuracy of 92%, validating its suitability for RCEs. This research presents a scalable framework for balancing accuracy and computational efficiency, with significant implications for RCE devices. In future work, GRMobiNet will be tested on commercially available RCE mobile devices using real-world images to assess its practicality and evaluate its performance in terms of accuracy, confidence, and inference time for image classification in real-world scenarios.
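The parameter savings behind depthwise separable convolutions, one of the MobileNet techniques GRMobiNet builds on, can be illustrated with a quick weight count; the layer sizes used below are arbitrary examples, not GRMobiNet's actual configuration.

```python
def conv_params(k, c_in, c_out):
    """Weight counts (biases ignored) for a k x k convolution layer.

    A standard convolution mixes all input channels for every output
    channel; a depthwise-separable one factors this into a per-channel
    k x k depthwise pass plus a 1x1 pointwise channel-mixing pass.
    """
    standard = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out
    return standard, separable

std, sep = conv_params(3, 64, 128)   # example layer: 3x3, 64 -> 128 channels
print(std, sep, round(std / sep, 1))  # the separable layer is roughly 8x smaller here
```

Savings of this order across every layer are what make the sub-4-million-parameter budget attainable.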
Public transport (PT) users often leave items behind in the public transport system. Finders who come across these items may choose to keep them maliciously or, out of goodwill, decide to return them. This paper utilizes six (6) machine learning models: Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Gaussian Naïve Bayes (GNB), and K-Nearest Neighbors (KNN), to predict the likelihood of finders returning found items. Nine (9) features, including four (4) demographic parameters (age, gender, income, and education), were used in the models’ prediction process. The study involved a total of 603 PT users in the Accra cosmopolitan area of Ghana to assess finders' decisions regarding returning found item(s). The classification success rates, obtained using Python implementations, were as follows: 86.740% (LR), 87.293% (SVM), 82.873% (DT), 85.083% (RF), 85.083% (GNB), and 87.845% (KNN). The RF model also performed well, balancing overall performance with the desired precision and recall. RF, GNB, and LR achieved the highest AUC values (0.78), demonstrating strong discriminative ability in predicting user honesty.
Internet addiction is a type of mental disorder. It results from excessive internet usage and is a concern for physical and psychological well-being. This paper employs machine learning techniques to understand, evaluate, and predict the severity of internet addiction and its impact on health. For this purpose, a real dataset, “Internet Addiction and Mental Health among College Students in Malawi”, has been considered. It consists of the self-assessed responses of 984 university student participants, including demographic, behavioral, and health-related information. Based on this dataset, two types of relationships have been discussed: (1) the relation between demographic features and health complexities, and (2) the relation between internet usage behaviors and health complexities. Next, the key features were identified through comprehensive data analysis. Then, three machine learning algorithms, namely Backpropagation Neural Network, Random Forest, and C4.5 Decision Tree, were tested to identify internet addiction in a subject at four severity levels (0 to 3). According to the results, the Random Forest classifier achieved the highest accuracy of 91%. Additionally, the C4.5 algorithm has been used to extract rules for predicting internet addiction severity level. These rules demonstrate a relation between internet usage patterns and internet addiction severity level; they are easy to interpret, can be utilized as a practical tool for self-assessment of internet addiction, and are also beneficial for healthcare professionals.
Electronic devices and online purchasing are increasingly common. Most people use internet banking and credit or debit cards to pay for online shopping, preferring platforms such as Amazon, Flipkart, and BigBasket for the time savings and card offers they provide. For online transactions, security is the prime concern. Various types of attack are possible during an online transaction, such as password theft, fraudulent transactions, and man-in-the-middle attacks. Stealing confidential information such as an OTP, or transferring money from one account to another during an online transaction, is a crime, and fraud during online transactions is increasing exponentially in the digital world. Various methods are used to detect unauthenticated and fraudulent online transactions. Data plays a very important role in online fraud, so knowledge discovery is frequently used to protect against it. This paper suggests a technique based on knowledge discovery and machine learning methods: we strive to develop the best possible model to distinguish fraudulent transactions from legitimate ones. Fraud detection uses a variety of machine learning techniques, including K-Means clustering, Support Vector Classifier, Logistic Regression, and anomaly detection algorithms. After analysis, it was found that anomaly detection techniques give the best fraud detection accuracy, at 99.85%.
Accurate and immediate incident identification is essential in the cybersecurity area, as it allows the timely detection of threats, along with countermeasures and mitigation, ensuring security for organizations and individuals. This reduces false positives and enables efforts to be concentrated on real risks. This paper presents a framework that integrates ontologies and Large Language Models (LLMs) to identify incidents from events within the context of security threats. Ontology rules are employed to infer probable incidents, resulting in an initial set of incidents for analysis. Furthermore, ontologies provide contextual information, which is combined with event data to formulate queries for LLMs. These interactions with LLMs produce a second set of probable incidents. The outputs from ontology-based inferences and LLM-driven responses are then compared, and the discrepancies are leveraged to refine ontology rules and adjust LLM responses. Experimental results, focusing on context generation and incident detection, demonstrate that the integration of ontologies and LLMs significantly enhances the accuracy of incident identification when compared to using only LLMs.
Concept drift is a critical challenge in dynamic environments, where evolving data distributions can abruptly reduce predictive accuracy. Sudden drift requires reliable detection methods that minimize latency and false alarms, yet traditional detectors often depend on labeled data, delaying adaptation and limiting robustness.
This article introduces Neutrosophic Pseudo-Labeling Sudden Drift Detection (N-PSDD), a novel framework for unsupervised sudden drift detection based on neutrosophic theory. The method integrates neutrosophic clustering for pseudo-labeling, block-wise neural modeling, drift quantification via neutrosophic mean deviation, and adaptive threshold evaluation. By explicitly modeling truth, indeterminacy, and falsity, N-PSDD captures uncertainty regions that conventional probabilistic measures fail to represent.
Experimental validation on synthetic and real-world datasets demonstrates that N-PSDD achieves competitive detection latency (MTTD ≈ 23–35 instances), a lower false alarm rate (FAR ≤ 3.1%), a reduced missing drift rate (MDR ≤ 2.5%), and consistently higher G-mean values (up to 0.91) than benchmark methods. For example, on the Poker Hand dataset, N-PSDD achieved MCC = 0.846 and accuracy ≈ 90%, while on Electricity it reached MCC = 0.623 with FAR = 3.1%. In contrast, unsupervised baselines (KSWIN, HDD, MMD) yielded higher FAR (≈ 6–10%) and lower MCC (≤ 0.56), confirming their limitations in capturing real concept drift.
Overall, N-PSDD enhances the resilience of learning models under non-stationary conditions and provides a robust solution for real-time applications, including financial forecasting, fraud detection, and adaptive control systems.
Automatically classifying abstract sentences into significant categories, such as background, methods, objective, results, and conclusions, is an essential support tool for scientific medical database querying that assists in searching and summarizing relevant literature and writing new abstracts. This paper presents a memory-efficient deep learning model for sentence role classification in medical scientific abstracts, achieved by integrating quantized Transformer layers with a Bidirectional Long Short-Term Memory (BiLSTM) network. While the core components are well established, our contribution lies in the successful application of quantization to this hybrid architecture, significantly reducing model size (from ~75MB to ~25MB) without a meaningful drop in classification performance on a subset of the PubMed 200k RCT dataset. This makes our approach distinctively practical for deployment in resource-constrained environments, offering an effective tool for automated literature analysis.
This paper introduces a novel reference model for intelligent longitudinal vehicle control, designed to enhance both safety and passenger comfort. The proposed model dynamically adjusts the follower vehicle’s acceleration based on its penetration distance relative to the lead vehicle, ensuring smooth speed transitions and adaptive deceleration. By preventing abrupt braking, the model maintains a safe inter-vehicle distance while reducing passenger discomfort. Key contributions include an analytical derivation of the follower vehicle’s dynamics and a novel formulation of the safety distance using the Lambert W function, enabling precise parameter optimization. A dedicated optimization framework ensures compliance with safety constraints while minimizing excessive acceleration and jerk. The model’s performance is validated through numerical simulations in various driving scenarios, including emergency braking, steady-speed following, variable-speed adaptation, and stop-and-go traffic. Results demonstrate its effectiveness in maintaining safety while enhancing ride comfort through gradual and controlled deceleration. The proposed approach is computationally efficient and well-suited for Advanced Driver Assistance Systems (ADAS) and autonomous vehicles. Future research will explore its integration with lateral control strategies, real-time adaptability, and machine learning techniques for further performance optimization in dynamic driving environments.
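Since the safety-distance formulation above relies on the Lambert W function, evaluating it numerically is straightforward. The Newton iteration below is a stdlib-only stand-in for a library routine such as scipy.special.lambertw; the paper's actual safety-distance expression is not reproduced here.

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal-branch Lambert W via Newton iteration, solving w * e^w = x.

    Valid for x >= -1/e (the real domain of the principal branch). This is
    an illustrative implementation; a production system would typically
    call scipy.special.lambertw instead.
    """
    if x < -1.0 / math.e:
        raise ValueError("outside the real domain of the principal branch")
    # rough starting guess, then Newton's method on f(w) = w*e^w - x
    w = math.log(1.0 + x) if x > 0 else x * math.exp(-x)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w
```

With W available, any closed-form distance expression of the form d = f(W(g(v))) can be evaluated directly inside the controller loop, which is part of what makes the approach computationally cheap.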
Cyberbullying is an intentional act of harassment in the complex domain of social media, carried out through online information technology. This research experimented with an unsupervised associative approach to text mining to automatically find cyberbullying words and patterns and to extract association rules from a collection of tweets based on the domain's frequent words. Furthermore, this research identifies the relationships between cyberbullying keywords and other cyberbullying words, thus generating knowledge discovery of different cyberbullying word patterns from unstructured tweets. The study revealed that the dominant frequent cyberbullying words are intelligence, personality, and insulting words that describe the behavior and appearance of female victims, together with sex-related words that humiliate female victims. The results suggest that unsupervised associative approaches in text mining can extract important information from unstructured text. Further, applying association rules can help recognize the relationships and meanings between keywords and other words, thereby generating knowledge discovery from different unstructured-text datasets.
The Internet of Things (IoT) has extended internet connectivity to reach not just computers and humans, but most objects in our environment. The IoT has the potential to connect billions of objects simultaneously, improving information sharing and, in turn, our daily lives. Although the IoT's benefits are vast, there are many challenges to adopting it in the real world due to its centralized server/client model. For instance, scalability and security issues arise from the excessive number of IoT objects in the network. The server/client model requires all devices to be connected and authenticated through the server, which creates a single point of failure. Therefore, moving the IoT system onto a decentralized path may be the right decision. One popular decentralization system is blockchain, a powerful technology that decentralizes computation and management processes and can solve many IoT issues, especially security. This paper provides an overview of the integration of blockchain with the IoT, highlighting the benefits and challenges of this integration. Future research directions for blockchain with IoT are also discussed. We conclude that the combination of blockchain and IoT can provide a powerful approach that can significantly pave the way for new business models and distributed applications.
Stock market prediction has become an attractive research topic due to its important role in the economy and the benefits it offers. There is an imminent need to uncover the stock market's future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims at constructing an effective model to predict future stock market trends with a small error ratio and improved prediction accuracy. The prediction model is based on sentiment analysis of financial news and historical stock market prices, and it provides better accuracy than previous studies by considering multiple types of news, related to both the market and the company, alongside historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to get the text polarity using a naïve Bayes algorithm; this step achieved prediction accuracy results ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices to predict future stock prices, improving the prediction accuracy up to 89.80%.
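The first step, naïve Bayes polarity classification of news text, can be sketched in a few lines of stdlib Python; the toy training pairs and function names below are illustrative assumptions, not the paper's data or code.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Fit a multinomial naive Bayes model on tokenized documents.

    docs: list of token lists; labels: parallel list of class names.
    Returns (class priors, per-class word counts, vocabulary).
    """
    classes = set(labels)
    prior = {c: labels.count(c) / len(labels) for c in classes}
    counts = {c: Counter() for c in classes}
    for d, y in zip(docs, labels):
        counts[y].update(d)
    vocab = {w for f in counts.values() for w in f}
    return prior, counts, vocab

def predict_nb(model, doc):
    """Return the most probable class, with Laplace (add-one) smoothing."""
    prior, counts, vocab = model
    best, best_lp = None, float("-inf")
    for c in prior:
        total = sum(counts[c].values())
        lp = math.log(prior[c]) + sum(
            math.log((counts[c][w] + 1) / (total + len(vocab)))
            for w in doc if w in vocab)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

In the paper's pipeline, the polarity predicted here would then be joined with historical prices as an extra feature for the second-stage price model.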
Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although other machine learning options are feasible in most situations, the ease with which neural networks lend themselves to different problems, including pattern recognition, image compression, classification, computer vision, and regression, has earned them a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool to predict the number of times a student repeats a course, considering attributes relating to the course itself, the teacher, and the particular student. Neural networks were used in this work to map the relationship between attributes related to students’ course assessment and the number of times a student will possibly repeat a course before passing. The hope is that the ability to predict students’ performance from such complex relationships can help facilitate the fine-tuning of academic systems and policies implemented in learning environments. To validate the power of neural networks in data mining, a Turkish students’ performance database was used; feedforward and radial basis function networks were trained for this task. The performance of these networks was evaluated in terms of achieved recognition rates and training time.
Addressing scheduling problems with the best graph coloring algorithm has always been very challenging. However, the university timetable scheduling problem can be formulated as a graph coloring problem in which courses are represented as vertices and the presence of common students or teachers between corresponding courses is represented as edges. The problem is then to color the vertices with the fewest possible colors. To accomplish this task, the paper presents a comparative study of the use of graph coloring in university timetable scheduling, using five graph coloring algorithms: First Fit, Welsh-Powell, Largest Degree Ordering, Incidence Degree Ordering, and DSATUR. We take the Military Institute of Science and Technology, Bangladesh, as a test case. The results show that the Welsh-Powell and DSATUR algorithms are the most effective in generating optimal schedules. The study also provides insights into the limitations and advantages of using graph coloring in timetable scheduling and suggests directions for future research with these algorithms.
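Of the five algorithms compared, Welsh-Powell is easy to sketch: sort vertices by degree and greedily assign each color to as many mutually non-adjacent vertices as possible. The adjacency-dict representation below is an assumption about how course-conflict data might be encoded, not the paper's data format.

```python
def welsh_powell(adj):
    """Welsh-Powell greedy coloring on an adjacency dict {vertex: set(neighbors)}.

    Courses are vertices; an edge means two courses share students or a
    teacher and therefore need different timeslots (colors). Returns a
    {vertex: color index} map. Being greedy, it yields an upper bound on
    the chromatic number, not always the optimum.
    """
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)  # by degree, descending
    color = {}
    c = 0
    for v in order:
        if v in color:
            continue
        color[v] = c
        # extend color c to every uncolored vertex with no c-colored neighbor
        for u in order:
            if u not in color and all(color.get(n) != c for n in adj[u]):
                color[u] = c
        c += 1
    return color
```

On a conflict graph, each color class is a set of courses that can safely share a timeslot.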
The proliferation of Web-enabled devices, including desktops, laptops, tablets, and mobile phones, enables people to communicate, participate, and collaborate with each other in various Web communities, viz., forums, social networks, and blogs. Simultaneously, the enormous amount of heterogeneous data generated by the users of these communities offers an unprecedented opportunity to create and employ theories and technologies that search and retrieve relevant data from the huge quantity of information available and then mine it for opinions. Consequently, sentiment analysis, which automatically extracts and analyses the subjectivities and sentiments (or polarities) in written text, has emerged as an active area of research. This paper reviews the substantial research on the subject of sentiment analysis, expounding its basic terminology, tasks, and granularity levels. It further gives an overview of the state of the art, depicting some previous attempts at studying sentiment analysis. Its practical and potential applications are also discussed, followed by the issues and challenges that will keep the field dynamic and lively for years to come.
Climate change, a significant and lasting alteration in global weather patterns, is profoundly impacting the stability and predictability of global temperature regimes. As the world continues to grapple with the far-reaching effects of climate change, accurate and timely temperature predictions have become pivotal to various sectors, including agriculture, energy, and public health. Crucially, precise temperature forecasting assists in developing effective climate change mitigation and adaptation strategies. With the advent of machine learning techniques, we now have powerful tools that can learn from vast climatic datasets and provide improved predictive performance. This study presents a thorough comparative analysis of three such advanced machine learning models, XGBoost, Support Vector Machine (SVM), and Random Forest, in predicting daily maximum and minimum temperatures using a 45-year dataset from Visakhapatnam airport. Each model was rigorously trained and evaluated on key performance metrics, including training loss, Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), R² score, Mean Absolute Percentage Error (MAPE), and Explained Variance Score. Although no single model dominated across all metrics, SVM and Random Forest showed slightly superior performance on several measures. These findings highlight the potential of machine learning techniques in enhancing the accuracy of temperature forecasting, and they stress the importance of selecting an appropriate model and performance metrics aligned with the requirements of the task at hand.
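The error metrics used to evaluate the three models above follow standard formulas, sketched here in stdlib Python; the function name is illustrative, and the MAPE term assumes no zero-valued targets.

```python
def regression_metrics(y_true, y_pred):
    """Compute MAE, MSE, RMSE, R², and MAPE with their standard definitions.

    y_true / y_pred: equal-length sequences of observed and predicted
    values. MAPE divides by each true value, so it assumes no zero targets.
    """
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)              # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    mape = sum(abs(e / t) for e, t in zip(errors, y_true)) / n * 100.0
    return {"MAE": mae, "MSE": mse, "RMSE": mse ** 0.5, "R2": r2, "MAPE": mape}
```

Note that R² compares the model against a constant mean predictor, which is why a model no better than the climatological mean scores near zero.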
Along with the growth of the Internet, social media usage has drastically expanded. As people share their opinions and ideas more frequently on the Internet and through various social media platforms, there has been a notable rise in the number of consumer phrases that contain sentiment data. According to reports, cyberbullying frequently leads to severe emotional and physical suffering, especially among women and young children, and in certain instances sufferers have even attempted suicide. The bully may attempt to destroy any evidence held against them, and even if the victim retains the evidence, obtaining justice can still take a long time. This work used OCR, NLP, and machine learning to design and implement a practical method for recognizing cyberbullying in images. Eight classifier techniques are compared for accuracy using two key feature representations, the Bag-of-Words (BoW) model and TF-IDF. These classifiers are used to understand and recognize bullying behaviors. Testing the suggested method on the cyberbullying dataset showed that linear SVC and logistic regression applied after OCR perform best, achieving an accuracy of 96 percent. This study provides a good outline, with design and implementation details, of methods for detecting online bullying from a screenshot.
Non-functional requirements define the quality attributes of a software application, which need to be identified in the early stages of the software development life cycle. Researchers have proposed automatic software non-functional requirement classification using several Machine Learning (ML) algorithms combined with various vectorization techniques. However, which combination works best for non-functional requirement classification remains unclear. In this paper, we examined whether different combinations of feature extraction techniques and ML algorithms varied in non-functional requirements classification performance, and we report the best approach for classifying non-functional requirements. We conducted the comparative analysis on the publicly available PROMISE_exp dataset containing labelled functional and non-functional requirements. Initially, we normalized the textual requirements from the dataset; then we extracted features through Bag of Words (BoW), Term Frequency and Inverse Document Frequency (TF-IDF), Hashing, and Chi-Squared vectorization methods. Finally, we executed the 15 most popular ML algorithms to classify the requirements. The novelty of this work is the empirical analysis to find the best combination of ML classifier and vectorization technique, which helps developers detect non-functional requirements early and take precise steps. We found that the linear support vector classifier with TF-IDF outperforms all other combinations, with an F1-score of 81.5%.
Predicting student performance plays a vital role in the educational sector, as analyzing a student's status helps drive improvement. Applying data mining concepts and algorithms to the field of education is known as Educational Data Mining. Machine learning algorithms are now useful in almost every field, and many researchers have relied on them alone. In this paper we propose a student performance prediction system using a Deep Neural Network. We trained and tested models on a Kaggle dataset using several algorithms, namely Decision Tree (C5.0), Naïve Bayes, Random Forest, Support Vector Machine, K-Nearest Neighbor, and a Deep Neural Network, in R, and compared their accuracy. Among the six algorithms, the Deep Neural Network performed best with an accuracy of 84%.
Cyberbullying is an intentional act of harassment carried out across the complex domain of social media using online information technology. This research applied an unsupervised associative approach to text mining to automatically find cyberbullying words and patterns and to extract association rules from a collection of tweets based on the domain's frequent words. Furthermore, this research identifies the relationships between cyberbullying keywords and other cyberbullying words, generating knowledge discovery of different cyberbullying word patterns from unstructured tweets. The study revealed that the dominant frequent cyberbullying words concern intelligence, personality, and insults describing the behavior and appearance of female victims, along with sex-related words that humiliate female victims. The results suggest that an unsupervised associative approach to text mining can extract important information from unstructured text, and that applying association rules helps recognize the relationships and meaning between keywords and other words, thereby generating knowledge discovery from different unstructured-text datasets.
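The association-rule idea in this abstract can be illustrated with a toy single-pass miner over tokenized tweets: count co-occurring word pairs, keep those above a minimum support, and emit rules whose confidence clears a threshold. This is a minimal sketch of the technique, not the paper's implementation, and the sample tweets and thresholds are invented for demonstration.

```python
from itertools import combinations
from collections import Counter

def association_rules(tweets, min_support=2, min_conf=0.5):
    """Mine pairwise rules A -> B from tokenized tweets (a toy Apriori-style pass)."""
    item_count = Counter(t for tw in tweets for t in set(tw))
    pair_count = Counter(p for tw in tweets
                         for p in combinations(sorted(set(tw)), 2))
    rules = []
    for (a, b), n in pair_count.items():
        if n < min_support:          # support: pair must co-occur often enough
            continue
        for x, y in ((a, b), (b, a)):
            conf = n / item_count[x]  # confidence of rule x -> y
            if conf >= min_conf:
                rules.append((x, y, conf))
    return rules

tweets = [["ugly", "stupid"], ["ugly", "loser"], ["ugly", "stupid", "loser"]]
rules = association_rules(tweets)
# every tweet containing "stupid" also contains "ugly", so confidence is 1.0
assert ("stupid", "ugly", 1.0) in rules
```

A rule such as "stupid → ugly" with high confidence is exactly the kind of keyword relationship the study surfaces from unstructured tweets.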
Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although other machine learning options are feasible in most situations, the ease with which neural networks lend themselves to problems including pattern recognition, image compression, classification, computer vision, and regression has earned them a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool to predict the number of times a student repeats a course, considering attributes relating to the course itself, the teacher, and the particular student. Neural networks were used to map the relationship between attributes of students' course assessments and the number of times a student will likely repeat a course before passing. The hope is that the ability to predict students' performance from such complex relationships can help fine-tune academic systems and policies implemented in learning environments. To validate the power of neural networks in data mining, a Turkish students' performance database was used; feedforward and radial basis function networks were trained for this task. The performance of these networks was evaluated in terms of achieved recognition rates and training time.
Addressing scheduling problems with the best graph coloring algorithm has always been challenging. The university timetable scheduling problem can be formulated as a graph coloring problem in which courses are represented as vertices and the presence of common students or teachers between two courses is represented as an edge; the problem is then to color the vertices with the fewest possible colors. To accomplish this task, the paper presents a comparative study of graph coloring in university timetable scheduling using five algorithms: First Fit, Welsh-Powell, Largest Degree Ordering, Incidence Degree Ordering, and DSATUR. We took the Military Institute of Science and Technology, Bangladesh as a test case. The results show that the Welsh-Powell and DSATUR algorithms are the most effective at generating optimal schedules. The study also provides insights into the limitations and advantages of graph coloring for timetable scheduling and suggests directions for future research with these algorithms.
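The course-conflict formulation can be made concrete with a small Welsh-Powell implementation: sort vertices by decreasing degree, then repeatedly sweep the list, giving the current color to every uncolored vertex not adjacent to one already holding it. The tiny conflict graph below is invented for illustration; colors correspond to timeslots.

```python
def welsh_powell(graph):
    """Welsh-Powell coloring: one sweep per color over vertices
    sorted by decreasing degree. graph maps vertex -> set of neighbors."""
    order = sorted(graph, key=lambda v: len(graph[v]), reverse=True)
    color = {}
    c = 0
    for v in order:
        if v in color:
            continue
        color[v] = c
        # Give color c to every later uncolored vertex with no neighbor in c.
        for u in order:
            if u not in color and all(color.get(w) != c for w in graph[u]):
                color[u] = c
        c += 1
    return color

# Courses as vertices; an edge means shared students, hence different timeslots.
conflicts = {
    "Math": {"Physics", "CS"},
    "Physics": {"Math", "CS"},
    "CS": {"Math", "Physics", "English"},
    "English": {"CS"},
}
slots = welsh_powell(conflicts)
assert len(set(slots.values())) == 3  # the Math/Physics/CS triangle forces 3 slots
```

DSATUR differs only in the ordering heuristic: it picks the next vertex by how many distinct colors already appear among its neighbors, which is why the two often tie on real timetables.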
Stock market prediction has become an attractive research topic due to its important role in the economy and its beneficial offers. There is an imminent need to uncover the stock market's future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims to construct an effective model that predicts stock market trends with a small error ratio and improved prediction accuracy. The prediction model is based on sentiment analysis of financial news and historical stock market prices, and it achieves better accuracy than previous studies by considering multiple types of news, related both to the market and to individual companies, alongside historical stock prices. A dataset containing stock prices from three companies is used. The first step analyzes news sentiment to obtain text polarity using the naïve Bayes algorithm, achieving prediction accuracy ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices to predict future stock prices, improving prediction accuracy up to 89.80%.
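The polarity step described here, naïve Bayes over news text, can be sketched as a multinomial classifier with Laplace smoothing. The class below is a from-scratch illustration of the technique, and the four training headlines and their labels are invented; the paper's pipeline would train on its actual financial-news corpus.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial naive Bayes with Laplace smoothing for text polarity."""
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.prior = Counter(labels)
        self.word_counts = defaultdict(Counter)
        for doc, y in zip(docs, labels):
            self.word_counts[y].update(doc)
        self.vocab = {t for c in self.word_counts.values() for t in c}
        return self

    def predict(self, doc):
        def log_score(c):
            total = sum(self.word_counts[c].values())
            s = math.log(self.prior[c] / sum(self.prior.values()))
            for t in doc:
                # Laplace (add-one) smoothing avoids zero probabilities.
                s += math.log((self.word_counts[c][t] + 1)
                              / (total + len(self.vocab)))
            return s
        return max(self.classes, key=log_score)

train = [
    ("shares surge after strong earnings".split(), "pos"),
    ("profit beats forecast".split(), "pos"),
    ("stock plunges on weak outlook".split(), "neg"),
    ("losses widen amid slump".split(), "neg"),
]
nb = NaiveBayes().fit([d for d, _ in train], [y for _, y in train])
assert nb.predict("strong profit".split()) == "pos"
```

The resulting polarity label is then one feature, alongside historical prices, in the second-stage price predictor.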
Agricultural development is a critical strategy for promoting prosperity and addressing the challenge of feeding nearly 10 billion people by 2050. Plant diseases can significantly impact food production, reducing both quantity and diversity. Therefore, early detection of plant diseases through automatic, deep-learning-based detection methods can improve food production quality and reduce economic losses. While previous models have been implemented for a single type of plant to ensure high accuracy, they require high-quality images for proper classification and are not effective with low-resolution images. To address these limitations, this paper proposes the use of pre-trained models based on convolutional neural networks (CNN) for plant disease detection. The focus is on fine-tuning the hyperparameters of popular pre-trained models, such as EfficientNetV2S, to achieve higher accuracy in detecting plant diseases in images with lower resolution, crowded and misleading backgrounds, shadows on leaves, different textures, and changes in brightness. The study utilized the Plant Diseases Dataset, which includes infected and uninfected crop leaves across 38 classes. To improve the adaptability and robustness of our neural networks, we intentionally exposed them to a deliberately noisy training dataset, produced by modifying the Plant Diseases Dataset to better suit the demands of our training process. Our objective was to enhance the network's ability to generalize effectively and perform robustly in real-world scenarios. This approach is a critical step toward our study's overarching goal of advancing plant disease detection, especially in challenging conditions, and underscores the importance of dataset optimization in deep learning applications.
Cloud computing refers to a sophisticated technology that manipulates data on internet-based servers dynamically and efficiently. The utilization of cloud computing has increased rapidly because of its scalability, accessibility, and incredible flexibility. Dynamic usage and process-sharing facilities require task scheduling, a prominent issue that plays a significant role in building an optimal cloud computing environment. Round robin is generally an efficient task scheduling algorithm with a powerful impact on the performance of the cloud computing environment. This paper introduces a new round robin based task scheduling approach suitable for the cloud. The proposed algorithm determines the time quantum dynamically, based on the differences among the three maximum burst times of tasks in the ready queue in each round; the distinguishing part of the proposed method is combining these differences and the processes' burst times additively when determining the time quantum. The experimental results showed that the proposed approach enhances the performance of round robin task scheduling by reducing average turnaround time, diminishing average waiting time, and minimizing the number of context switches. Moreover, a comparative study showed that the proposed approach outperforms several similar existing round robin approaches. Based on the experiments and the comparative study, it can be concluded that the proposed dynamic round robin scheduling algorithm is comparatively better, acceptable, and optimal for the cloud environment.
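A round robin scheduler with a per-round dynamic quantum can be simulated in a few lines. The abstract does not give the exact additive formula, so the quantum rule below (sum of the differences among the three largest remaining burst times, falling back to the largest burst when all are equal) is an illustrative assumption, not the paper's rule; the burst times are invented, and all tasks are assumed to arrive at time zero.

```python
def dynamic_quantum(remaining):
    """Illustrative quantum: add the differences among the three largest
    remaining bursts (an assumption; the paper's exact formula may differ)."""
    top = sorted(remaining, reverse=True)[:3]
    while len(top) < 3:
        top.append(top[-1])
    q = (top[0] - top[1]) + (top[1] - top[2])
    return q if q > 0 else top[0]

def round_robin(bursts, quantum_fn):
    """Simulate round robin with a quantum recomputed each round.
    Returns (avg turnaround, avg waiting, number of preemptions)."""
    remaining = dict(enumerate(bursts))
    finish = {}
    t, switches = 0, 0
    while remaining:
        q = quantum_fn(list(remaining.values()))  # dynamic quantum per round
        for pid in list(remaining):
            run = min(q, remaining[pid])
            t += run
            remaining[pid] -= run
            if remaining[pid] == 0:
                del remaining[pid]
                finish[pid] = t
            else:
                switches += 1  # task preempted, will run again next round
    n = len(bursts)
    turnaround = [finish[i] for i in range(n)]          # arrival at t = 0
    waiting = [finish[i] - bursts[i] for i in range(n)]
    return sum(turnaround) / n, sum(waiting) / n, switches

avg_tat, avg_wait, switches = round_robin([24, 3, 3], dynamic_quantum)
assert switches == 1  # the adaptive quantum leaves only one preemption here
```

Recomputing the quantum each round is what lets the scheduler adapt to the mix of long and short tasks currently in the ready queue, which is the mechanism behind the reported reductions in waiting time and context switches.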
The Internet of Things (IoT) has extended internet connectivity beyond computers and humans to reach most of the things in our environment. The IoT has the potential to connect billions of objects simultaneously, improving information sharing in ways that improve our lives. Although the IoT's benefits are unlimited, many challenges face its adoption in the real world because of its centralized server/client model; for instance, scalability and security issues arise from the excessive number of IoT objects in the network. The server/client model requires all devices to be connected and authenticated through the server, which creates a single point of failure. Therefore, moving the IoT system onto a decentralized path may be the right decision. One popular decentralization system is blockchain, a powerful technology that decentralizes computation and management processes and can solve many IoT issues, especially security. This paper provides an overview of the integration of blockchain with the IoT, highlighting the benefits and challenges of integration. Future research directions for blockchain with IoT are also discussed. We conclude that the combination of blockchain and IoT can provide a powerful approach that significantly paves the way for new business models and distributed applications.