International Journal of Intelligent Systems and Applications (IJISA)

ISSN: 2074-904X (Print)

ISSN: 2074-9058 (Online)

DOI: https://doi.org/10.5815/ijisa

Website: https://www.mecs-press.org/ijisa

Published By: MECS Press

Frequency: 6 issues per year

Number(s) Available: 141


IJISA is committed to bridging the theory and practice of intelligent systems. From innovative ideas to specific algorithms and full system implementations, IJISA publishes original, peer-reviewed, high-quality articles in the areas of intelligent systems. IJISA is a well-indexed scholarly journal and provides indispensable reading and references for people working at the cutting edge of intelligent systems and applications.

 

IJISA has been abstracted or indexed by several world-class databases: Scopus, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.

Latest Issue
Most Viewed
Most Downloaded

IJISA Vol. 18, No. 1, Feb. 2026

REGULAR PAPERS

Monitoring Phase Increments in Fuel Supply Adjustment Based on Correlation Analysis of Indirect Measurement Data

By Oleksandr Yenikieiev Fatima Yevsyukova Borysenko Anatolii Dmytro Zakharenkov Hanna Martyniuk Dauriya Zhaksigulova

DOI: https://doi.org/10.5815/ijisa.2026.01.01, Pub. Date: 8 Feb. 2026

It is proposed to use correlation analysis methods to process indirect measurement data when monitoring the incremental cylinder phase delays referenced to the first cylinder. A method is proposed for restoring the optimal parameters of stratified charge delivery to the combustion chambers of piston-type internal combustion engines. The conceptual framework for the development of software and hardware systems incorporating feedback control based on the state of the measurement signal, represented by crankshaft rotational irregularities, has been established. A deterministic mathematical model representing the torque transmission architecture of the powertrain is formulated as a mechanical system with three degrees of freedom, taking into account energy dissipation due to friction. The motions of the masses of the mathematical model are described by a deterministic system of linear differential equations. The parameters of these equations are normalised using the theorems and methods of similarity theory. The Laplace transform under zero initial conditions is applied to solve the resulting system of differential equations. Using the method of determinants and the Mathcad software environment, the information links between the cylinder torques and the signal of uneven rotation of the first crankshaft mass were established. In the Matlab software environment, special points were identified, and a simplified representation of the torque transfer functions was obtained from their analysis. The cylinder torques were approximated by a truncated Fourier series in Mathcad. A computational scheme was developed for simulating deterministic signals characterizing the rotational irregularity of the first crankshaft mass. The additive disturbance superimposed on the measurement signal is modeled as structured white noise with a frequency spectrum constrained to ten harmonic components.
Within the computer modeling framework, the output signal generation utilizes an approach based on the regulation of information pathway lengths in neural network structures to define the gain coefficients corresponding to the aggregated torque amplitudes of individual cylinders. For the first time, an auxiliary algorithm was developed to monitor incremental phase delays of the cylinders relative to the reference (first) cylinder by calculating the mutual correlation function between the rotational irregularity signal of the first crankshaft mass and the torque output of the first cylinder. The software application for calculating the mutual correlation function is implemented in Mathcad. Analysis of the mutual correlation function graph identified three distinct maxima. The initial peak of the computed mutual correlation function corresponds to the phase associated with the nominal torque generation of the second combustion chamber. The second peak reflects the standard torque phase of the third chamber, while the third peak indicates the reference phase for the first combustion unit. Furthermore, the proportional values of these maxima align with the gain factors assigned to each cylinder's torque in the computational summation scheme. The cross-correlation between the processed measurement signal and the torque signal of the first cylinder was evaluated under conditions of additive stochastic interference. Analysis of the correlation curve demonstrates that a measurement uncertainty of approximately 14% in the rotational non-uniformity of the primary crankshaft mass does not preclude the effective application of correlation analysis techniques for phase shift tracking in the fuel delivery timing across engine cylinders.
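
As a hedged illustration of the core signal-processing step, the following Python sketch (not the authors' code) computes a mutual correlation function between a simulated rotational-irregularity signal and a reference torque waveform, then locates its maxima; the sampling rate, harmonic content, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 1000.0                                    # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)

# Stand-ins: the first cylinder's torque waveform and a measurement signal that
# superimposes phase-shifted copies of it (the other cylinders) plus noise.
torque_1 = np.cos(2 * np.pi * 25 * t)
signal = (torque_1
          + 0.8 * np.cos(2 * np.pi * 25 * t - 1.2)
          + 0.6 * np.cos(2 * np.pi * 25 * t - 2.4)
          + 0.05 * np.random.default_rng(0).normal(size=t.size))

# Mutual (cross-) correlation function; its peak lags estimate phase increments.
xcorr = np.correlate(signal - signal.mean(), torque_1 - torque_1.mean(), "full")
lags = np.arange(-t.size + 1, t.size) / fs
peaks, _ = find_peaks(xcorr, height=0.5 * xcorr.max())
print("candidate phase-delay lags (s):", lags[peaks][:5])
```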

Personalized Cardiovascular Risk Reduction: A Hybrid Recommendation Approach Using Generative Adversarial Networks and Machine Learning

By Arundhati Uplopwar Rashmi Vashisth Arvinda Kushwaha

DOI: https://doi.org/10.5815/ijisa.2026.01.02, Pub. Date: 8 Feb. 2026

Cardiovascular disease (CVD) is a leading cause of death worldwide and hence requires early risk assessment and focused preventative measures. The study describes a novel two-phase hybrid approach that combines machine learning-based CVD risk prediction with personalized lifestyle advice. In the first phase, cardiovascular risk is estimated using an ensemble classifier that combines a Random Forest classifier, SVM, and LR through a meta-learner; trained on the Heart Disease dataset (1,000 records, 14 attributes), it achieves excellent predictive accuracy. In the second phase, an optimization framework produces lifestyle suggestions that are safe for health within clinically determined parameters, enhanced by a hybrid recommendation system that combines content-based filtering and cluster-based outcome analysis. The suggested approach considerably outperformed a baseline of general lifestyle recommendations in a simulated high-risk cohort, exhibiting an average relative risk reduction of [X]% over a 10-year period as determined by the Framingham Risk Score. The approach is designed to be validated in future research using external datasets, simulated patient trials, and physician evaluation in order to guarantee clinical relevance. This methodology highlights the promise of precision cardiovascular prevention by providing personalized, data-driven lifestyle recommendations.
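
A minimal sketch of the first phase's stacking idea, assuming scikit-learn and a synthetic stand-in for the Heart Disease dataset; the base learners and meta-learner follow the abstract, but every hyperparameter is illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the Heart Disease dataset (1,000 records, 14 attributes).
X, y = make_classification(n_samples=1000, n_features=14, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),  # the meta-learner
)
ensemble.fit(X_tr, y_tr)
print("hold-out accuracy:", round(ensemble.score(X_te, y_te), 3))
```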

Non-invasive A, B and O Blood Group Identification from Ocular Images Using a Hybrid Multi-modal Deep Learning Approach

By Venkatesh Koreddi Kattupalli Sudhakar M. Lavanya G. Gayathri K. Sri Venkata Naga Gowri Deepika K. Naga Surya Sabari Prasad G. Balasri Lakshmi Vishnupriya

DOI: https://doi.org/10.5815/ijisa.2026.01.03, Pub. Date: 8 Feb. 2026

Traditional blood group identification methods, such as serological testing or fingerprint-based biometric analysis, require physical contact, specialized equipment, and laboratory processing. To remove these limitations, this study proposes a novel, completely contact-free approach to determining A, B, and O blood groups using ocular image analysis. Unlike previous methods that rely on fingerprint or vein patterns, our technique takes advantage of iris color, conjunctival vasculature, limbal ring intensity, and other ocular features to classify blood group types. A custom dataset of 3,000 eye images was collected from diverse demographics under different lighting conditions. The key features were extracted using hyperspectral imaging and deep learning-based segmentation. We introduce a hybrid multi-modal attention network (HMAN), which integrates transformer-based spatial encoding, convolutional feature extraction, and self-attention mechanisms to enhance classification accuracy. The proposed model obtained 97.1% accuracy, outperforming ResNet-50 (92.3%) and KushalNet-B4 (94.5%). Ablation studies confirmed that multi-modal feature fusion improves discriminatory capacity for blood group-specific patterns.
This work establishes the first AI-driven, non-invasive blood group detection framework, with potential applications in emergency medicine, blood donor screening, and biometric diagnostics. Future research will focus on real-time deployment, dataset expansion, and multi-modal physiological feature integration to improve robustness. Our findings represent a major advancement in contact-free medical diagnosis, paving the way for AI-enhanced hematological classification.
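
The HMAN architecture is not reproduced here; the PyTorch sketch below only illustrates the general pattern the abstract describes, fusing convolutional feature extraction with self-attention for a three-class (A/B/O) output. All dimensions and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Toy fusion of convolutional features with self-attention."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, dim, 3, padding=1), nn.ReLU())
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 3)              # A, B, O logits

    def forward(self, x):                          # x: (B, 3, H, W) ocular image
        f = self.conv(x)                           # local convolutional features
        tokens = f.flatten(2).transpose(1, 2)      # (B, H*W, dim) token sequence
        attended, _ = self.attn(tokens, tokens, tokens)  # global self-attention
        return self.head(attended.mean(dim=1))     # pooled class logits

print(FusionBlock()(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 3])
```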

A Novel Hybrid Model for Brain Tumor Analysis Using Dual Attention AtroDense U-Net and Auction Optimized LSTM Network

By S. K. Rajeev M. Pallikonda Rajasekaran R. Kottaimalai T. Arunprasath Nisha A.V. Abdul Khader Jilani Saudagar

DOI: https://doi.org/10.5815/ijisa.2026.01.04, Pub. Date: 8 Feb. 2026

Timely identification of brain tumors helps improve treatment outcomes and reduces mortality. Accurate and non-invasive diagnostic tools for segmenting and classifying tumor regions in brain MRI scans are crucial for minimizing the need for surgical biopsies. This study builds a deep learning model for tumor segmentation and classification, aiming at high accuracy and efficiency. A Gaussian bilateral filter is used for noise reduction and to improve MRI image quality. Tumor segmentation is performed using an advanced U-Net model, the Dual Attention AtroDense U-Net (DA-AtroDense U-Net), which integrates dense connections, atrous convolution, and attention mechanisms to preserve spatial detail and improve boundary localization. Texture-based radiomic features are subsequently extracted from the segmented tumor region using the Kirsch Edge Detector (KED) and Gray-Level Co-occurrence Matrix (GLCM), then refined through feature selection with the Cat-and-Mouse Optimization (CMO) algorithm to reduce redundancy. Tumor classification employs an Auction-Optimized Hybrid LSTM Network (AOHLN). Evaluated on the BraTS 2019 and 2020 datasets, the developed model achieved a Dice Similarity Coefficient of 0.9907 and a Jaccard Index of 0.9816 for segmentation and an overall accuracy of 98.99% for classification, highlighting its potential as a dependable and non-invasive diagnostic solution.
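
For the radiomic-feature step, here is a brief scikit-image sketch of GLCM texture extraction on a random stand-in for a segmented tumor region; the distances, angles, and chosen properties are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Random stand-in for a segmented tumor ROI (8-bit grayscale).
region = (np.random.rand(64, 64) * 255).astype(np.uint8)

glcm = graycomatrix(region, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```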

Parameter Optimisation of Type 1 and Interval Type 2 Fuzzy Logic Controllers for Performance Improvement of Industrial Control System

By Desislava R. Stoitseva-Delicheva Snejana T. Yordanova

DOI: https://doi.org/10.5815/ijisa.2026.01.05, Pub. Date: 8 Feb. 2026

Fuzzy logic controllers (FLCs) are gaining popularity for ensuring stable, high-performance control of nonlinear industrial plants with no reliable model, where traditional controllers fail. Their standard expert-based design and simple algorithms, which meet the demands for fast execution and economical use of computational resources, ease their implementation in programmable logic controllers for wide industrial real-time control applications. This research presents a novel approach to enhancing the performance of FLC systems by compensating for the subjectivity inherent in expert-based design through optimization of the parameters of type-1 (T1) and interval type-2 (IT2) PID FLC membership functions (MFs) using genetic algorithms. The approach is demonstrated for controlling the solution level in a carbonization column for soda ash production. Simulations reveal that optimization improves the system performance, measured by a newly introduced overall performance indicator for dynamic accuracy, robustness, and control smoothness, by 48% for the T1 FLC system and 30% for the IT2 FLC system. No improvement is observed from substituting T1 MFs with IT2 MFs for either the empirically designed or the optimised FLCs.
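
A toy sketch of GA-based membership-function tuning in NumPy: a population of triangular MF corner triples evolves against a surrogate fitness. The fitness target, population size, and mutation scale are assumptions; the paper's actual objective is its overall performance indicator for the closed-loop system.

```python
import numpy as np

rng = np.random.default_rng(0)

def tri_mf(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fitness(p):
    # Surrogate objective: match the MF to a desired response shape.
    x = np.linspace(0, 1, 101)
    target = np.exp(-((x - 0.5) / 0.15) ** 2)
    return -np.mean((tri_mf(x, *np.sort(p)) - target) ** 2)

pop = rng.random((40, 3))                        # population of (a, b, c) triples
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]      # truncation selection
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 0.02, (20, 3))
    pop = np.vstack([parents, np.clip(children, 0, 1)])   # mutated offspring

print("optimised MF corners:", np.sort(pop[np.argmax([fitness(p) for p in pop])]))
```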

Depth-guided Hybrid Attention Swin Transformer for Physics-guided Self-supervised Image Dehazing

By Rahul Vishnoi Alka Verma Vibhor Kumar Bhardwaj

DOI: https://doi.org/10.5815/ijisa.2026.01.06, Pub. Date: 8 Feb. 2026

Image dehazing is a critical preprocessing step in computer vision, enhancing visibility in degraded conditions. Conventional supervised methods often struggle with generalization and computational efficiency. This paper introduces a self-supervised image dehazing framework leveraging a depth-guided Swin Transformer with hybrid attention. The proposed hybrid attention explicitly integrates CNN-style channel and spatial attention with Swin Transformer window-based self-attention, enabling simultaneous local feature recalibration and global context aggregation. By integrating a pre-trained monocular depth estimation model and a Swin Transformer architecture with shifted window attention, our method efficiently models global context and preserves fine details. Here, depth is used as a relative structural prior rather than a metric quantity, enabling robust guidance without requiring haze-invariant depth estimation. Experimental results on synthetic and real-world benchmarks demonstrate superior performance, with a PSNR of 23.01 dB and SSIM of 0.879 on the RESIDE SOTS-indoor dataset, outperforming classical physics-based dehazing (DCP) and recent self-supervised approaches such as SLAD, achieving a PSNR gain of 2.52 dB over SLAD and 6.39 dB over DCP. Our approach also significantly improves object detection accuracy by 0.15 mAP@0.5 (+32.6%) under hazy conditions, and achieves near real-time inference (≈35 FPS at 256x256 resolution on a single GPU), confirming the practical utility of depth-guided features. Here, we show that our method achieves an SSIM of 0.879 on SOTS-Indoor, indicating strong structural and color fidelity for a self-supervised dehazing framework.
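
For context on the classical baseline (DCP) that the method is compared against, here is a compact NumPy sketch of dark-channel-prior dehazing under the atmospheric scattering model I = J·t + A·(1 − t); the patch size, omega, and airlight estimate are the usual textbook choices, not values from this paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dcp_dehaze(I, omega=0.95, t0=0.1, patch=15):
    """Recover scene radiance J from I = J*t + A*(1 - t) via the dark channel prior."""
    dark = minimum_filter(I.min(axis=2), size=patch)                   # dark channel
    A = I.reshape(-1, 3)[np.argsort(dark.ravel())[-100:]].max(axis=0)  # airlight
    t = 1.0 - omega * minimum_filter((I / A).min(axis=2), size=patch)  # transmission
    t = np.clip(t, t0, 1.0)
    return np.clip((I - A) / t[..., None] + A, 0.0, 1.0)

hazy = np.random.rand(64, 64, 3)       # stand-in for a hazy RGB image in [0, 1]
print(dcp_dehaze(hazy).shape)          # (64, 64, 3)
```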

Data-driven Classification of Tsunami Evacuation Suitability Using XGBoost: A Case Study in Padang City

By Sularno Sularno Wendi Boy Putri Anggraini Ahmad Kamal Fei Wang

DOI: https://doi.org/10.5815/ijisa.2026.01.07, Pub. Date: 8 Feb. 2026

In this research, we established a machine learning–based model to predict the suitability of tsunami evacuation locations in Padang City through the Extreme Gradient Boosting (XGBoost) method. We trained the model on a new synthetic dataset of 5,000 observations with key geospatial and demographic features such as elevation, distance to coastline, suggested evacuation capacity, surrounding population count, and site area. The analysis process consisted of preprocessing, feature selection utilizing the XGBoost classifier, training and cross-validation of each model, and evaluation through regression as well as classification metrics. The XGBoost model performed best (RMSE = 0.0642, MAE = 0.0418, and accuracy = 93.8%), outperforming the Random Forest, Gradient Boosting Trees, and Logistic Regression models. These findings demonstrate that XGBoost can successfully capture complicated spatial–demographic associations with little overfitting. The residual analysis and the actual-vs-predicted plots also reveal good model calibration and stability. A web prototype was also created to visualize evacuation suitability and facilitate spatial decision making. Although the model is based on simulated data, it offers an extendible and interpretable framework that can be integrated in practical scenarios with field data and operational disaster management systems. To the best of our knowledge, this work represents the first use of the XGBoost algorithm in Indonesia to classify tsunami evacuation sites, and it serves as a new tool for disaster preparedness and coastal evacuation planning.
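
A hedged sketch of the classification step with the xgboost package on synthetic stand-in features named after those in the paper; the feature ranges, label rule, and hyperparameters are all assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(0, 50, n),        # elevation (m)
    rng.uniform(0, 5000, n),      # distance to coastline (m)
    rng.uniform(100, 5000, n),    # suggested evacuation capacity
    rng.uniform(100, 20000, n),   # surrounding population count
    rng.uniform(50, 10000, n),    # site area (m^2)
])
# Toy suitability label standing in for the dataset's ground truth.
y = ((X[:, 0] > 10) & (X[:, 1] > 500) & (X[:, 2] > 0.2 * X[:, 3])).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("accuracy:", round(model.score(X_te, y_te), 3))
```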

Self-adaptive Resource Allocation in Fog-Cloud Systems Using Multi-agent Deep Reinforcement Learning with Meta-learning

By Tapas K. Das Santosh K. Das Swarupananda Bissoyi Deepak K. Patel

DOI: https://doi.org/10.5815/ijisa.2026.01.08, Pub. Date: 8 Feb. 2026

The rapid growth of IoT ecosystems has intensified the complexity of fog–cloud infrastructures, necessitating adaptive and energy-efficient task offloading strategies. This paper proposes MADRL-MAML, a Multi-Agent Deep Reinforcement Learning framework enhanced with Model-Agnostic Meta-Learning for dynamic fog–cloud resource allocation. The approach integrates curriculum learning, centralized attention-based critics, and KL-divergence regularization to ensure stable convergence and rapid adaptation to unseen workloads. A unified cost-based reward formulation is used, where less negative values indicate better joint optimization of energy, latency, and utilization. MADRL-MAML is benchmarked against six baselines (Greedy, Random, Round-Robin, PPO, Federated PPO, and Meta-RL) using consistent energy, latency, utilization, and reward metrics. Across these baselines, performance remains similar: energy (3.64–3.71 J), latency (85.4–86.7 ms), and utilization (0.51–0.54). MADRL-MAML achieves substantially better results with a reward of −21.92 ± 3.88, energy of 1.16 J, latency of 12.80 ms, and utilization of 0.39, corresponding to 68% lower energy and 85% lower latency than Round-Robin. For unseen workloads characterized by new task sizes, arrival rates, and node heterogeneity, the meta-learned variant (MADRL-MAML-Unseen) achieves a reward of −6.50 ± 3.98, energy of 1.14 J, latency of 12.76 ms, and utilization of 0.73, demonstrating strong zero-shot generalization. Experiments were conducted in a realistic simulated environment with 10 fog and 2 cloud nodes, heterogeneous compute capacities, and Poisson task arrivals. Inference latency remains below 5 ms, confirming real-time applicability. Overall, MADRL-MAML provides a scalable and adaptive solution for energy-efficient and latency-aware orchestration in fog–cloud systems.
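
A minimal sketch of what a unified cost-based reward of this kind can look like; the weights below are illustrative assumptions, not the paper's coefficients.

```python
def reward(energy_j, latency_ms, utilization, w_e=1.0, w_l=0.1, w_u=1.0):
    """Cost-based reward: less negative is better. Weights are illustrative."""
    return -(w_e * energy_j + w_l * latency_ms - w_u * utilization)

# Operating points taken from the abstract (the weights remain assumptions).
print(reward(1.16, 12.80, 0.39))   # MADRL-MAML
print(reward(3.70, 86.0, 0.52))    # typical baseline
```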

Enhanced MRI Segmentation and Severity Classification of Parkinson’s Disease Using Hierarchical Diffusion-driven Attention Model

By Redhya M. M. Jayalakshmi Rajermani Thinakaran

DOI: https://doi.org/10.5815/ijisa.2026.01.09, Pub. Date: 8 Feb. 2026

Early identification of Parkinson's disease (PD) from MRI remains challenging due to subtle structural alterations and the complexity of brain tissues. To address these challenges, this paper proposes a hierarchical framework termed the Hierarchical Severity-Adaptive Diffusion Network, composed of three sequentially connected phases, where the output of each phase serves as input to the next for task-specific optimization. In the first phase, a graph diffusion-based convolutional network is employed to extract anatomical and structural features from multi-modal MRI data, enabling accurate segmentation of PD-relevant regions. Phase two introduces an edge-enhanced slice-aware recurrent network that incorporates Wiener filters and Sobel-based edge enhancement to reduce noise and partial volume effects while capturing structural continuity across adjacent MRI slices. Finally, for severity classification, a non-linear severity-adaptive attention network is introduced, which emphasizes discriminative feature deterioration patterns across stages. The model uses the Figshare PD dataset and demonstrates superior performance compared to established models such as DenseNet121, VGG16, ResNet, MobileNet, and Inception-V3, achieving high accuracy (98.67%), precision (0.99), recall (0.98), and F1 score (0.99), indicating its potential as an AI-assisted tool for PD severity assessment using MRI.
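
For the phase-two preprocessing, a brief sketch of Wiener filtering plus Sobel-based edge enhancement with SciPy and scikit-image on a stand-in MRI slice; the window size and blending weight are assumptions.

```python
import numpy as np
from scipy.signal import wiener
from skimage import filters

mri_slice = np.random.rand(128, 128)      # stand-in for one MRI slice in [0, 1]
denoised = wiener(mri_slice, mysize=5)    # Wiener filter suppresses noise
edges = filters.sobel(denoised)           # Sobel operator emphasizes boundaries
enhanced = np.clip(denoised + 0.5 * edges, 0.0, 1.0)  # additive edge enhancement
print(enhanced.shape)
```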

Classification of Medicinal Plant Leaves using Deep Learning Algorithms

By Aruna S. K. Praveen P. Gowtham K. Mohammed Khashif S. Keerthana Jaganathan K. Karthick

DOI: https://doi.org/10.5815/ijisa.2026.01.10, Pub. Date: 8 Feb. 2026

This research explores the automated leaf-based identification of medicinal plants, utilizing machine learning and deep learning techniques to address the crucial need for efficient plant classification. Driven by the vast potential of medicinal plants in pharmaceutical development and healthcare, the study aims to surpass the limitations of existing methodologies through thorough experimentation and comparative analysis. The primary goal is to develop a robust and automated solution for classifying medicinal plants based on leaf morphology. The methodology encompasses acquiring diverse datasets: Set 1 is processed with resizing, rescaling, saturation adjustment, and noise removal, while Set 2 is processed with resizing, rescaling, saturation adjustment, noise removal, and PCA (Principal Component Analysis). The evaluated algorithms include Support Vector Machines (SVM), Convolutional Neural Networks (CNNs), YOLOv8, Vision Transformer (ViT), ResNet, and Artificial Neural Networks (ANN). The study evaluates the effectiveness of each algorithm in plant classification using metrics such as accuracy, recall, precision, and F1 score. Notably, the ResNet model achieved 93.8% and 94.8% accuracy on Set 1 and Set 2, respectively. The SVM model demonstrated 56.5% and 56.6% accuracy on Set 1 and Set 2, while the Vision Transformer (ViT) model achieved 84.9% and 74.4% accuracy, respectively. The CNN model showed high accuracy at 96.7% and 94.8% on Set 1 and Set 2, followed closely by the ANN model with 96.7% and 96.6% accuracy. Lastly, the YOLOv8 model achieved 96.0% and 95.1% accuracy on Set 1 and Set 2, respectively. The comparative analysis identifies CNN and ANN as the top-performing algorithms. This research contributes significantly to the advancement of medicinal plant identification, pharmaceutical research, and environmental conservation efforts, emphasizing the potential of deep learning techniques in addressing complex classification tasks.
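
A short sketch of the tail of the Set 2 pipeline (flattening already resized and rescaled images, then applying PCA) with scikit-learn; the image size and component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in batch of leaf images, already resized to 64x64 RGB and rescaled to [0, 1].
images = np.random.rand(200, 64, 64, 3)
flat = images.reshape(len(images), -1)

pca = PCA(n_components=50)          # Set 2 adds PCA on top of the Set 1 steps
reduced = pca.fit_transform(flat)
print(reduced.shape, "explained variance:",
      round(pca.explained_variance_ratio_.sum(), 2))
```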

Analysis of Cyberbullying Incidence among Filipina Victims: A Pattern Recognition using Association Rule Extraction

By Frederick F. Patacsil

DOI: https://doi.org/10.5815/ijisa.2019.11.05, Pub. Date: 8 Nov. 2019

Cyberbullying is an intentional act of harassment, carried out online through information technology, along the complex domain of social media. This research applied an unsupervised associative text-mining approach to automatically find cyberbullying words and patterns and to extract association rules from a collection of tweets based on the frequent domain words. Furthermore, this research identifies the relationships between cyberbullying keywords and other cyberbullying words, thus generating knowledge discovery of different cyberbullying word patterns from unstructured tweets. The study revealed that the dominant frequent cyberbullying words are intelligence, personality, and insulting words that describe the behavior and appearance of the female victims, as well as sex-related words that humiliate female victims. The results of the study suggest that unsupervised associative approaches in text mining can be utilized to extract important information from unstructured text. Further, applying association rules can be helpful in recognizing the relationships and meaning between keywords and other words, thereby generating knowledge discovery from different unstructured-text datasets.
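
A hedged sketch of the rule-extraction step using the mlxtend implementations of Apriori and association rules on a few hypothetical tokenized tweets; the thresholds are illustrative.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules
from mlxtend.preprocessing import TransactionEncoder

# Hypothetical tokenized tweets (each row = one tweet's keywords).
tweets = [["ugly", "stupid"], ["stupid", "dumb"], ["ugly", "dumb", "stupid"]]
te = TransactionEncoder()
df = pd.DataFrame(te.fit(tweets).transform(tweets), columns=te.columns_)

frequent = apriori(df, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```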

Blockchain with Internet of Things: Benefits, Challenges, and Future Directions

By Hany F. Atlam Ahmed Alenezi Madini O. Alassafi Gary B. Wills

DOI: https://doi.org/10.5815/ijisa.2018.06.05, Pub. Date: 8 Jun. 2018

The Internet of Things (IoT) has extended internet connectivity to reach not just computers and humans, but most of the things in our environment. The IoT has the potential to connect billions of objects simultaneously, improving information sharing and, in turn, our lives. Although the IoT's benefits are almost unlimited, many challenges face its adoption in the real world due to its centralized server/client model; for instance, scalability and security issues arise from the excessive number of IoT objects in the network. The server/client model requires all devices to be connected and authenticated through the server, which creates a single point of failure. Therefore, moving the IoT system onto a decentralized path may be the right decision. One of the popular decentralization systems is blockchain. Blockchain is a powerful technology that decentralizes computation and management processes, and it can solve many IoT issues, especially security. This paper provides an overview of the integration of blockchain with the IoT, highlighting the integration's benefits and challenges. The future research directions of blockchain with IoT are also discussed. We conclude that the combination of blockchain and IoT can provide a powerful approach that can significantly pave the way for new business models and distributed applications.

Predicting Stock Market Behavior using Data Mining Technique and News Sentiment Analysis

By Ayman E. Khedr S.E.Salama Nagwa Yaseen

DOI: https://doi.org/10.5815/ijisa.2017.07.03, Pub. Date: 8 Jul. 2017

Stock market prediction has become an attractive research topic due to its important role in the economy and the benefits it offers. There is an imminent need to uncover the stock market's future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims at constructing an effective model to predict stock market future trends with a small error ratio and improved prediction accuracy. The prediction model is based on sentiment analysis of financial news and historical stock market prices. It provides better accuracy results than previous studies by considering multiple types of news related to the market and the company together with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to get the text polarity using the naïve Bayes algorithm. This step achieved prediction accuracy results ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices to predict future stock prices. This improved the prediction accuracy to up to 89.80%.
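
A minimal sketch of the first step (news polarity via naïve Bayes) with scikit-learn; the headlines and labels are hypothetical stand-ins for the paper's financial news data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled headlines (1 = positive polarity, 0 = negative).
headlines = ["profits surge past forecasts", "shares tumble on weak outlook",
             "record revenue announced", "regulator fines company"]
labels = [1, 0, 1, 0]

polarity = make_pipeline(CountVectorizer(), MultinomialNB())
polarity.fit(headlines, labels)
print(polarity.predict(["revenue beats forecasts"]))   # -> [1]
```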

Data Mining of Students’ Performance: Turkish Students as a Case Study

By Oyebade Kayode Oyedotun Sam Nii Tackie Ebenezer Obaloluwa Olaniyi Khashman Adnan

DOI: https://doi.org/10.5815/ijisa.2015.09.03, Pub. Date: 8 Aug. 2015

Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although other machine learning options are feasible in most situations, the ease with which neural networks lend themselves to different problems, including pattern recognition, image compression, classification, computer vision, and regression, has earned them a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool for predicting the number of times a student repeats a course, considering some attributes relating to the course itself, the teacher, and the particular student. Neural networks were used in this work to map the relationship between attributes related to students' course assessment and the number of times a student will possibly repeat a course before passing. It is hoped that the ability to predict students' performance from such complex relationships can help facilitate the fine-tuning of academic systems and policies implemented in learning environments. To validate the power of neural networks in data mining, a Turkish students' performance database has been used; feedforward and radial basis function networks were trained for this task. The performance of these networks was evaluated in terms of achieved recognition rates and training time.
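
A compact sketch of a radial basis function network built from KMeans centers, Gaussian activations, and a linear readout; the attributes, labels, cluster count, and width parameter are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.random((300, 6))               # stand-in course/teacher/student attributes
y = (X.sum(axis=1) > 3).astype(float)  # stand-in "repeats the course" label

centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_

def rbf_features(X, centers, gamma=4.0):
    """Gaussian basis activations around the learned centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

readout = Ridge().fit(rbf_features(X, centers), y)   # linear output layer
print("train R^2:", round(readout.score(rbf_features(X, centers), y), 2))
```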

Graph Coloring in University Timetable Scheduling

By Swapnil Biswas Syeda Ajbina Nusrat Nusrat Sharmin Mahbubur Rahman

DOI: https://doi.org/10.5815/ijisa.2023.03.02, Pub. Date: 8 Jun. 2023

Addressing scheduling problems with the best graph coloring algorithm has always been very challenging. However, the university timetable scheduling problem can be formulated as a graph coloring problem where courses are represented as vertices and the presence of common students or teachers between corresponding courses is represented as edges. The problem then reduces to coloring the vertices with the fewest possible colors. To accomplish this task, the paper presents a comparative study of the use of graph coloring in university timetable scheduling, employing five graph coloring algorithms: First Fit, Welsh-Powell, Largest Degree Ordering, Incidence Degree Ordering, and DSATUR. We have taken the Military Institute of Science and Technology, Bangladesh, as a test case. The results show that the Welsh-Powell and DSATUR algorithms are the most effective in generating optimal schedules. The study also provides insights into the limitations and advantages of using graph coloring in timetable scheduling and suggests directions for future research with these algorithms.
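
For flavor, a small Python sketch of greedy coloring in Welsh-Powell's largest-degree-first order, with color indices read as timeslots; the course names and conflicts are hypothetical.

```python
def welsh_powell(graph):
    """graph: dict course -> set of conflicting courses (shared students/teachers)."""
    order = sorted(graph, key=lambda v: len(graph[v]), reverse=True)  # by degree
    color = {}
    for v in order:
        used = {color[u] for u in graph[v] if u in color}
        color[v] = next(c for c in range(len(graph)) if c not in used)
    return color                                  # color index = timeslot

conflicts = {"CSE101": {"CSE203"},
             "CSE203": {"CSE101", "MAT110"},
             "MAT110": {"CSE203"}}
print(welsh_powell(conflicts))   # e.g. {'CSE203': 0, 'CSE101': 1, 'MAT110': 1}
```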

Sentiment Analysis: A Perspective on its Past, Present and Future

By Akshi Kumar Teeja Mary Sebastian

DOI: https://doi.org/10.5815/ijisa.2012.10.01, Pub. Date: 8 Sep. 2012

The proliferation of Web-enabled devices, including desktops, laptops, tablets, and mobile phones, enables people to communicate, participate, and collaborate with each other in various Web communities, viz., forums, social networks, and blogs. Simultaneously, the enormous amount of heterogeneous data generated by the users of these communities offers an unprecedented opportunity to create and employ theories and technologies that search and retrieve relevant data from the huge quantity of information available and then mine it for opinions. Consequently, sentiment analysis, which automatically extracts and analyses the subjectivities and sentiments (or polarities) in written text, has emerged as an active area of research. This paper previews and reviews the substantial research on the subject of sentiment analysis, expounding its basic terminology, tasks, and granularity levels. It further gives an overview of the state of the art, depicting some previous attempts at sentiment analysis. Its practical and potential applications are also discussed, followed by the issues and challenges that will keep the field dynamic and lively for years to come.

Machine Learning for Weather Forecasting: XGBoost vs SVM vs Random Forest in Predicting Temperature for Visakhapatnam

By Deep Karan Singh Nisha Rawat

DOI: https://doi.org/10.5815/ijisa.2023.05.05, Pub. Date: 8 Oct. 2023

Climate change, a significant and lasting alteration in global weather patterns, is profoundly impacting the stability and predictability of global temperature regimes. As the world continues to grapple with the far-reaching effects of climate change, accurate and timely temperature predictions have become pivotal to various sectors, including agriculture, energy, public health, and many more. Crucially, precise temperature forecasting assists in developing effective climate change mitigation and adaptation strategies. With the advent of machine learning techniques, we now have powerful tools that can learn from vast climatic datasets and provide improved predictive performance. This study delves into the comparison of three such advanced machine learning models—XGBoost, Support Vector Machine (SVM), and Random Forest—in predicting daily maximum and minimum temperatures using a 45-year dataset of Visakhapatnam airport. Each model was rigorously trained and evaluated on key performance metrics including training loss, Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), R2 score, Mean Absolute Percentage Error (MAPE), and Explained Variance Score. Although no single model clearly dominated across all metrics, SVM and Random Forest showed slightly superior performance on several measures. These findings not only highlight the potential of machine learning techniques in enhancing the accuracy of temperature forecasting but also stress the importance of selecting an appropriate model and performance metrics aligned with the requirements of the task at hand. This research accomplishes a thorough comparative analysis, conducts a rigorous evaluation of the models, and highlights the significance of model selection.
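
A brief sketch of computing the listed evaluation metrics with scikit-learn; the temperature values are hypothetical.

```python
import numpy as np
from sklearn.metrics import (explained_variance_score, mean_absolute_error,
                             mean_absolute_percentage_error, mean_squared_error,
                             r2_score)

y_true = np.array([31.2, 30.8, 33.1, 29.5])   # hypothetical daily maxima (deg C)
y_pred = np.array([30.9, 31.4, 32.6, 29.9])

mse = mean_squared_error(y_true, y_pred)
print("MAE :", mean_absolute_error(y_true, y_pred))
print("MSE :", mse)
print("RMSE:", np.sqrt(mse))
print("R2  :", r2_score(y_true, y_pred))
print("MAPE:", mean_absolute_percentage_error(y_true, y_pred))
print("EVS :", explained_variance_score(y_true, y_pred))
```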

Machine Learning in Cyberbullying Detection from Social-Media Image or Screenshot with Optical Character Recognition

By Tofayet Sultan Nusrat Jahan Ritu Basak Mohammed Shaheen Alam Jony Rashidul Hasan Nabil

DOI: https://doi.org/10.5815/ijisa.2023.02.01, Pub. Date: 8 Apr. 2023

Along with the growth of the Internet, social media usage has drastically expanded. As people share their opinions and ideas more frequently on the Internet and through various social media platforms, there has been a notable rise in the number of consumer phrases that contain sentiment data. According to reports, cyberbullying frequently leads to severe emotional and physical suffering, especially in women and young children; in certain instances, it has even been reported that sufferers attempt suicide. The bully may occasionally attempt to destroy any evidence they believe could implicate them, and even if the victim retains the evidence, it can still take a long time to obtain justice. This work used OCR, NLP, and machine learning to detect cyberbullying in images, in order to design and implement a practical method for recognizing cyberbullying from screenshots. Eight classifier techniques are compared for accuracy against two key feature representations, the BoW model and TF-IDF. These classifiers are used to understand and recognize bullying behaviors. Testing the suggested method on the cyberbullying dataset showed that linear SVC after OCR and logistic regression perform best, achieving an accuracy of 96 percent. This study helps provide an outline that shapes methods for detecting online bullying from a screenshot, with design and implementation details.
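
A hedged end-to-end sketch: OCR via pytesseract (which requires the Tesseract binary) followed by a TF-IDF plus linear SVC classifier; "screenshot.png", the training texts, and the labels are hypothetical placeholders.

```python
import pytesseract                      # needs the Tesseract OCR engine installed
from PIL import Image
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labelled training texts (1 = bullying, 0 = benign).
texts = ["you are worthless", "nice photo!", "nobody likes you", "great job team"]
labels = [1, 0, 1, 0]
clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, labels)

extracted = pytesseract.image_to_string(Image.open("screenshot.png"))
print("bullying" if clf.predict([extracted])[0] else "benign")
```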

Non-Functional Requirements Classification Using Machine Learning Algorithms

By Abdur Rahman Abu Nayem Saeed Siddik

DOI: https://doi.org/10.5815/ijisa.2023.03.05, Pub. Date: 8 Jun. 2023

Non-functional requirements define the quality attributes of a software application and need to be identified in the early stages of the software development life cycle. Researchers have proposed automatic classification of software non-functional requirements using several Machine Learning (ML) algorithms combined with various vectorization techniques. However, which combination performs best for non-functional requirement classification remains unclear. In this paper, we examined whether different combinations of feature extraction techniques and ML algorithms vary in non-functional requirements classification performance. We also report the best approach for classifying non-functional requirements. We conducted the comparative analysis on the publicly available PROMISE_exp dataset containing labelled functional and non-functional requirements. Initially, we normalized the textual requirements from the dataset; we then extracted features through Bag of Words (BoW), Term Frequency and Inverse Document Frequency (TF-IDF), Hashing, and Chi-Squared vectorization methods. Finally, we executed the 15 most popular ML algorithms to classify the requirements. The novelty of this work is the empirical analysis to find the best combination of ML classifier and vectorization technique, which helps developers detect non-functional requirements early and take precise steps. We found that the combination of a linear support vector classifier and TF-IDF outperforms all other combinations, with an F1-score of 81.5%.
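
A small sketch comparing vectorizers with a linear support vector classifier via cross-validation, in the spirit of the paper's empirical analysis; the requirement sentences and labels are hypothetical, and the Chi-Squared variant is omitted for brevity.

```python
from sklearn.feature_extraction.text import (CountVectorizer, HashingVectorizer,
                                             TfidfVectorizer)
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labelled non-functional requirements.
reqs = ["the system shall encrypt all stored passwords",
        "login shall lock after five failed attempts",
        "search results shall appear within 2 seconds",
        "page load time shall not exceed 1 second",
        "the interface shall follow the corporate style guide",
        "menus shall be operable with a keyboard only"]
y = ["security", "security", "performance", "performance",
     "usability", "usability"]

for name, vec in [("BoW", CountVectorizer()), ("TF-IDF", TfidfVectorizer()),
                  ("Hashing", HashingVectorizer(n_features=2 ** 10))]:
    pipe = make_pipeline(vec, LinearSVC())
    score = cross_val_score(pipe, reqs, y, cv=2, scoring="f1_macro").mean()
    print(f"{name}: macro-F1 = {score:.3f}")
```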

A New EEG Acquisition Protocol for Biometric Identification Using Eye Blinking Signals

By Mohammed Abo-Zahhad Abo-Zeid Sabah M. Ahmed Sherif N. Abbas

DOI: https://doi.org/10.5815/ijisa.2015.06.05, Pub. Date: 8 May 2015

In this paper, a new acquisition protocol is adopted for identifying individuals from electroencephalogram signals based on eye blinking waveforms. For this purpose, a database of 10 subjects was collected using the Neurosky Mindwave headset. The eye blinking signal is then extracted from the brain wave recordings and used for the identification task. The feature extraction stage fits the extracted eye blinks to an auto-regressive model. Two algorithms are implemented for auto-regressive modeling, namely the Levinson-Durbin and Burg algorithms. Discriminant analysis is then adopted as the classification scheme; linear and quadratic discriminant functions are tested and compared. Using the Burg algorithm with linear discriminant analysis, the proposed system can identify subjects with a best accuracy of 99.8%. The results obtained in this paper confirm that the eye blinking waveform carries discriminant information and is therefore appropriate as a basis for person identification methods.
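
A self-contained NumPy sketch of the Levinson-Durbin recursion for fitting an auto-regressive model to an eye-blink-like signal; the resulting coefficient vector is the kind of feature that would feed the discriminant classifier. The test signal and model order are assumptions.

```python
import numpy as np

def levinson_durbin(x, order):
    """Estimate AR coefficients of x via the Levinson-Durbin recursion."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size   # autocorrelation
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for k in range(1, order + 1):
        lam = -np.dot(a[:k], r[k:0:-1]) / err        # reflection coefficient
        a[:k + 1] += lam * a[:k + 1][::-1]           # order-k coefficient update
        err *= (1.0 - lam ** 2)                      # prediction error power
    return a[1:], err

blink = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * np.random.randn(200)
coeffs, noise_power = levinson_durbin(blink, order=8)
print(coeffs.round(3))                               # feature vector for LDA/QDA
```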

Analysis of Cyberbullying Incidence among Filipina Victims: A Pattern Recognition using Association Rule Extraction

By Frederick F. Patacsil

DOI: https://doi.org/10.5815/ijisa.2019.11.05, Pub. Date: 8 Nov. 2019

Cyberbullying is an intentional act of harassment, carried out online through information technology, along the complex domain of social media. This research applied an unsupervised associative text-mining approach to automatically find cyberbullying words and patterns and to extract association rules from a collection of tweets based on the frequent domain words. Furthermore, this research identifies the relationships between cyberbullying keywords and other cyberbullying words, thus generating knowledge discovery of different cyberbullying word patterns from unstructured tweets. The study revealed that the dominant frequent cyberbullying words are intelligence, personality, and insulting words that describe the behavior and appearance of the female victims, as well as sex-related words that humiliate female victims. The results of the study suggest that unsupervised associative approaches in text mining can be utilized to extract important information from unstructured text. Further, applying association rules can be helpful in recognizing the relationships and meaning between keywords and other words, thereby generating knowledge discovery from different unstructured-text datasets.

Data Mining of Students’ Performance: Turkish Students as a Case Study

By Oyebade Kayode Oyedotun Sam Nii Tackie Ebenezer Obaloluwa Olaniyi Khashman Adnan

DOI: https://doi.org/10.5815/ijisa.2015.09.03, Pub. Date: 8 Aug. 2015

Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although other machine learning options are feasible in most situations, the ease with which neural networks lend themselves to different problems, including pattern recognition, image compression, classification, computer vision, and regression, has earned them a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool for predicting the number of times a student repeats a course, considering some attributes relating to the course itself, the teacher, and the particular student. Neural networks were used in this work to map the relationship between attributes related to students' course assessment and the number of times a student will possibly repeat a course before passing. It is hoped that the ability to predict students' performance from such complex relationships can help facilitate the fine-tuning of academic systems and policies implemented in learning environments. To validate the power of neural networks in data mining, a Turkish students' performance database has been used; feedforward and radial basis function networks were trained for this task. The performance of these networks was evaluated in terms of achieved recognition rates and training time.

Graph Coloring in University Timetable Scheduling

By Swapnil Biswas Syeda Ajbina Nusrat Nusrat Sharmin Mahbubur Rahman

DOI: https://doi.org/10.5815/ijisa.2023.03.02, Pub. Date: 8 Jun. 2023

Addressing scheduling problems with the best graph coloring algorithm has always been very challenging. However, the university timetable scheduling problem can be formulated as a graph coloring problem where courses are represented as vertices and the presence of common students or teachers between corresponding courses is represented as edges. The problem then reduces to coloring the vertices with the fewest possible colors. To accomplish this task, the paper presents a comparative study of the use of graph coloring in university timetable scheduling, employing five graph coloring algorithms: First Fit, Welsh-Powell, Largest Degree Ordering, Incidence Degree Ordering, and DSATUR. We have taken the Military Institute of Science and Technology, Bangladesh, as a test case. The results show that the Welsh-Powell and DSATUR algorithms are the most effective in generating optimal schedules. The study also provides insights into the limitations and advantages of using graph coloring in timetable scheduling and suggests directions for future research with these algorithms.

Predicting Stock Market Behavior using Data Mining Technique and News Sentiment Analysis

By Ayman E. Khedr S.E.Salama Nagwa Yaseen

DOI: https://doi.org/10.5815/ijisa.2017.07.03, Pub. Date: 8 Jul. 2017

Stock market prediction has become an attractive research topic due to its important role in the economy and the benefits it offers. There is an imminent need to uncover the stock market's future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims at constructing an effective model to predict stock market future trends with a small error ratio and improved prediction accuracy. The prediction model is based on sentiment analysis of financial news and historical stock market prices. It provides better accuracy results than previous studies by considering multiple types of news related to the market and the company together with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to get the text polarity using the naïve Bayes algorithm. This step achieved prediction accuracy results ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices to predict future stock prices. This improved the prediction accuracy to up to 89.80%.

Machine Learning in Cyberbullying Detection from Social-Media Image or Screenshot with Optical Character Recognition

By Tofayet Sultan Nusrat Jahan Ritu Basak Mohammed Shaheen Alam Jony Rashidul Hasan Nabil

DOI: https://doi.org/10.5815/ijisa.2023.02.01, Pub. Date: 8 Apr. 2023

Along with the growth of the Internet, social media usage has drastically expanded. As people share their opinions and ideas more frequently on the Internet and through various social media platforms, there has been a notable rise in the number of consumer phrases that contain sentiment data. According to reports, cyberbullying frequently leads to severe emotional and physical suffering, especially in women and young children; in certain instances, it has even been reported that sufferers attempt suicide. The bully may occasionally attempt to destroy any evidence they believe could implicate them, and even if the victim retains the evidence, it can still take a long time to obtain justice. This work used OCR, NLP, and machine learning to detect cyberbullying in images, in order to design and implement a practical method for recognizing cyberbullying from screenshots. Eight classifier techniques are compared for accuracy against two key feature representations, the BoW model and TF-IDF. These classifiers are used to understand and recognize bullying behaviors. Testing the suggested method on the cyberbullying dataset showed that linear SVC after OCR and logistic regression perform best, achieving an accuracy of 96 percent. This study helps provide an outline that shapes methods for detecting online bullying from a screenshot, with design and implementation details.

Non-Functional Requirements Classification Using Machine Learning Algorithms

By Abdur Rahman Abu Nayem Saeed Siddik

DOI: https://doi.org/10.5815/ijisa.2023.03.05, Pub. Date: 8 Jun. 2023

Non-functional requirements define the quality attributes of a software application and need to be identified in the early stages of the software development life cycle. Researchers have proposed automatic classification of software non-functional requirements using several Machine Learning (ML) algorithms combined with various vectorization techniques. However, which combination performs best for non-functional requirement classification remains unclear. In this paper, we examined whether different combinations of feature extraction techniques and ML algorithms vary in non-functional requirements classification performance. We also report the best approach for classifying non-functional requirements. We conducted the comparative analysis on the publicly available PROMISE_exp dataset containing labelled functional and non-functional requirements. Initially, we normalized the textual requirements from the dataset; we then extracted features through Bag of Words (BoW), Term Frequency and Inverse Document Frequency (TF-IDF), Hashing, and Chi-Squared vectorization methods. Finally, we executed the 15 most popular ML algorithms to classify the requirements. The novelty of this work is the empirical analysis to find the best combination of ML classifier and vectorization technique, which helps developers detect non-functional requirements early and take precise steps. We found that the combination of a linear support vector classifier and TF-IDF outperforms all other combinations, with an F1-score of 81.5%.

Optimized Round Robin Scheduling Algorithm Using Dynamic Time Quantum Approach in Cloud Computing Environment

By Dipto Biswas Md. Samsuddoha Md. Rashid Al Asif Md. Manjur Ahmed

DOI: https://doi.org/10.5815/ijisa.2023.01.03, Pub. Date: 8 Feb. 2023

Cloud computing refers to a sophisticated technology that manipulates data on internet-based servers dynamically and efficiently. The utilization of cloud computing has increased rapidly because of its scalability, accessibility, and incredible flexibility. Dynamic usage and process sharing facilities require task scheduling, which is a prominent issue and plays a significant role in developing an optimal cloud computing environment. Round robin is generally an efficient task scheduling algorithm that has a powerful impact on the performance of the cloud computing environment. This paper introduces a new round robin based task scheduling approach suitable for the cloud computing environment. The proposed algorithm determines the time quantum dynamically, for each round, based on the differences among the three maximum burst times of tasks in the ready queue. A distinctive aspect of the proposed method is combining these differences and the burst times of the processes additively when determining the time quantum. The experimental results showed that the proposed approach enhances the performance of the round robin task scheduling algorithm by reducing average turn-around time, diminishing average waiting time, and minimizing the number of context switches. Moreover, a comparative study showed that the proposed approach outperforms some similar existing round robin approaches. Based on the experiments and the comparative study, it can be concluded that the proposed dynamic round robin scheduling algorithm is comparatively better, acceptable, and optimal for the cloud environment.
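
A hedged simulation of a round robin scheduler whose quantum is recomputed each round from the three largest remaining burst times; the paper's exact additive formula is not reproduced, so the combination below (sum of the pairwise differences, floored at 1) is an assumption.

```python
from collections import deque

def dynamic_rr(bursts):
    """Round robin; each round's quantum derives from the three largest
    remaining burst times (illustrative formula, not the paper's exact one)."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))
    time, turnaround = 0, [0] * len(bursts)
    while ready:
        top = sorted((remaining[i] for i in ready), reverse=True)[:3]
        diffs = sum(top[j] - top[j + 1] for j in range(len(top) - 1))
        quantum = max(1, diffs) if len(top) > 1 else top[0]
        for _ in range(len(ready)):          # one full round over the ready queue
            i = ready.popleft()
            run = min(quantum, remaining[i])
            time += run
            remaining[i] -= run
            if remaining[i]:
                ready.append(i)
            else:
                turnaround[i] = time
    return turnaround

print(dynamic_rr([24, 3, 17, 9]))            # per-process turnaround times
```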

Blockchain with Internet of Things: Benefits, Challenges, and Future Directions

By Hany F. Atlam Ahmed Alenezi Madini O. Alassafi Gary B. Wills

DOI: https://doi.org/10.5815/ijisa.2018.06.05, Pub. Date: 8 Jun. 2018

The Internet of Things (IoT) has extended internet connectivity to reach not just computers and humans, but most of the things in our environment. The IoT has the potential to connect billions of objects simultaneously, improving information sharing and, in turn, our lives. Although the IoT's benefits are almost unlimited, many challenges face its adoption in the real world due to its centralized server/client model; for instance, scalability and security issues arise from the excessive number of IoT objects in the network. The server/client model requires all devices to be connected and authenticated through the server, which creates a single point of failure. Therefore, moving the IoT system onto a decentralized path may be the right decision. One of the popular decentralization systems is blockchain. Blockchain is a powerful technology that decentralizes computation and management processes, and it can solve many IoT issues, especially security. This paper provides an overview of the integration of blockchain with the IoT, highlighting the integration's benefits and challenges. The future research directions of blockchain with IoT are also discussed. We conclude that the combination of blockchain and IoT can provide a powerful approach that can significantly pave the way for new business models and distributed applications.

Plant Disease Detection Using Deep Learning

By Bahaa S. Hamed Mahmoud M. Hussein Afaf M. Mousa

DOI: https://doi.org/10.5815/ijisa.2023.06.04, Pub. Date: 8 Dec. 2023

Agricultural development is a critical strategy for promoting prosperity and addressing the challenge of feeding nearly 10 billion people by 2050. Plant diseases can significantly impact food production, reducing both quantity and diversity. Therefore, early detection of plant diseases through automatic detection methods based on deep learning can improve food production quality and reduce economic losses. While previous models have been implemented for a single type of plant to ensure high accuracy, they require high-quality images for proper classification and are not effective with low-resolution images. To address these limitations, this paper proposes the use of pre-trained models based on convolutional neural networks (CNNs) for plant disease detection. The focus is on fine-tuning the hyperparameters of popular pre-trained models, such as EfficientNetV2S, to achieve higher accuracy in detecting plant diseases in lower-resolution images, crowded and misleading backgrounds, shadows on leaves, different textures, and changes in brightness. The study utilized the Plant Diseases Dataset, which includes infected and uninfected crop leaves across 38 classes. In pursuit of improving the adaptability and robustness of our neural networks, we intentionally exposed them to a deliberately noisy training dataset, a strategic modification of the Plant Diseases Dataset tailored to the demands of our training process. Our objective was to enhance the network's ability to generalize effectively and perform robustly in real-world scenarios. This approach represents a critical step toward our study's overarching goal of advancing plant disease detection, especially in challenging conditions, and underscores the importance of dataset optimization in deep learning applications.
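
A hedged Keras sketch of the transfer-learning setup described above: EfficientNetV2S as a frozen backbone with a new 38-class head. The input size, dropout rate, and optimizer settings are assumptions; the tuned hyperparameters are the paper's contribution.

```python
import tensorflow as tf

base = tf.keras.applications.EfficientNetV2S(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                       # freeze the pre-trained backbone first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(38, activation="softmax"),  # 38 plant-disease classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```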

Detection and Classification of Alzheimer’s Disease by Employing CNN

By Smt. Swaroopa Shastri Ambresh Bhadrashetty Supriya Kulkarni

DOI: https://doi.org/10.5815/ijisa.2023.02.02, Pub. Date: 8 Apr. 2023

Alzheimer's disease is an ailment of the mind that results in mental confusion, forgetfulness, and many other mental problems; it affects a person's physical health too. When treating a patient with Alzheimer's disease, a proper diagnosis is crucial, especially in the earlier phases of the condition, since patients who are informed of the risk of the disease can take preventative steps before irreparable brain damage occurs. The majority of machine detection techniques are constrained by congenital (present at birth) data; however, numerous recent studies have used computers for Alzheimer's disease diagnosis. The first stages of Alzheimer's disease can be diagnosed, but the illness itself cannot be predicted, since prediction is only helpful before the disease actually manifests. Alzheimer's has high-risk symptoms that affect both the physical and mental health of a patient, including confusion, concentration difficulties, and much more, so it becomes important to detect this disease at its early stages. The significance of early detection is that the patient gets a better chance of treatment and medication; hence our research helps to detect the disease at its early stages. Particularly when used with brain MRI scans, deep learning has emerged as a popular tool for the early identification of AD. Here we use a 12-layer CNN comprising four convolutional layers, two pooling layers, two flatten layers, one dense layer, and three activation functions. As CNNs are well known for pattern detection and image processing, the accuracy of our model reaches 97.80%.
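
The abstract gives layer counts but not the exact configuration, so the Keras sketch below is one plausible arrangement close to the description (four convolutional layers, two pooling layers, explicit activations, one dense output); filter sizes, input shape, and the four assumed severity classes are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 1)),            # stand-in MRI slice size
    layers.Conv2D(32, 3, padding="same"), layers.Activation("relu"),
    layers.Conv2D(32, 3, padding="same"), layers.Activation("relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same"), layers.Activation("relu"),
    layers.Conv2D(64, 3, padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(4, activation="softmax"),        # assumed four severity classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```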
