International Journal of Information Technology and Computer Science (IJITCS)

IJITCS Vol. 18, No. 2, Apr. 2026

Cover page and Table of Contents: PDF (size: 239KB)

Table Of Contents

REGULAR PAPERS

A Study of Using Convolutional Neural Networks in Fractal Dimension Estimation of Grayscale and Color Images

By Moheb R. Girgis, Al Hussien Seddik Saad, Mohammed M. Talaat

DOI: https://doi.org/10.5815/ijitcs.2026.02.01, Pub. Date: 8 Apr. 2026

Fractal dimension (FD) estimation is widely used to characterize image complexity and self-similarity in image analysis and texture characterization. Traditional FD estimators such as Box-Counting (BC) and Differential Box-Counting (DBC) are simple and efficient but can be sensitive to scale selection, resolution, and noise. This paper investigates the effectiveness of using convolutional neural networks (CNNs) for FD estimation compared to traditional methods. To this end, we have developed a CNN-based method for FD estimation under a fair and reproducible evaluation design. First, we have included an analytic-fractals benchmark (such as the Sierpinski and Koch families) with closed-form FD values for independent evaluation. Second, for large-scale Julia/Mandelbrot images, FD labels are treated as reference estimates computed from multiple BC/DBC parameter settings and reported as mean ± standard deviation to quantify label uncertainty. We additionally assess behavior on an external natural-texture dataset and evaluate robustness under controlled degradations (noise, blur, compression, and downsampling). Performance is reported on large test sets using MAE/RMSE with 95% confidence intervals (bootstrap), together with per-image inference time under clearly specified hardware settings. Results indicate that the proposed CNN-based method provides stable FD estimation and fast inference, particularly under noise and resolution variations.
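As a rough illustration of the Box-Counting baseline named in this abstract (a generic sketch, not the authors' implementation; the scale set is an assumption), the FD estimate is the slope of log N(s) against log(1/s), where N(s) is the number of occupied boxes at box size s:

```python
import numpy as np

def box_counting_dimension(binary_img, scales=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting.

    For each box size s, count the boxes containing at least one set
    pixel, then fit log(N) against log(1/s); the slope is the estimate.
    """
    h, w = binary_img.shape
    counts = []
    for s in scales:
        # Trim so the image tiles exactly into s x s boxes.
        hh, ww = (h // s) * s, (w // s) * s
        tiles = binary_img[:hh, :ww].reshape(hh // s, s, ww // s, s)
        occupied = tiles.any(axis=(1, 3)).sum()
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

# A filled square is 2-dimensional, so the estimate should be close to 2.
img = np.ones((256, 256), dtype=bool)
fd = box_counting_dimension(img)
```

The sensitivity to scale selection mentioned above is visible here directly: changing the `scales` tuple changes the fitted slope, which is exactly the parameter dependence the paper's reference labels average over.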

The Role of Artificial Intelligence in the Career Expectations of Ukrainian Students: Implications for Higher Education

By Olena Semenikhina, Marina Drushlyak

DOI: https://doi.org/10.5815/ijitcs.2026.02.02, Pub. Date: 8 Apr. 2026

The article presents the results of an empirical study on the attitudes of students at Ukrainian higher education institutions toward the role of artificial intelligence (AI), particularly ChatGPT, in the context of their future professional careers. The aim of this study is to determine whether students perceive ChatGPT (a generative AI tool) as a threat, an opportunity, or a multidimensional phenomenon that requires critical evaluation. The research methodology included the construction of two composite indices. These were the ChatGPT Opportunities Index and the ChatGPT Threats Index, both related to career development. The indices were based on responses from 354 students. All participants took part in the international "Global ChatGPT Student Survey". Data analysis employed descriptive statistics, analysis of variance (ANOVA), correlation analysis, clustering, and the χ² test. The results showed that the ChatGPT Opportunities Index was moderately higher than the ChatGPT Threats Index. This indicates a predominantly cautious optimism in students’ attitudes toward AI. At the same time, statistical analysis did not reveal any significant relationships between these indices and such variables as level of education, gender, or confidence in future employment. Cluster analysis identified three types of student attitudes: Realists, Reflective Optimists, and Disengaged. A synthesis of the results indicates that students show both interest in ChatGPT and a need for support from educational institutions in developing critical interaction skills with intelligent technologies. The study concludes that there is a need to integrate AI literacy into academic programs. It also highlights the importance of developing interdisciplinary training models and implementing educational interventions that foster adaptability and digital resilience among students.

Underwater Image Dehazing: A Comprehensive Approach

By Sumalatha A., Aruna S. K.

DOI: https://doi.org/10.5815/ijitcs.2026.02.03, Pub. Date: 8 Apr. 2026

Underwater imaging has recently advanced through efforts to correct color distortion, increase contrast, and improve image clarity under low-light conditions. Deep learning has been effective in enhancing image quality, but challenges persist in the dehazing process due to data inconsistencies. To address this, a new scheme is proposed in this study. Unlike other methods that depend only on single captured images, the proposed model also draws on images taken under other conditions, so that it can improve underwater images in general, irrespective of the water conditions. A key innovation is the decomposition and synthesis of multi-channel illuminance data. Specifically, we decompose the input image into its red, green, and blue channels, and then approximate the illuminance component within each channel. By independently manipulating and reconstructing these channel-specific illuminance maps, we can effectively address the non-uniform light scattering and absorption that are characteristic of underwater environments. This allows us to correct the inherent color casts and haze that degrade image quality. To further refine the enhancement, we incorporate advanced color correction methods, such as image saliency exploration and white balance adjustment, to compensate for color attenuation caused by light absorption at different depths. These techniques effectively restore lost colors and enhance contrast, thereby improving image clarity and sharpness. This is useful in engineering practice and also forms a foundation for further research on improving images captured underwater. Experimental results show that the proposed method significantly enhances image quality, making it highly effective for underwater detection and exploration tasks, offering an innovative solution for hazy images in various conditions and advancing underwater monitoring and exploration technologies.
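The white balance adjustment mentioned in this abstract can be illustrated with the classic gray-world correction (a generic sketch under the gray-world assumption, not necessarily the authors' method): each channel is rescaled so its mean matches the global mean, removing a uniform color cast such as the blue-green bias of underwater light absorption.

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balance for an (H, W, 3) RGB image.

    Scales each channel so its mean equals the image's global mean,
    which corrects a uniform color cast.
    """
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means
    return np.clip(img * gain, 0, 255)

# A constant image with a strong red cast: all channel means equalize.
cast = np.tile(np.array([100.0, 50.0, 25.0]), (4, 4, 1))
balanced = gray_world_white_balance(cast)
```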

Development of User Story and Design Thinking Integration Teaching Model for Software Engineering Education

By Muhammad Ihsan Zul, Suhaila Mohd. Yasin, Dadang Syarif Sihabudin Sahid

DOI: https://doi.org/10.5815/ijitcs.2026.02.04, Pub. Date: 8 Apr. 2026

User stories (US) play a vital role in requirement engineering, yet they often encounter challenges such as ambiguity, inefficiency, and low quality. Many Indonesian universities face difficulties in equipping students with practical skills essential for crafting effective US, despite efforts to align curricula with industry standards. Moreover, existing approaches that integrate design thinking (DT) into educational settings are limited, as they either do not adequately emphasize the US or do not yet address the unique needs of educational contexts. This study presents an innovative US-DT integrated teaching model to enhance students' experience in developing industry-relevant user stories. Utilizing an action research methodology, the study incorporates surveys and literature reviews to guide the model's development. The model was tested with a sample of Indonesian software engineering undergraduate students, focusing on evaluating their satisfaction levels through metrics such as perceived usefulness (PU), learning motivation (LM), learning satisfaction (LS), and perceived ease of use (PEOU). The impact of the model was assessed via the Mann-Whitney U Test and Cliff's Delta effect size, comparing it against regular teaching methods. Results demonstrate significant improvements in PU, LM, and LS, indicating effectiveness, although PEOU remains a key limitation requiring further refinement. Future research should focus on improving PEOU by refining teaching strategies, optimizing session management, introducing preparatory workshops, and extending the model's application to different student groups to validate and broaden its educational impact. The findings suggest that adapting US and DT from industry can notably enrich student learning experiences.
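The Cliff's Delta effect size used in this evaluation has a simple closed form: the probability that a value from one group exceeds a value from the other, minus the reverse. A minimal sketch (generic implementation, not the authors' code):

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y) over all cross-group pairs.

    Ranges from -1 to +1; 0 means the two groups overlap completely.
    """
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Every treatment score beats every control score -> delta = 1.0.
delta = cliffs_delta([2, 3, 4], [1, 1, 1])
```

Because it is rank-based, it pairs naturally with the Mann-Whitney U test on ordinal satisfaction scores, where means and standard deviations are less appropriate.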

Representation of Dynamic Basic Block in Software Evolution Using Incidence Matrix

By Rajeeb S. Bal, Jibendu K. Mantri

DOI: https://doi.org/10.5815/ijitcs.2026.02.05, Pub. Date: 8 Apr. 2026

Software evolution is a continuous process that transforms changing user requirements into improved software systems. Establishing a clear and well-structured development process is widely recognized as an effective means to enhance software maintainability, quality, and productivity. Tailoring software processes from existing process patterns and standards is essential for improving process performance, ensuring product quality, reducing development risks, and minimizing rework. Despite its importance, current research lacks a systematic and formally grounded method for tailoring software evolution processes. In this paper, we propose a structured approach based on Petri Net (PN) theory to address this limitation. Four fundamental process constructs (sequence, concurrency, selection, and iteration) are identified as basic building blocks for modeling software evolution processes. Using these constructs, four tailoring operations, namely adding, deleting, splitting, and merging, are formally defined. To support scalable process composition, matrix-based representations of PNs are employed. Incidence and related matrices provide a concise and mathematically tractable representation of both place/transition nets and restricted PNs, enabling the identification of essential structural properties of software processes. In addition, reachability analysis and firing rules are utilized to derive a mathematical behavioral notation that captures binary relationships between input and output variables. This notation facilitates precise analysis of dynamic behavior for systematic software process tailoring.
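The incidence-matrix representation and firing rule referenced here can be sketched for a hypothetical two-transition sequence net (the simplest of the four constructs): the incidence matrix is C = Post − Pre, and firing transition t moves the marking by the state equation M' = M + C[:, t]. This is a generic illustration of standard place/transition-net machinery, not the paper's tailoring operations.

```python
import numpy as np

# Hypothetical sequence net: p0 --t0--> p1 --t1--> p2.
# Rows are places, columns are transitions.
pre = np.array([[1, 0],    # tokens each transition consumes per place
                [0, 1],
                [0, 0]])
post = np.array([[0, 0],   # tokens each transition produces per place
                 [1, 0],
                 [0, 1]])
C = post - pre             # incidence matrix

def fire(marking, t):
    """Fire transition t if enabled; new marking via M' = M + C[:, t]."""
    if np.all(marking >= pre[:, t]):
        return marking + C[:, t]
    raise ValueError(f"transition t{t} is not enabled")

m0 = np.array([1, 0, 0])   # initial marking: one token in p0
m1 = fire(m0, 0)           # -> [0, 1, 0]
m2 = fire(m1, 1)           # -> [0, 0, 1]
```

Reachability analysis of the kind the abstract describes amounts to exploring all markings obtainable through such firings from M0.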

An Ontology Driven Machine Learning Framework for Early Prediction in Children with Cerebral Palsy

By Rahma Haouas Zahwanie, Lilia Cheniti-Belcadhi, Saoussen Layouni, Ghada El Khayat

DOI: https://doi.org/10.5815/ijitcs.2026.02.06, Pub. Date: 8 Apr. 2026

Cerebral palsy (CP) is a neurological disorder that affects 2-3 in every 1,000 births worldwide. Early prediction of severity is vital for optimizing therapeutic interventions. This study introduces OntoML-CP, a novel hybrid intelligence framework that combines inductive machine learning with deductive ontology-based reasoning to predict Gross Motor Function Classification System (GMFCS) levels in children with CP. We present a hybrid architecture combining semantic features from a CP ontology and clinical data for machine learning, using ontological reasoning to refine predictions and improve clinical validity and interpretability. The clinical ontology built using OWL captures the relationships between symptoms of cerebral palsy, developmental disorders, and motor functions, enriched with clinical knowledge and FOAF to represent key stakeholders like patients, parents, and therapists. Using a synthetic dataset of 1,695 children with CP, generated by physical medicine and rehabilitation specialists based on real clinical cases and validated through expert review, we address demographic diversity and missing data through preprocessing techniques to correct class imbalance during model evaluation and selection. Seven supervised algorithms were evaluated, among which Random Forest and Gradient Boosting models achieved superior performance (accuracy: 85% and 83%), when augmented with our ontological framework. The models showed consistent performance across all GMFCS levels with macro-averaged F1-scores of 0.81 and 0.79, respectively, and maintained high sensitivity for severe cases (levels 4-5), significantly outperforming baseline models. The semantic layer enhances predictions with logical explanations and presents them through SPARQL queries and intuitive visual formats designed for healthcare professionals. 
Our ontology-driven approach provides clinicians not only with accurate predictions but also with context-aware, clinically interpretable explanations that support informed decisions and enable personalized, actionable CP severity predictions.

Sensor Data Fusion in Healthcare Monitoring System with Appropriate Rule-based Model for Error Reduction

By Vivek Sharma, S. Mahesh Kaluti

DOI: https://doi.org/10.5815/ijitcs.2026.02.07, Pub. Date: 8 Apr. 2026

A healthcare monitoring system (HMS) performs continuous and periodic evaluation of patients or individuals, using multiple sensors to monitor their health status. However, conventional methods are subject to errors in the computation of sensor data in the environment. Hence, this paper proposes an Optimized Rule-based Sugeno Fuzzy Hidden Markov Model Stacked Deep Learning (ORSF-HMM-SDL) model for error reduction. The proposed ORSF-HMM-SDL model uses associative rules to compute the health status of patients and an optimized fuzzy system to estimate features, which are then classified with stacked deep learning. Specifically, ORSF-HMM-SDL uses a Sugeno fuzzy inference model to assess the health status of patients and a Hidden Markov Model (HMM) to compute features for error estimation. With the estimated features, the model performs classification with the stacked deep learning architecture; data fused from the sensors are applied and classified with the deep learning model. The simulation results demonstrate the effectiveness of four fusion techniques in healthcare monitoring systems: ORSF-HMM-SDL Fusion, Kalman Filter Fusion, Deep Learning (DL) Based Fusion, and SVM Based Fusion. The study evaluates their performance using metrics such as accuracy, sensitivity, specificity, precision, recall, F1-score, error rate, latency, and throughput. The Deep Learning Fusion method achieves the highest accuracy of 96.5% for heart disease detection, 94.2% for diabetes, and 95.8% for hypertension, with an overall accuracy of 98.3% for healthy individuals. It also records a high F1-score of 98.3% for healthy individuals, 94.2% for heart disease, and 91.7% for hypertension.
In comparison, ORSF-HMM-SDL Fusion shows strong performance with an overall accuracy of 94.8%, sensitivity of 92.3%, and specificity of 96.1%, along with a low error rate of 5.2%. The Kalman Filter Fusion and SVM Based Fusion methods, while effective, show slightly lower performance across most metrics, with Kalman Filter Fusion achieving 93.0% accuracy and Deep Learning Fusion showing superior throughput of 700 data points/min. These findings demonstrate that, while Deep Learning Fusion offers the highest overall performance, the proposed ORSF-HMM-SDL Fusion provides a strong balance of accuracy and low error rate.
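The Sugeno fuzzy inference step named in this abstract can be sketched in its simplest zero-order form (a generic illustration with hypothetical membership functions and rule consequents, not the paper's tuned system): the crisp output is the firing-strength-weighted average of each rule's constant consequent.

```python
def mu_low(x):
    # Hypothetical "low deviation" membership: 1 at 0, falling to 0 at 50.
    return max(0.0, 1.0 - x / 50.0)

def mu_high(x):
    # Hypothetical "high deviation" membership: 0 at 0, rising to 1 at 50.
    return max(0.0, min(1.0, x / 50.0))

def sugeno_infer(x, rules):
    """Zero-order Sugeno inference: weighted average of rule outputs,
    with weights given by each rule's membership (firing) strength."""
    num = sum(mu(x) * z for mu, z in rules)
    den = sum(mu(x) for mu, z in rules)
    return num / den

# Two hypothetical rules on a 0-50 vital-sign-deviation scale:
# IF deviation is low  THEN risk = 0
# IF deviation is high THEN risk = 100
rules = [(mu_low, 0.0), (mu_high, 100.0)]
risk = sugeno_infer(25.0, rules)   # both rules fire at 0.5 -> risk 50.0
```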

PINNs for Stochastic Dynamics: Modeling Brownian Motion via Verlet Integration

By Yulison Herry Julian, Evan Jeremia Oktavian, Ferry Faizal

DOI: https://doi.org/10.5815/ijitcs.2026.02.08, Pub. Date: 8 Apr. 2026

This study presents a Physics-Informed Neural Network (PINN) framework for modeling stochastic systems like Brownian motion, designed to overcome critical challenges in physical consistency and numerical stability that affect classical solvers and standard data-driven models. Traditional numerical methods often struggle with high-dimensional spaces or sparse data, while many machine learning approaches fail to enforce fundamental physical laws. To address this, our proposed PINN architecture integrates a multi-component loss function that explicitly enforces the Fokker-Planck equation, which describes the system’s governing physics, alongside boundary conditions and a global probability conservation law. This physics-informed approach is anchored by high-fidelity training data generated from Verlet-integrated trajectories of the underlying Langevin dynamics. We validate our model against the analytical solution for one-dimensional Brownian motion, demonstrating its ability to accurately recover the true probability density function (PDF). Rigorous comparisons using statistical metrics show superior accuracy over a canonical data-driven operator learning model, DeepONet. Specifically, our PINN achieves a relative L2 error of 5.66% and maintains probability normalization within a 0.03% tolerance, significantly outperforming DeepONet’s 32.46% error and 3.2% probability deviation. Furthermore, a recursive error-bounding technique provides quantifiable confidence in the model’s predictions. While validated in a low-dimensional system, our framework demonstrates a promising and robust methodology for problems in fields like soft matter physics and financial modeling, where both physical consistency and data-driven flexibility are crucial. We also provide a transparent analysis of the model’s computational trade-offs, positioning this physics-informed approach as a reliable tool for complex scientific applications.
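The Fokker-Planck residual that the PINN loss enforces can be checked directly against the analytical solution mentioned above: for free 1-D Brownian motion, p(x, t) = exp(-x²/4Dt)/√(4πDt) satisfies ∂p/∂t = D ∂²p/∂x². A finite-difference sketch (D = 1 assumed; the paper's network replaces p with its learned approximation):

```python
import numpy as np

D = 1.0  # diffusion coefficient (assumed)

def p(x, t):
    """Analytical PDF of 1-D free Brownian motion started at the origin."""
    return np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

def fp_residual(x, t, h=1e-4):
    """Fokker-Planck residual dp/dt - D * d2p/dx2 via central differences.

    For the exact solution this should vanish up to discretization error;
    a PINN minimizes the same quantity evaluated on the network output.
    """
    dp_dt = (p(x, t + h) - p(x, t - h)) / (2 * h)
    d2p_dx2 = (p(x + h, t) - 2 * p(x, t) + p(x - h, t)) / h**2
    return dp_dt - D * d2p_dx2

x = np.linspace(-3, 3, 101)
res = fp_residual(x, t=1.0)
```

The global probability-conservation term in the loss corresponds to requiring that p integrate to one over the domain at every time.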

Lightweight and Explainable Neural Models for Multilingual Movie Script Certification

By Pratik N. Kalamkar, Prasadu Peddi, Yogesh K. Sharma

DOI: https://doi.org/10.5815/ijitcs.2026.02.09, Pub. Date: 8 Apr. 2026

Automated film certification remains an underexplored regulatory challenge, requiring scalable yet transparent models capable of handling full-length multilingual scripts. This paper presents a unified framework that delivers lightweight, explainable, and calibrated neural classifiers for multilingual movie script certification in English, Hindi, and Marathi. Unlike prior studies that operate on short snippets or monolingual text, our approach models entire scripts through chunk-level transformer encoding, knowledge distillation, and file-level temperature calibration, coupled with explainability-guided rule mapping for interpretable decision refinement. The proposed pipeline systematically integrates six stages (baselines, teacher modeling, distillation, calibration, explainability, and rule enrichment), yielding a compact yet trustworthy system. Experiments show that the distilled students retain over 85% of teacher accuracy while being 3× smaller, and temperature scaling substantially improves reliability (English Expected Calibration Error 0.303→0.086, Brier 0.684→0.540). Faithfulness analysis using deletion Area Under Curve confirms interpretable token attributions (0.157, 0.239, and 0.258 for English, Hindi, and Marathi, respectively). Moreover, rule integration improves accuracy (English 0.581→0.587) while offering human-auditable rationales. All models are deployment-feasible, exported to ONNX/TorchScript with 3.5× compression (545 MB→150 MB) and no performance loss. Together, these results establish a reproducible, end-to-end pipeline that unifies multilingual long-document modeling, calibration, and interpretability for film certification, advancing trustworthy Artificial Intelligence in regulatory Natural Language Processing. To our knowledge, this is the first work to build a unified, multilingual, and explainable pipeline for movie script certification using full-length scripts across MPAA and CBFC regulatory settings.
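The temperature scaling step this abstract credits with the ECE improvement is a one-parameter post-hoc calibration: validation logits are divided by a learned T > 1 to soften overconfident probabilities. A minimal sketch with synthetic overconfident logits (grid search stands in for the gradient-based fit usually used; all data here are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the temperature-scaled probabilities."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the single temperature minimizing validation NLL."""
    return min(grid, key=lambda T: nll(logits, labels, T))

# Toy 3-class model: ~100% confident but only ~70% correct,
# so calibration should choose a temperature well above 1.
n, k = 300, 3
preds = rng.integers(0, k, n)
logits = np.eye(k)[preds] * 10.0
labels = np.where(rng.random(n) < 0.7, preds, (preds + 1) % k)
T_hat = fit_temperature(logits, labels)
```

Because T rescales all logits uniformly, the argmax (and hence accuracy) is unchanged; only the confidence estimates move, which is why calibration can improve ECE and Brier score without touching the certification decisions.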

Hepatitis C Diagnosis using Supervised Machine Learning Algorithms and Ensemble Learning Techniques

By Karthika Natarajan, Koteswara Rao Makkena

DOI: https://doi.org/10.5815/ijitcs.2026.02.10, Pub. Date: 8 Apr. 2026

Hepatitis, a severe and highly impactful disease, poses significant challenges for healthcare systems, including limited diagnostic resources, delayed detection, and inadequate treatment infrastructure. This work addresses these issues by developing a machine-learning predictive system to classify hepatitis severity. By employing Logistic Regression, Random Forest, SVM, KNN, and ensemble techniques such as AdaBoost, CatBoost, and Gradient Boosting, the system enhances early detection and severity assessment. The issue of class imbalance was addressed using ADASYN and SMOTE methods applied to two separate datasets. For Dataset 1, following the use of the ADASYN technique, the achieved accuracies were 88.11% for Logistic Regression, 98.92% for Random Forest, 97.30% for AdaBoost, and 96.22% for Gradient Boosting. When SMOTE was employed on Dataset 1, Random Forest and Gradient Boosting reached accuracies of 98.38% and 96.76%, respectively. In the case of Dataset 2, AdaBoost achieved an accuracy of 93.75% after applying both ADASYN and SMOTE. These models analyze clinical data to deliver accurate, timely predictions, reducing the burden on resource-constrained healthcare systems. Ensemble methods enhance model robustness and accuracy, supporting improved decision-making and efficient resource allocation. Furthermore, SHAP offers global explanations of feature importance and force plots for local interpretations, while LIME increases the interpretability of results from black-box models, facilitating effective hepatitis management. Future work will focus on integrating interoperability standards, such as HL7 FHIR, to enable real-time data exchange, facilitating seamless risk assessment and clinical decision support within healthcare workflows.
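The SMOTE step used for class imbalance here synthesizes minority samples by interpolating between a minority point and one of its nearest minority neighbors. A minimal sketch of that idea (a simplified illustration, not the library implementation or the paper's configuration; ADASYN additionally biases sampling toward harder examples):

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_like(X_min, n_new, k=3):
    """SMOTE-style oversampling: each synthetic point lies on the segment
    between a random minority sample and one of its k nearest minority
    neighbors, at a random interpolation fraction."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Four minority samples at the corners of the unit square:
# synthetic points stay inside their convex hull.
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote_like(X_min, 10)
```

Staying inside the minority region is the property that distinguishes this from naive duplication, which only repeats existing points and invites overfitting.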

Accident Detection and Estimation of Vehicle Speed and Count by Type in Road CCTV Images Using Machine Vision

By I. Kadek Rai Pramana, I. Putu Agung Bayupati, Gusti Made Arya Sasmita, Ngoc Le

DOI: https://doi.org/10.5815/ijitcs.2026.02.11, Pub. Date: 8 Apr. 2026

This study presents an integrated traffic monitoring system for accident detection, vehicle counting by type, and vehicle speed estimation using roadside Closed-Circuit Television (CCTV) footage and machine vision based on the YOLOv11 architecture. The proposed methodology comprises data collection from heterogeneous sources, data preprocessing and augmentation, model fine-tuning on a custom Vehicle–Accident dataset, system deployment through a web-based application, and real-world evaluation. The YOLOv11 models were optimized to detect multiple vehicle categories and clearly defined accident classes under real traffic conditions. Experimental results indicate that the YOLOv11 Large (l) model achieves superior detection performance, with 81.8% precision, 75.8% recall, 82.1% mAP50, and 53.3% mAP50–95. Real-world testing further confirms its effectiveness, yielding an object detection accuracy of 99.24% and low speed estimation errors, with Mean Absolute Percentage Error (MAPE) of 3.56% for video-based evaluation and 5.54% for real-time evaluation. In contrast, the YOLOv11 Nano (n) model offers faster inference and lower computational requirements but exhibits reduced robustness in complex accident scenarios. The trained models are deployed in an interactive web application supporting image, video, and real-time inputs, enabling practical traffic monitoring and decision support. Overall, the YOLOv11l-Vehicle-Accident model is identified as the most suitable configuration for accuracy-critical traffic management systems, while Nano variants are better suited for resource-constrained deployments.
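The speed-estimation and MAPE figures above rest on a simple pipeline step: convert a detector's centroid track from pixels to meters via a calibration factor, divide displacement by elapsed time, and score against ground truth. A hedged sketch (the pixel-to-meter factor, frame rate, and straight-line displacement are illustrative assumptions, not the paper's calibration):

```python
def estimate_speed_kmh(track, fps, meters_per_pixel):
    """Speed from a detection track of (x, y) centroids in pixels:
    straight-line displacement between first and last point, scaled to
    meters, divided by elapsed time, converted to km/h."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    dist_m = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * meters_per_pixel
    dt_s = (len(track) - 1) / fps
    return dist_m / dt_s * 3.6

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# 26 frames at 25 fps (1 s), 10 px of motion at 0.5 m/px -> 5 m -> 18 km/h.
track = [(i * 0.4, 0.0) for i in range(26)]
speed = estimate_speed_kmh(track, fps=25, meters_per_pixel=0.5)
```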

Mobile OTP Authentication Protocol Design and Implementation for Local Federated Clients to Federated Central Server via MQTT

By Narendra Babu Pamula, Ajoy Kumar Khan, Arnidam Sarkar

DOI: https://doi.org/10.5815/ijitcs.2026.02.12, Pub. Date: 8 Apr. 2026

Strong and effective authentication methods are more important than ever in the ever-changing field of cybersecurity. This work designs and implements a Mobile One-Time Password (OTP) Authentication Protocol for local federated clients that communicate with a federated central server via the Message Queuing Telemetry Transport (MQTT) protocol. The protocol strengthens the security foundation of federated systems by ensuring safe and dependable delivery of OTPs while exploiting the lightweight and efficient characteristics of MQTT. The proposed protocol tackles the scalability, security, and latency issues that arise in federated setups. Through a thorough analysis and implementation, we show that the protocol can effectively mitigate potential security threats, such as replay attacks and unauthorized access, while maintaining user convenience. Experimental results indicate that the protocol strikes a balance between security and performance, making it a practical solution for modern federated authentication requirements.
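OTP schemes of this kind are typically built on HMAC-based one-time passwords; a generic RFC 4226-style sketch is shown below (this is standard HOTP, not the paper's protocol, and the MQTT transport layer is omitted). A per-client counter also gives replay resistance: a code for counter n is rejected once the server has advanced past n.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226 style).

    HMAC-SHA1 over the big-endian counter, dynamic truncation to
    31 bits, then reduction to the requested number of decimal digits.
    """
    msg = struct.pack(">Q", counter)
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, counter: int, candidate: str) -> bool:
    """Constant-time comparison of a submitted OTP against the expected one."""
    return hmac.compare_digest(hotp(secret, counter), candidate)
```

In an MQTT setting such codes would be published on per-client topics, with the broker's authentication and TLS protecting the channel itself; the OTP layer then guards against replay and unauthorized access at the application level.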
