IJISA Vol. 17, No. 4, Aug. 2025
REGULAR PAPERS
This paper presents an Enhanced Adaptive B-Spline Smoothing approach for UAV path planning in complex three-dimensional environments. By leveraging the inherent local control and smoothness properties of cubic B-Splines, the proposed method integrates an adaptive knot selection mechanism—optimized via a genetic algorithm—with curvature-aware control point refinement to generate dynamically feasible and smooth flight paths. Simulation studies in a cluttered 3D airspace show that the proposed technique reduces path length and lowers maximum curvature compared to uniform and chord-length-based B-Spline strategies. Despite a moderate computational overhead, the results demonstrate smoother, more stable flight trajectories that adhere to aerodynamic constraints and ensure safe obstacle avoidance. This approach is particularly valuable for near-real-time missions, where flight stability, rapid re-planning, and energy efficiency are paramount. Results emphasize the potential of the proposed method for improving UAV navigation in various applications—such as urban logistics, infrastructure inspection, and search-and-rescue—by providing better maneuverability, reduced energy consumption, and increased operational safety to the UAV agents.
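As a rough illustration of the underlying B-spline machinery (not the paper's method itself), the sketch below fits a cubic B-spline to a handful of hypothetical 3D waypoints with SciPy and estimates curvature along the result; the GA-optimized adaptive knot selection and the curvature-aware control point refinement described in the abstract are not reproduced here.

# Minimal illustration: fitting a cubic B-spline to hypothetical 3D waypoints
# and estimating curvature along the smoothed path. A uniform SciPy smoothing
# fit stands in for the paper's adaptive, GA-optimized knot selection.
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical collision-free waypoints (x, y, z)
waypoints = np.array([
    [0.0, 0.0, 1.0],
    [2.0, 1.5, 1.2],
    [4.0, 1.0, 2.0],
    [6.0, 3.0, 2.5],
    [8.0, 2.5, 3.0],
]).T

# Cubic B-spline fit (k=3); s controls the smoothing/fidelity trade-off
tck, _ = splprep(waypoints, k=3, s=0.5)

u = np.linspace(0.0, 1.0, 200)
x, y, z = splev(u, tck)                # smoothed path samples
dx, dy, dz = splev(u, tck, der=1)      # first derivatives
ddx, ddy, ddz = splev(u, tck, der=2)   # second derivatives

# Curvature of a 3D parametric curve: |r' x r''| / |r'|^3
d1 = np.stack([dx, dy, dz], axis=1)
d2 = np.stack([ddx, ddy, ddz], axis=1)
curvature = np.linalg.norm(np.cross(d1, d2), axis=1) / np.linalg.norm(d1, axis=1) ** 3

print(f"max curvature along smoothed path: {curvature.max():.4f}")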
This article introduces a novel variational approach for solving the inverse geodesic problem on a transcendental surface shaped as a cylindrical structure with a cycloidal generatrix, a type of geometry that has not previously been studied in this context. Unlike classical models that rely on symmetric surfaces such as spheres or spheroids, this method formulates the geodesic path as a functional minimization problem. Applying the Euler–Lagrange equation allows the corresponding second-order differential equation to be integrated analytically, resulting in a parametric expression that satisfies the boundary conditions and yields a closed-form analytical representation of the geodesic curve, significantly reducing computational complexity compared with existing numerical-heuristic methods. The effectiveness of the proposed method for computing geodesic curves on transcendental surfaces has been rigorously evaluated through a series of numerical experiments: analytical validation was carried out in MathCad, while simulation and three-dimensional visualization were implemented in Python. 3D visualizations of the geodesic lines are presented for multiple point pairs on the surface, demonstrating the accuracy and computational efficiency of the proposed solution.
The obtained results offer clear advantages over existing studies in the field of computational geometry and variational calculus. Specifically, the proposed method enables the construction of geodesic curves on complex transcendental surfaces where traditional methods either fail or require intensive numerical approximation.
The analytical integration of geodesic equations enhances both accuracy and performance, achieving an average computational cost reduction of approximately 27–30% and an accuracy improvement of around 20% in comparison with previous models utilizing non-polynomial metrics. These enhancements are especially relevant in applications requiring real-time response and precision, such as robotics, CAD systems, computer graphics, and virtual environment simulation. The method's ability to deliver compact and exact solutions for boundary value problems positions it as a valuable contribution to both theoretical and applied sciences.
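For orientation, the general variational setup referred to above can be written in the standard form below for a surface with first fundamental form coefficients E, F, G and an unknown curve v(u); the specific metric induced by the cycloidal generatrix, which the paper integrates analytically, is not shown.

% Arc-length functional for a curve v(u) on a surface with first fundamental
% form coefficients E, F, G (generally functions of u and v):
\[
  L[v] \;=\; \int_{u_1}^{u_2} \sqrt{E + 2F\,v'(u) + G\,v'(u)^{2}}\; du \;\longrightarrow\; \min .
\]
% The minimizer satisfies the Euler–Lagrange equation with fixed endpoints:
\[
  \frac{d}{du}\,\frac{\partial \mathcal{L}}{\partial v'} \;-\; \frac{\partial \mathcal{L}}{\partial v} \;=\; 0,
  \qquad
  \mathcal{L}(u, v, v') = \sqrt{E + 2F\,v' + G\,v'^{2}},
  \qquad
  v(u_1) = v_1, \;\; v(u_2) = v_2 .
\]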
This article presents a new multi-objective model that optimizes Kafka configuration to minimize end-to-end latency while quantifying independent parameter influence, interaction effects, and sensitivity to local parameter changes. The proposed model addresses the challenging problem of selecting a configuration that prevents overloading while maintaining high availability and low latency of the Kafka cluster. The study proposes an algorithm that implements this model using an adaptive optimization strategy combining gradient-based and derivative-free search methods. This strategy balances convergence speed and global search capability, which is critical for the nonlinear parameter space characteristic of large-scale Kafka deployments. Experimental evaluation demonstrates 99% accuracy of the model, verified against a trained XGBRegressor model and tested across multiple optimization strategies. The experimental results show that alternative configurations can be selected to meet secondary objectives, such as operational constraints, without significantly impacting latency. In this context, the designed multi-objective model serves as a valuable tool for guiding the configuration selection process by quantifying and incorporating such secondary objectives into the optimization landscape. The proposed multi-objective function could be adopted in real-time applications as a tool for Kafka performance tuning.
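As a loose sketch only (the paper's actual parameter set, surrogate model, and adaptive strategy are not reproduced), a weighted multi-objective function over three hypothetical Kafka parameters might be set up as follows, with a toy latency surrogate standing in for the trained regressor and SciPy's derivative-free differential evolution standing in for the adaptive gradient/derivative-free search.

# Illustrative sketch: weighted multi-objective function over a few Kafka
# parameters (batch.size, linger.ms, num.io.threads), minimized with a
# derivative-free method. The latency surrogate and weights are toy choices.
import numpy as np
from scipy.optimize import differential_evolution

def predicted_latency(x):
    # Stand-in surrogate for end-to-end latency (ms); in the paper this role
    # is played by a trained regressor such as XGBRegressor.
    batch_size, linger_ms, num_io_threads = x
    return 50.0 / (1.0 + batch_size / 16384.0) + 0.4 * linger_ms + 80.0 / num_io_threads

def overload_penalty(x):
    # Hypothetical secondary objective: penalize configurations likely to
    # overload brokers (too many I/O threads, oversized batches).
    batch_size, linger_ms, num_io_threads = x
    return max(0.0, num_io_threads - 16.0) ** 2 + max(0.0, batch_size / 1_048_576.0 - 1.0) ** 2

def objective(x, w_latency=1.0, w_secondary=0.5):
    # Weighted sum of the primary objective (latency) and a secondary one.
    return w_latency * predicted_latency(x) + w_secondary * overload_penalty(x)

bounds = [(1024, 4_194_304), (0, 100), (1, 32)]   # batch.size, linger.ms, num.io.threads
result = differential_evolution(objective, bounds, seed=0)
print("suggested configuration:", result.x.round(1), "objective:", round(result.fun, 2))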
This study investigates the enhancement of the YOLOv5 model for price tag detection in retail environments, aiming to improve both accuracy and robustness. The research uses the "Price Tag Detection" dataset from SOVAR, which contains 1,073 annotated images covering four classes (price tags, labels, prices, and products) and is split into training, validation, and test sets, with extensive preprocessing and augmentation such as resizing, rotation, color adjustment, blur, noise, and bounding-box transformations. Several modifications to the YOLOv5 architecture were proposed, including advanced image augmentation techniques to simulate real-world variations in lighting and noise, enhanced anchor box optimization through K-means clustering on the dataset annotations to better fit typical price tag shapes, and the integration of the Convolutional Block Attention Module (CBAM) to enable the model to selectively focus on relevant spatial and channel-wise features. The combined application of these enhancements resulted in a substantial improvement, with the model achieving a mean Average Precision (mAP) of 96.8% at IoU 0.5, compared to the baseline YOLOv5's 92.5%. The attention mechanism and optimized anchor boxes notably improved detection of small, partially occluded, and diverse price tags, highlighting the effectiveness of combining data-driven augmentation, architectural tuning, and attention mechanisms to address the challenges posed by cluttered and dynamic retail scenes.
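The anchor box step can be illustrated generically as follows, using scikit-learn K-means over bounding-box widths and heights, with random boxes standing in for the SOVAR annotations; note that YOLO implementations often run this clustering with an IoU-based distance rather than the plain Euclidean distance used here.

# Generic anchor-box estimation via K-means over bounding-box widths and
# heights. Random boxes stand in for the real SOVAR annotations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical normalized (width, height) pairs extracted from label files
boxes_wh = rng.uniform(0.02, 0.4, size=(1073, 2))

kmeans = KMeans(n_clusters=9, n_init=10, random_state=0).fit(boxes_wh)
# Sort the 9 cluster centers by area, as YOLOv5 groups anchors per detection layer
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(axis=1))]

img_size = 640   # anchors are expressed in pixels for a given input size
print((anchors * img_size).round(1).reshape(3, 3, 2))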
This paper presents a comprehensive framework for intelligent and personalized task scheduling based on Transformer architectures and contextual-behavioral feature modeling. The proposed system processes sequences of user activity enriched with temporal, spatial, and behavioral information to generate structured task representations. Each predicted task includes six key attributes: task type, execution time window, estimated duration, execution context, confidence score, and priority level. By leveraging Transformer encoders, the model effectively captures long-range temporal dependencies while enabling parallel processing, which significantly improves both scalability and responsiveness compared to recurrent approaches.
The system is designed to support real-time adaptation by integrating diverse data sources such as device activity, location, calendar status, and behavioral metrics. A modular architecture enables input encoding, multi-head self-attention, and global behavior summarization for downstream task generation. Experimental evaluation using artificially generated user data illustrates the model’s ability to maintain high accuracy in task type and timing prediction, with consistent performance under varying contextual conditions. The proposed approach is applicable in domains such as digital productivity, cognitive workload balancing, and proactive time management, where adaptive and interpretable planning is essential.
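A minimal sketch of an encoder-based predictor of this kind is shown below, with assumed feature sizes and only two of the six task attributes wired up as output heads; it relies on PyTorch's nn.TransformerEncoder for the multi-head self-attention and parallel sequence processing mentioned above and is not the paper's actual architecture.

# Minimal sketch of an encoder-based task predictor with assumed dimensions.
# Only two of the six task attributes (task type, priority) are shown as
# prediction heads; the remaining heads would follow the same pattern.
import torch
import torch.nn as nn

class TaskPredictor(nn.Module):
    def __init__(self, feat_dim=32, d_model=128, n_heads=4, n_layers=2,
                 n_task_types=10, n_priorities=3):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)          # encode contextual-behavioral features
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)   # multi-head self-attention stack
        self.task_type_head = nn.Linear(d_model, n_task_types)
        self.priority_head = nn.Linear(d_model, n_priorities)

    def forward(self, activity_seq):
        h = self.encoder(self.input_proj(activity_seq))  # (batch, seq, d_model)
        summary = h.mean(dim=1)                          # global behavior summarization
        return self.task_type_head(summary), self.priority_head(summary)

model = TaskPredictor()
batch = torch.randn(8, 50, 32)   # 8 users, 50 activity events, 32 features each
task_logits, priority_logits = model(batch)
print(task_logits.shape, priority_logits.shape)   # torch.Size([8, 10]) torch.Size([8, 3])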
This paper introduces a deterministic insertion-based heuristic, the Localized Selective Insertion Heuristic, designed to provide a reliable balance between solution quality and computational efficiency. The heuristic incorporates adaptive mechanisms, such as dynamic adjustment of the number of evaluated neighbors and systematic seed-route initialization, that contribute to its novelty and robust performance. It builds a complete solution incrementally, systematically inserting each unvisited node into an evolving tour by evaluating a limited number of potential insertion points chosen for their spatial proximity to already visited locations. This localized and selective evaluation strategy substantially reduces computational effort, typically allowing large problem instances to be solved in under 150 milliseconds, with achieved solution quality consistently within 2–14% of known optimal values. To clearly illustrate the effectiveness of this trade-off, we propose a Normalized Performance Index, which integrates solution accuracy and computational speed into a unified metric. The Localized Selective Insertion Heuristic demonstrated superior performance according to this index, achieving the best score in 16 out of 17 tested benchmark scenarios. Its simplicity, deterministic nature, minimal parameter sensitivity, and ease of practical implementation make the proposed approach particularly suitable for applications requiring scalability, consistent performance, and straightforward reproducibility, such as logistics, transportation planning, and industrial automation.
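The localized-evaluation idea can be sketched as follows for a travelling-salesman-style tour, with a fixed neighbor count k, a trivial three-node seed route, and cities inserted in index order standing in for the heuristic's adaptive neighbor adjustment and systematic seed-route initialization; this illustrates the principle rather than the published algorithm.

# Simplified illustration of localized, selective insertion: each unvisited
# city is tried only next to its k nearest tour nodes instead of against
# every edge of the tour, which is what keeps the evaluation cheap.
import numpy as np

def localized_insertion_tour(coords, k=5, seed=(0, 1, 2)):
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    tour = list(seed)                                   # simple seed route
    remaining = [i for i in range(n) if i not in tour]

    for city in remaining:
        # Consider insertion only next to the k tour nodes closest to this city
        nearest = sorted(tour, key=lambda t: dist[city, t])[:k]
        best_cost, best_pos = float("inf"), None
        for t in nearest:
            i = tour.index(t)
            a, b = tour[i], tour[(i + 1) % len(tour)]
            cost = dist[a, city] + dist[city, b] - dist[a, b]   # insertion cost on edge (a, b)
            if cost < best_cost:
                best_cost, best_pos = cost, i + 1
        tour.insert(best_pos, city)
    return tour

coords = np.random.default_rng(1).uniform(0, 100, size=(200, 2))
tour = localized_insertion_tour(coords)
length = sum(np.linalg.norm(coords[tour[i]] - coords[tour[(i + 1) % len(tour)]]) for i in range(len(tour)))
print(f"tour length over 200 random cities: {length:.1f}")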
Because big data streams contain hidden insights, there is a persistent motivation to store and process them. Doing so, however, requires special methods and tools, and the most effective approach today is distributed computing. This approach is economically expensive, since it requires substantial computing resources, so users without such resources attempt to process large data streams on a single server, which leads to a sharp drop in time efficiency. Even on a single machine, however, an internal distribution mechanism can improve time efficiency. In this case, efficiency depends on several factors, the most important of which is determining the effective number of distributions, and determining this number is a complex process. To solve this problem, this paper considers the use of artificial intelligence algorithms. First, the research methodology is developed and the processes it comprises are explained. Next, Random Forest, XGBoost, Support Vector Regression, and Multiple Linear Regression algorithms are tested for determining the effective number of distributions. To improve accuracy, a multilayer neural network is then enhanced; that is, a neural-network ensemble method is developed that combines the above machine learning algorithms. Finally, the research results are presented and explained in detail.
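A minimal sketch of such an ensemble, assuming the xgboost package is available and using synthetic features and targets in place of the paper's data, could stack the four base learners under an MLP meta-learner via scikit-learn's StackingRegressor.

# Minimal sketch: stacking Random Forest, XGBoost, SVR, and linear regression
# under an MLP meta-learner to predict the effective number of distributions.
# Features and target here are synthetic stand-ins for the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))          # e.g., stream rate, record size, core count, ...
y = 2 + 10 * X[:, 0] + 4 * X[:, 2] + rng.normal(0, 0.5, 500)   # synthetic target

ensemble = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("xgb", XGBRegressor(n_estimators=200, random_state=0)),
        ("svr", SVR(C=10.0)),
        ("mlr", LinearRegression()),
    ],
    final_estimator=MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
ensemble.fit(X[:400], y[:400])
print("R^2 on held-out data:", round(ensemble.score(X[400:], y[400:]), 3))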