Open Access Journals


Recent Articles

GMM-based Imbalanced Fractional Whale Particle Filter for Multiple Object Tracking in Surveillance Videos

By Avinash Ratre

DOI: https://doi.org/10.5815/ijcnis.2025.02.03, Pub. Date: 8 Apr. 2025

The imbalanced surveillance video dataset consists of majority and minority classes, corresponding to normal and anomalous instances, in a nonlinear and non-Gaussian framework. Under the standard particle filter, the normal and anomalous instances produce majority and minority samples, or particles, associated with high- and low-probability regions. The minority particles are at high risk of being suppressed by the majority particles, because the proposal probability density function (pdf) favours the highly probable regions of the input data space and therefore remains a biased distribution. The standard particle filter-based tracker suffers from sample degeneration and sample impoverishment because the biased proposal pdf ignores the minority particles. The difficulty of designing a correct proposal pdf prevents particle filter-based tracking on imbalanced video data, and existing methods do not address the imbalanced nature of particle filter-based tracking. To alleviate this problem and the associated tracking challenges, this paper proposes a novel fractional whale particle filter (FWPF) that fuses a fractional calculus-based whale optimization algorithm (FWOA) and the standard particle filter under weighted sum rule fusion. Integrating the FWPF with an iterative Gaussian mixture model (GMM) with unbiased sample variance and sample mean allows the proposal pdf to adapt to the imbalanced video data. The adaptive proposal pdf leads the FWPF to a minimum-variance unbiased estimator for effectively detecting and tracking multiple objects in imbalanced video data. Fractional calculus up to the first four terms makes the FWOA a local and global search operator with an inherent memory property, and it oversamples minority particles so that they are diversified with multiple imputations, eliminating data distortion with low bias and low variance.
The proposed FWPF also introduces a novel imbalance evaluation metric, tracking distance correlation, for imbalanced tracking over the UCSD surveillance video data, and it shows greater efficacy than existing methods in mitigating the effects of the imbalanced nature of video data. The proposed method also outperforms existing methods in precision and accuracy when tracking multiple objects. Tracking distance correlation values consistently near zero indicate efficient imbalance reduction through bias-variance correction compared with the existing methods.
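The sample degeneration the abstract describes can be made concrete with a toy importance-weighting step. This sketch is not the FWPF; the Gaussian proposal, likelihood, and all values are invented for illustration. When the proposal concentrates on the majority region, an anomalous (minority-region) observation collapses the effective sample size.

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights; a collapse far below
    the particle count signals the sample degeneration described above."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

# Biased proposal concentrated on the majority (high-probability) region.
particles = rng.normal(0.0, 1.0, size=500)
observation = 4.0  # an anomalous, minority-region observation
# Importance weights under a toy Gaussian likelihood.
weights = np.exp(-0.5 * (particles - observation) ** 2)

print(f"N_eff = {effective_sample_size(weights):.1f} of 500 particles")
```

With uniform weights N_eff equals the particle count; here only the few particles that drifted toward the anomaly carry weight, which is the bias the GMM-adaptive proposal in the paper is designed to correct.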

Query Auto-Completion Using Relational Graph Convolutional Network in Heterogeneous Graphs

By Vidya Dandagi, Nandini Sidnal, Jayashree Kulkarni

DOI: https://doi.org/10.5815/ijeme.2025.02.02, Pub. Date: 8 Apr. 2025

A search engine acts as an interface between users and computers. Online search has quickly and profoundly changed human experience, becoming a key technology that people rely on every day to get information about almost everything. Searching is typically performed with a common purpose underlying the query, but a user who does not know which keywords to search spends more time framing the query, and the search may not return the user's intended answers. Understanding the meaning of the user's query is therefore an important role of the search engine, and query auto-completion is an important feature for it. The query auto-completion process runs uninterruptedly, dynamically listing terms with each click; it provides recommendations that facilitate query formulation and improve the relevance of the search. Graphs and related data structures are used frequently in computer science and allied fields, with applications of graph machine learning including data retrieval, friendship recommendation, and social networking. Heterogeneous graphs (HGs) consist of different kinds of nodes and links and are useful for describing a wide range of complicated real-world systems. The Relational Graph Convolutional Network (R-GCN) is a robust graph neural architecture for encoding a knowledge graph. The proposed model uses a supervised R-GCN together with Long Short-Term Memory (LSTM) for query completion. The model predicts the object given the subject and predicate, achieving an accuracy of 92.4%.
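The core R-GCN operation the abstract relies on is relation-specific neighbour aggregation. The following NumPy sketch of a single layer is illustrative only: the graph, dimensions, and random weights are invented, and the paper's actual model adds an LSTM on top.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy heterogeneous graph: 4 nodes, 2 relation types; A[r] is the adjacency
# matrix of relation r. Sizes and weights are illustrative, not the paper's.
num_rels, num_nodes, d_in, d_out = 2, 4, 3, 2
A = rng.integers(0, 2, size=(num_rels, num_nodes, num_nodes)).astype(float)
H = rng.normal(size=(num_nodes, d_in))        # input node features
W = rng.normal(size=(num_rels, d_in, d_out))  # one weight matrix per relation
W_self = rng.normal(size=(d_in, d_out))       # self-loop transform

def rgcn_layer(A, H, W, W_self):
    """One R-GCN layer: degree-normalized, relation-specific aggregation
    plus a self-loop, followed by ReLU."""
    out = H @ W_self
    for r in range(A.shape[0]):
        deg = A[r].sum(axis=1, keepdims=True)
        norm = np.divide(A[r], deg, out=np.zeros_like(A[r]), where=deg > 0)
        out = out + norm @ H @ W[r]
    return np.maximum(out, 0.0)

out = rgcn_layer(A, H, W, W_self)
print(out.shape)  # (4, 2)
```

Keeping a separate weight matrix per relation is what lets the network treat the different edge types of a heterogeneous graph differently, which is the property the abstract appeals to.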

Liquefaction Susceptibility of Earthworks Site: Numerical Modelling of Embankment Cut by PLAXIS2D and GEOSLOPE Using Eurocode 7

By Sheeraz Ahmed Rahu, Zaheer Ahmed Almani, Muhammad Rehan Hakro

DOI: https://doi.org/10.5815/ijem.2025.02.01, Pub. Date: 8 Apr. 2025

This paper presents a comprehensive study encompassing both liquefaction susceptibility evaluation and slope stability analysis of embankment soil adjacent to a road. The paper focuses on the vulnerability of embankment soil to liquefaction-related failures, examining the liquefaction vulnerability of the embankment's ML-CL soil, previously considered non-liquefiable but a source of concern since the 1999 Kocaeli Earthquake. The paper evaluates liquefaction susceptibility using the Chinese Criteria and the Modified Chinese Criteria, based on index test results of embankment soil samples. The soil at various depths was found not susceptible to liquefaction per the Chinese Criteria, whereas the second evaluation per the Modified Chinese Criteria gave different and more specific results by taking into account the percentage of clay-sized particles. Based on the Modified Chinese Criteria, the soils from 10-20 ft, 25-30 ft, and 30-35 ft were found explicitly non-susceptible, whereas the soil from 0-10 ft and 20-25 ft requires further study of non-plastic clay-sized grains per the criteria. The paper then performs slope stability analysis using PLAXIS2D and GEOSLOPE software to determine the optimal earthworks layout for an embankment excavation based on Eurocode 7. In the numerical modelling, various trials were carried out considering the factors of safety of different earthworks layouts, and the layout with a satisfactory factor of safety was considered safest, ensuring the safety and cost-effectiveness of the embankment cut beside the road.
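As a rough illustration of what a factor-of-safety check computes, here is the textbook infinite-slope formula. This is not the PLAXIS2D/GEOSLOPE solution method used in the paper, and all parameter values below are invented.

```python
import math

def infinite_slope_fos(c, gamma, h, beta_deg, phi_deg, u=0.0):
    """Textbook infinite-slope factor of safety (not the PLAXIS2D/GEOSLOPE
    method). c: cohesion (kPa), gamma: unit weight (kN/m^3),
    h: failure-surface depth (m), beta_deg: slope angle (deg),
    phi_deg: friction angle (deg), u: pore pressure (kPa)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + (gamma * h * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * h * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Invented parameter values; a design check asks whether FoS exceeds the target.
fos = infinite_slope_fos(c=10.0, gamma=18.0, h=3.0, beta_deg=25.0, phi_deg=30.0)
print(f"FoS = {fos:.2f}")
```

A Eurocode 7 verification applies partial factors to the material parameters and actions before this ratio is checked against unity, which is the kind of trial-and-compare loop the abstract describes across earthworks layouts.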

Design of an Efficient UNet-Based Transfer Learning Model for Enhancing Skin Cancer Segmentation and Classification Performance

By Namrata Verma, Pankaj Kumar Mishra

DOI: https://doi.org/10.5815/ijigsp.2025.02.05, Pub. Date: 8 Apr. 2025

Accurate and efficient segmentation and classification are indispensable for the early diagnosis and treatment of skin cancer, a common and potentially fatal condition. In this paper, we present a novel method that combines the UNet architecture with Auto Encoders for robust skin cancer segmentation, followed by binary cascade Convolutional Neural Networks (CNNs) for accurately classifying melanoma and basal cell carcinoma. Existing models are limited in their ability to achieve high precision, accuracy, and recall while maintaining a high Peak Signal-to-Noise Ratio (PSNR) for accurate image reconstruction, which motivates this research. Our proposed model overcomes these limitations and performs exceptionally well on the ISIC, HAM10000, PH2, and Dermofit Image Library datasets. Combining UNet and Auto Encoders brings together the advantages of both architectures: the UNet architecture, renowned for its performance in image segmentation tasks, provides a solid foundation for separating skin cancer regions from surrounding tissue, while the Auto Encoder component facilitates feature extraction and image reconstruction, leading to improved representation learning and segmentation results. Utilizing the complementary capabilities of these models, our method improves the accuracy and efficiency of skin cancer segmentation. Using binary cascade CNNs for classification further improves the model's performance: the binary cascade architecture employs a hierarchical classification method that iteratively refines classification decisions at each stage, facilitating the differentiation between basal cell carcinoma, melanoma, and melanocytic nevi and yielding highly accurate and trustworthy predictions. Extensive experiments were conducted on the ISIC, HAM10000, PH2, and Dermofit Image Library datasets to evaluate the performance of our proposed model.
The achieved precision of 99.2%, accuracy of 98.3%, recall of 98.9%, and PSNR greater than 42 dB demonstrate the effectiveness of our strategy. These results suggest that our model has great potential for assisting dermatologists in the early identification and classification of skin cancer, ultimately leading to improved patient outcomes. The combination of UNet with Auto Encoders and binary cascade CNNs has proven effective for segmenting and classifying skin cancer, and our proposed model outperforms current methods in precision, accuracy, recall, and PSNR, demonstrating its potential to make a significant impact on dermatology and to aid in the early detection and treatment of skin cancers.
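The PSNR figure quoted above is a standard reconstruction metric. A minimal sketch of how it is computed follows; the random test image and the light Gaussian perturbation are purely illustrative, not the paper's data.

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer reconstruction."""
    diff = np.asarray(original, float) - np.asarray(reconstructed, float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = np.clip(img + rng.normal(0.0, 2.0, size=img.shape), 0.0, 255.0)
print(f"PSNR of lightly perturbed image: {psnr(img, noisy):.1f} dB")
```

With a perturbation of standard deviation 2 the mean squared error is near 4, which lands the PSNR in the low-40s dB range, the same regime as the reconstruction quality the abstract reports.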

Development of Past Learning Recognition Assessment Data Processing System for Professional Engineer Program Using Scrum Method

By Trisya Septiana, Dikpride Despa, Fadil Hamdani, Deny Budiyanto, Reza Andrea

DOI: https://doi.org/10.5815/ijieeb.2025.02.03, Pub. Date: 8 Apr. 2025

The University of Lampung is one of the universities mandated to run the Professional Engineer Program (PPI) through the Past Learning Recognition (RPL) pathway. Individuals following this RPL path must have at least five years of experience in the engineering field, whereupon their education, work, and training data from formal and informal institutions can be converted into six courses totaling 24 credits. The RPL data assessment process, if conducted manually, takes a long time and hampers the administrative process in PPI. Therefore, the assessment process was automated through a web-based application by developing an RPL data final grade processing system (E-RAPEL), which addresses common problems in PPI and facilitates grade administration. The system development adopts the Scrum method to enhance product performance, teamwork, and the work environment. Data collection in this study was conducted through interviews and direct observation, and the system was evaluated with black box testing. The findings show that all test components functioned as expected and that the system reduced the time required for the RPL data final assessment process in PPI.

Unveiling Autism: Machine Learning-based Autism Spectrum Disorder Detection through MRI Analysis

By Chitta Hrudaya Neeharika, Yeklur Mohammed Riyazuddin

DOI: https://doi.org/10.5815/ijitcs.2025.02.02, Pub. Date: 8 Apr. 2025

The prediction of autism features in relation to age groups has not been definitively addressed, despite several studies using various methodologies. Research in neuroscience has demonstrated that intracranial brain volume and the corpus callosum provide crucial information for the identification of autism spectrum disorder (ASD). Based on these findings, we present the Decision Tree-based Autism Prediction System (DT-APS) and the Random Forest-based Autism Prediction System (RF-APS) for automatic ASD identification. These machine learning-based systems utilize characteristics extracted from the corpus callosum and intracranial brain volume. By prioritizing the characteristics with the highest discriminatory power for ASD classification, the proposed DT-APS and RF-APS not only enhance identification accuracy but also simplify the training of the machine learning models. The method first divides each MRI scan into distinct anatomical areas, represented as adjacent slices in a single 2D image. Each 2D image is mapped to the curvelet space, and a set of generalized Gaussian distribution (GGD) parameters characterizes each of the distinct curvelet sub-bands. The AQ-10 dataset was utilized to evaluate the proposed model. When tested on both types of datasets, the proposed prediction model outperformed alternative approaches on all relevant metrics, including accuracy, specificity, sensitivity, precision, and false positive rate (FPR).
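A minimal sketch of the decision-tree versus random-forest comparison the abstract sets up, using scikit-learn. The two features are synthetic stand-ins for the cues named above (a corpus callosum measure and intracranial brain volume); the data, label rule, and sizes are invented, not the AQ-10 dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Two synthetic features standing in for corpus callosum / brain volume cues;
# the labels follow an invented noisy linear rule.
n = 400
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.5, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
dt_acc = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
rf_acc = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
print(f"decision tree: {dt_acc:.2f}, random forest: {rf_acc:.2f}")
```

The ensemble averaging in the forest typically smooths the single tree's decision boundary, which is the kind of accuracy difference a DT-APS versus RF-APS comparison would measure on real features.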

Performance Analysis of Shallow and Deep Learning Classifiers Leveraging the CICIDS 2017 Dataset

By Edosa Osa, Emmanuel J. Edifon, Solomon Igori

DOI: https://doi.org/10.5815/ijisa.2025.02.04, Pub. Date: 8 Apr. 2025

To bring the advantages of machine learning into the cybersecurity ecosystem, various anomaly detection-based models are being developed, owing to their ability to flag zero-day attacks, unlike their signature-based counterparts. The development of these models depends heavily on the dataset employed, in terms of factors such as a wide attack pool and diversity, and the CICIDS 2017 dataset stands out in this regard. This work presents an analytical comparison of the performance of selected shallow machine learning algorithms and a deep learning algorithm on the CICIDS 2017 dataset. The dataset was imported and pre-processed, and the necessary feature selection and engineering were carried out for the shallow learning and deep learning scenarios respectively. Outcomes from the study show that the deep learning model achieved the highest accuracy of all, at 99.71%, but took the longest to process at 550 seconds. Some shallow learning classifiers, such as Decision Tree and Random Forest, took far less processing time (4.567 and 3.95 seconds respectively) but had slightly lower accuracy than the deep learning model. Results from our study show that a Deep Neural Network is a viable model for intrusion detection with the CICIDS 2017 dataset, and they provide information that may inform choices when developing machine learning-based intrusion detection systems with it.
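The accuracy-versus-training-time trade-off described above can be sketched with scikit-learn. This uses a small synthetic classification problem as a stand-in for CICIDS 2017 (the real dataset has roughly 80 flow features and multiple attack classes), and an MLP as a stand-in for the paper's deep model; all sizes are illustrative.

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for CICIDS 2017; numbers are illustrative only.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [
    ("DecisionTree", DecisionTreeClassifier(random_state=0)),
    ("RandomForest", RandomForestClassifier(random_state=0)),
    ("DNN", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                          random_state=0)),
]
results = {}
for name, clf in models:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    results[name] = (clf.score(X_te, y_te), time.perf_counter() - t0)

for name, (acc, sec) in results.items():
    print(f"{name}: accuracy={acc:.3f}, fit time={sec:.2f}s")
```

Even at this toy scale the neural model usually takes markedly longer to fit than the tree-based classifiers, mirroring the 550-second versus ~4-second gap the study reports.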

Matrix Approach to Rough Sets Based on Tolerance Relation

By N. Kishore Kumar, M. P. K. Kishore, S. K. Vali

DOI: https://doi.org/10.5815/ijmsc.2025.01.02, Pub. Date: 8 Apr. 2025

Many complex issues in computer science require decisions to be made from incomplete data. Such issues can be resolved with the aid of mathematical tools, and rough set theory is a useful technique when dealing with incomplete data. In classical rough set theory the information granules are equivalence classes; in real-life scenarios, however, tolerance relations play a major role. By employing rough sets with Maximal Compatibility Blocks (MCBs) rather than equivalence classes, the challenges addressed in this research are handled with ease. A novel approach to defining matrices on MCBs, and operations on them, is proposed. Additionally, the rough matrix approach is applied to locate a consistent block related to any set in the universal set.
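A tiny sketch of the tolerance-relation setting: unlike an equivalence relation, a tolerance relation is reflexive and symmetric but need not be transitive, so its classes can overlap. The four-object universe and Boolean relation matrix below are invented to show how lower and upper approximations are built from tolerance classes; this is not the paper's MCB matrix construction.

```python
# Toy universe and tolerance relation (reflexive, symmetric, not transitive),
# given as a Boolean matrix: T[i][j] = 1 means objects i and j are compatible.
U = ["x1", "x2", "x3", "x4"]
T = [[1, 1, 0, 0],
     [1, 1, 1, 0],
     [0, 1, 1, 1],
     [0, 0, 1, 1]]

def tolerance_class(i):
    """All objects compatible with object i (these classes may overlap)."""
    return {j for j in range(len(U)) if T[i][j]}

def approximations(target):
    """Rough-set lower/upper approximation of a set of object indices."""
    lower = {i for i in range(len(U)) if tolerance_class(i) <= target}
    upper = {i for i in range(len(U)) if tolerance_class(i) & target}
    return lower, upper

lower, upper = approximations({0, 1})
print(lower, upper)
```

Here the target {x1, x2} has lower approximation {x1} and upper approximation {x1, x2, x3}: x2's tolerance class spills into x3, which is exactly the overlap that equivalence-class granules cannot express.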

AI in Education: A Decade of Global Research Trends and Future Directions

By Dedy Irfan, Ronal Watrianthos, Faizal Amin Nur Bin Yunus

DOI: https://doi.org/10.5815/ijmecs.2025.02.07, Pub. Date: 8 Apr. 2025

This article addresses the need for a comprehensive understanding of the rapidly evolving field of Artificial Intelligence (AI) in education, given its potential to transform teaching and learning practices. The study analyzed 1,234 articles from the Web of Science database, using bibliometric techniques and topic modeling. Quantitative analyses of publication trends, citation impacts, and collaboration patterns were conducted using the R programming language, and Latent Dirichlet Allocation (LDA) was employed to uncover latent themes and potential research gaps. The study reveals a dramatic growth in research output, with an annual growth rate of 47.9%. China and the United States emerge as dominant contributors, collectively accounting for 38% of publications. Key research themes include AI in language learning, AI ethics and policy, and AI literacy. The findings highlight the need for more inclusive and diverse research efforts to address the unique challenges and opportunities of AI in education across socioeconomic contexts.
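The study's analyses were done in R; as a language-agnostic sketch of the LDA step, here is the equivalent in Python with scikit-learn. The six-document corpus is invented, with vocabulary loosely echoing the three themes named above (language learning, ethics/policy, AI literacy).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented mini-corpus standing in for article abstracts.
docs = [
    "chatbot language learning vocabulary feedback",
    "language learning speaking practice chatbot",
    "ethics policy privacy regulation classroom",
    "policy ethics governance regulation",
    "teacher literacy training curriculum",
    "literacy skills curriculum students",
]
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)
doc_topics = lda.transform(counts)  # per-document topic mixture, rows sum to 1
print(doc_topics.shape)  # (6, 3)
```

At the scale of the study, each of the 1,234 abstracts gets such a topic mixture, and the dominant topics across the corpus are read off as the latent themes.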

Enhancing Sensor Node Energy Efficiency in Wireless Sensor Networks through an Adaptable Power Allocation Framework

By M. S. Muthukkumar, C. M. Arun Kumar

DOI: https://doi.org/10.5815/ijwmt.2025.02.01, Pub. Date: 8 Apr. 2025

In the realm of Wireless Sensor Networks (WSNs), approaches to managing power are generally divided into two main strategies: reducing power consumption and optimizing power distribution. Power reduction strategies focus on creating a path for data packets between the sink and destination nodes that minimizes the distance and, consequently, the number of hops required. In contrast, power optimization strategies seek to enhance data transfer efficiency without splitting the network into disconnected segments. Adjusting the data path to balance power often leads to longer routes, which can shorten the network's lifespan; conversely, opting for the shortest possible path tends to produce a densely packed network structure. This study introduces the Adaptable Power Allocation Framework (APAF), which improves energy-efficient routing by simultaneously addressing power balance optimization and the management of the data packet path. Unlike conventional routing methods, which primarily focus on the shortest path, APAF designs the data pathway by taking into account both minimal data transmission and the equilibrium of power distribution. APAF is compared with traditional methods (LEACH, Swarm Optimization), showing a 20-30% improvement in data loss reduction and extending network lifespan.
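The abstract does not publish APAF's actual cost function, so the next-hop rule below is entirely hypothetical: it only illustrates the distance-versus-residual-energy trade-off the framework is said to balance. All names, weights, and values are invented.

```python
# Hypothetical energy-aware next-hop selection (not the authors' APAF code).

def link_cost(dist, residual_energy, alpha=0.5):
    """Weighted sum: short links are cheap, but relays with little battery
    left are penalized; alpha trades distance against energy balance."""
    return alpha * dist ** 2 + (1.0 - alpha) / max(residual_energy, 1e-9)

# Candidate next hops: (distance to node, fraction of battery remaining).
neighbors = {"A": (6.0, 0.90), "B": (5.0, 0.05), "C": (7.0, 0.60)}
best = min(neighbors, key=lambda n: link_cost(*neighbors[n]))
print(best)  # "A": B is nearest, but its nearly drained battery rules it out
```

A pure shortest-path rule would pick B and drain it further; weighting in residual energy steers traffic to A, which is the power-balancing behaviour the abstract credits for the extended network lifespan.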

