IJISA Vol. 17, No. 5, Oct. 2025
REGULAR PAPERS
Kidney stones are solid mineral and salt deposits formed within the kidneys, causing excruciating discomfort and pain when they obstruct the urinary tract. The presence of speckle noise in CT-scan images, coupled with the limitations of manual interpretation, makes kidney stone detection challenging and highlights the need for precise and efficient diagnosis. This research investigates the efficacy of YOLOv8 models for kidney stone detection, aiming to strike a balance between computational efficiency and detection accuracy. The study's novel evaluation framework and practical deployment considerations underscore its contributions to advancing kidney stone detection technologies. It evaluates five YOLOv8 variants (nano, small, medium, large, and extra-large) using standard metrics such as precision, recall, F1-score, and mAP@50, alongside computational resources like training time, power consumption, and memory usage. The comprehensive evaluation reveals that while YOLOv8s and YOLOv8x demonstrate superior performance on traditional metrics, YOLOv8s emerges as the optimal model, offering a harmonious balance with its high precision (0.917), highest mAP@50 (0.918), moderate power consumption (150 W), and efficient memory usage. Graphical analyses further elucidate the behaviour of each model across different confidence thresholds, confirming the robustness of YOLOv8s. Additionally, this research explores the impact of model size and complexity on inference speed, demonstrating that smaller YOLOv8 variants achieve real-time performance with minimal latency. The study also introduces a method for model scalability, allowing accuracy and computational demand to be adjusted to specific clinical or resource constraints. These contributions emphasize the importance of holistic model assessment for real-world medical applications.
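The detection metrics above can be made concrete with a small sketch: at mAP@50, a detection counts as a true positive when its IoU with a ground-truth box is at least 0.5, and precision, recall and F1 then follow from the matched counts. The boxes and counts below are illustrative, not values from the study.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detection_f1(tp, fp, fn):
    """Precision, recall and F1 from matched-detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Two partially overlapping boxes; IoU below 0.5 would not count at mAP@50.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))   # -> 0.333...
print(detection_f1(tp=90, fp=10, fn=20))      # -> (0.9, 0.818..., 0.857...)
```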
In this article, novel mixtures of the conditional volatility models Generalized Autoregressive Conditional Heteroscedasticity (GARCH); Exponential GARCH (EGARCH); Glosten, Jagannathan, and Runkle GARCH (GJR-GARCH); and Threshold GARCH (TGARCH) were thoroughly expounded in a Bayesian paradigm. The Expectation-Maximization (EM) algorithm was employed as the parameter estimation technique to work out the posterior distributions of the involved hyper-parameters after setting up their corresponding prior distributions. The mode was adopted as the stable location parameter instead of the mean, because it can robustly adapt to the symmetry, skewness, heteroscedasticity and multimodality effects simultaneously needed to redefine the switching conditional variance processes, conceived as mixture components based on the shifting number of modes in the marginal density of the Skewed Generalized Error Distribution (SGED) set as the prior random noise.
The models were applied to the ten (10) most used cryptocurrency coins and tokens via their daily open, high, low, close and volume values converted and transacted in USD from their dates of inception. Binance Coin (BNB), via its daily low units transacted in USD (that is, low-BNB), yielded the lowest Deviance Information Criterion (DIC) of 3651.1935. The low-BNB process followed a two-regime TGARCH process, that is, a Mixture TGARCH (TGARCH(2: 2, 2)), with stable probabilities of 33% and 66% respectively. The first regime was characterized by a low unconditional volatility of 16.96664, while the second regime traded with a high unconditional volatility of 585.6190. In summary, Binance Coin (BNB) was a mixture of tranquil and stormy market conditions. This implies that the first regime of low-BNB was characterized by a strong fluctuating reaction to past negative daily returns of low-BNB converted to USD, while the second regime exhibited a weak fluctuating reaction. Additionally, the first regime was attributed with a low repetitive volatility process, while the second regime was characterized by a highly persistent fluctuating process. For financial and economic decision-making, cryptocurrency users and financial bodies should look out for financial and economic sabotage agents, such as war, exchange rate instability, political crises, inflation, network fluctuation etc., that arose, declined or fluctuated during the ten (10) years of study of the coins and tokens, to ascertain which of these agent(s) contributed to the volatility process.
Mixture models from a Bayesian perspective were of interest because some of the classical (traditional) models can neither accommodate regime-switching traits nor incorporate prior information known about cryptocurrency coins and tokens. In terms of model performance, DIC values were compared from best to worst performing, that is, from lower to higher DIC values.
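The switching conditional variance processes described above can be illustrated with a minimal threshold-style GARCH(1,1) recursion, in which negative shocks receive an extra leverage weight; the parameter values below are hypothetical, not estimates from the paper.

```python
def threshold_garch_variance(shocks, omega, alpha, gamma, beta, sigma2_0):
    """Conditional variance path of a threshold (GJR-type) GARCH(1,1):
    sigma2_t = omega + (alpha + gamma * 1[eps_{t-1} < 0]) * eps_{t-1}**2 + beta * sigma2_{t-1}
    """
    sigma2 = [sigma2_0]
    for eps in shocks:
        leverage = gamma if eps < 0 else 0.0   # extra weight on negative returns
        sigma2.append(omega + (alpha + leverage) * eps ** 2 + beta * sigma2[-1])
    return sigma2

def unconditional_variance(omega, alpha, gamma, beta):
    """Long-run variance under symmetric shocks, valid when alpha + gamma/2 + beta < 1."""
    return omega / (1 - alpha - gamma / 2 - beta)

# Illustrative parameters: a persistent but stationary volatility process.
path = threshold_garch_variance([1.0, -1.0], omega=0.1, alpha=0.05,
                                gamma=0.1, beta=0.8, sigma2_0=1.0)
print(path)                                        # -> [1.0, 0.95, 1.01]
print(unconditional_variance(0.1, 0.05, 0.1, 0.8))  # -> 1.0
```

A two-regime mixture such as the paper's TGARCH(2: 2, 2) would run two such recursions with different parameter sets, weighted by the regime probabilities.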
Predicting attitudes towards people with tuberculosis is a solution for preserving public health and a means of strengthening social ties to improve resilience to health threats. The assessment of attitudes towards the sick in general is essential for understanding the educational level of a given population and for measuring its resilience in contributing to the health of all within the framework of community life. The case of tuberculosis is chosen in this study to highlight the need for a change in attitudes, particularly given the preponderance of this disease in Africa. While it is clear that attitudes influence the organization of individuals and community life, it remains a challenge to put in place an effective mechanism for evaluating the metrics that determine attitudes towards people with tuberculosis. Knowledge of attitudes towards any disease is very important for understanding collective values around that disease; hence the need to predict attitudes in the case of tuberculosis, in support of health education for all social strata, while targeting areas of training not yet explored or requiring capacity building among populations. Changing attitudes towards tuberculosis patients will contribute to preserving public health and will help reduce stigma, improve understanding of the disease and encourage supportive and preventive behaviors. Achieving these changes involves dismantling stereotypes, improving access to care, mobilizing the media and social networks, including people with TB in society and strengthening the commitment of public authorities. The approach adopted consists of assessing the state of attitudes towards tuberculosis patients at a given time and in a specific space based on the characteristics of the different social strata living there.
An analysis of several metrics provided by machine learning algorithms makes it possible to identify differences in attitudes and serves as a decision-making aid on the strategies to be implemented. This work also relies on the investigation and analysis of historical trends using machine learning algorithms to understand population attitudes towards tuberculosis patients.
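As a minimal sketch of the attitude prediction described above, a classifier can be trained on encoded socio-demographic features to predict a binary attitude label. The features, their encoding, and the labels below are purely illustrative assumptions, not data from the study.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical encoded survey rows: [age_group, education_level, urban(0/1)]
# with a binary attitude label (1 = supportive, 0 = stigmatising). All values illustrative.
X = [[1, 0, 0], [2, 0, 0], [1, 1, 1], [3, 2, 1], [2, 2, 1], [3, 1, 0]]
y = [0, 0, 1, 1, 1, 0]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# Predict the likely attitude of a new, unseen social-stratum profile.
print(clf.predict([[1, 2, 1]]))
```

In practice, per-stratum prediction differences like these are what would feed the decision-making aid described above.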
Heart attacks continue to be one of the primary causes of death globally, highlighting the critical need for advanced predictive models to improve early diagnosis and timely intervention. This study presents a comprehensive machine learning (ML) approach to heart attack prediction, integrating multiple datasets from diverse sources to construct a robust and accurate predictive model. The research employs a stacking ensemble model, which combines the strengths of individual ML algorithms to improve overall performance. Extensive data preprocessing steps were carefully undertaken to preserve the dataset's integrity and maintain its quality. The results demonstrate a superior accuracy of 97.48%, significantly outperforming state-of-the-art approaches. This high level of accuracy indicates the model's potential effectiveness in clinical settings for the early detection and prevention of heart attacks. However, the proposed model is influenced by the quality and diversity of the integrated datasets, which could affect its generalizability across broader populations. Challenges encountered during the model's development include optimizing hyperparameters for multiple classifiers, ensuring data preprocessing consistency, and balancing computational efficiency with model interpretability. The results underscore the pivotal contribution of advanced ML approaches in revolutionizing the management of cardiovascular disease. By addressing the complexities and variabilities inherent in heart attack prediction, the work provides a pathway towards more effective and personalized cardiovascular disease management strategies, demonstrating the transformative potential of ML in healthcare.
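A stacking ensemble of the kind described above can be sketched with scikit-learn: base learners are fitted and their out-of-fold predictions feed a meta-learner. The synthetic data, the choice of base learners, and the split below are illustrative assumptions, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for an integrated heart-attack dataset.
X, y = make_classification(n_samples=400, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

# Base learners' cross-validated predictions become features for the meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=42)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print(f"held-out accuracy: {stack.score(X_te, y_te):.3f}")
```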
Arrhythmias are irregularities in heartbeats, and hence accurate classification of arrhythmias is of great importance for directing patients to the right cardiac care. This paper presents a five-class arrhythmia classification framework applying an Encoded Transformer (ET) based Convolutional Neural Network and Long Short-Term Memory (CNN-ET-LSTM) hybrid model to ECG signals. The dataset used in this research is the widely used MIT-BIH arrhythmia database, which has five distinct beat types: non-ectopic beats (N), ventricular ectopic beats (V), supraventricular ectopic beats (S), fusion beats (F), and unknown beats (Q). The class imbalance problem is dealt with by utilizing the Synthetic Minority Oversampling Technique (SMOTE), which improves performance, especially on minority classes. In the proposed CNN-ET-LSTM model, the CNN is used as a feature extractor and the long-range dependencies in the ECG waveform are captured by the encoded transformer module. The LSTM layers are used to process features sequentially and feed them to the fully connected layers for classification. Experimental results showed that the proposed system achieved an accuracy of 97.52%, precision of 97.80%, recall of 97.52% and F1-score of 97.62% on raw blind test data. The performance of the model also compares favourably with other existing methods that use the same dataset, and it was found useful for clinical applications.
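SMOTE, used above to counter class imbalance, synthesises minority-class samples by interpolating between a minority sample and one of its nearest minority neighbours. The following is a minimal NumPy sketch of that idea, not the paper's implementation; the toy points are illustrative.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Synthesise n_new minority samples: pick a minority sample, pick one of its
    k nearest minority neighbours, and interpolate a random fraction of the way."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        lam = rng.random()                         # interpolation factor in [0, 1)
        new.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(new)

# Toy minority class (e.g. fusion beats) in a 2-D feature space.
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synthetic = smote_oversample(minority, n_new=6, rng=0)
print(synthetic.shape)  # -> (6, 2)
```

Every synthetic point lies on a segment between two real minority points, so the oversampled class stays inside its original region of feature space.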
Early detection of diseases and pests is vital for proper cultivation of in-demand, high-quality crops, such as cacao. With the emergence of recent trends in technology, artificial intelligence has made disease diagnosis and classification more convenient and non-invasive through the application of image processing, machine learning, and neural networks. This study presents an alternative approach for developing a cacao pod disease classifier using hybrid machine learning models, integrating the capabilities of convolutional neural networks and support vector machines. Convolutional neural networks were employed to extract complex and high-level features from images, overcoming the limitations of conventional image processing techniques in capturing intricate patterns and details. Support vector machines, on the other hand, excel at differentiating between classes by effectively utilizing distinctive numerical parameters representing datasets with clear interclass differences. Raw images of cacao pods were utilized as inputs for extracting relevant parameters to distinguish three distinct classes: healthy, black pod rot diseased, and pod borer infested. For visual feature extraction, four convolutional neural network architectures were considered: AlexNet, ResNet50, DenseNet201, and MobileNetV2. The outputs of the fully connected layers of the neural networks were used as references for training the support vector machine classifier, considering linear, quadratic, cubic, and Gaussian kernel functions. Among all hybrid pairs, the DenseNet201 – Cubic Kernel Support Vector Machine attained the highest testing accuracy of 98.4%. The model even outperformed two pre-existing systems focused on the same application, with corresponding accuracies of 91.79% and 94%, respectively. Thus, it offers an improved, non-invasive method for detecting black pod rot disease or pod borer infestation in cacao pods.
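The hybrid pipeline above can be sketched as: CNN feature vectors are fed to SVMs with linear, quadratic, cubic and Gaussian kernels, and each pairing is scored. The random feature vectors below merely stand in for DenseNet201 fully-connected-layer outputs; the class structure and dimensions are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in for CNN feature vectors; real features would come from a pretrained network.
rng = np.random.default_rng(0)
classes = {"healthy": 0, "black_pod_rot": 1, "pod_borer": 2}
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 8)) for c in classes.values()])
y = np.repeat(list(classes.values()), 30)

# The paper's linear, quadratic, cubic and Gaussian kernels map to these SVC settings.
kernels = {"linear": SVC(kernel="linear"),
           "quadratic": SVC(kernel="poly", degree=2),
           "cubic": SVC(kernel="poly", degree=3),
           "gaussian": SVC(kernel="rbf")}
for name, clf in kernels.items():
    clf.fit(X, y)
    print(name, f"{clf.score(X, y):.2f}")
```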
This study presents a deep learning-based approach to automated resume and job matching that uses semantic similarity between texts. The solution is based on SimCSE RoBERTa transformer embeddings and a Siamese neural architecture trained using the MSELoss loss function. Unlike traditional keyword-based or attributive matching systems, the proposed model learns to place semantically compatible (resume, vacancy) pairs in a common vector space, capturing deep semantic alignment between resumes and job descriptions. To evaluate the effectiveness of this architecture, we conducted extensive experiments on a labelled dataset of over 7,000 resume–vacancy pairs obtained from the HuggingFace repository. The dataset includes three classes (Good Fit, Potential Fit, No Fit), which we restructured into a binary classification task. Annotation labels reflect textual compatibility based on skills, responsibilities, and experience, ensuring task relevance.
The restructuring resulted in a moderately imbalanced dataset with approximately 66% positive and 34% negative examples. Our model achieved accuracy = 72%, precision = 70%, recall = 74%, F1-score = 72%, and Precision@10 = 75%. To justify the use of a complex Siamese architecture, the system was compared to two baselines: (1) a classical TF-IDF + cosine similarity method, and (2) a pretrained Sentence-BERT model without task-specific fine-tuning. The proposed model significantly outperformed both baselines across all evaluation metrics, confirming that its complexity translates into meaningful performance gains and validating its empirical effectiveness for candidate ranking and selection. A basic self-learning mechanism is implemented and functional: recruiters can provide binary feedback (Fit / No Fit) for each recommended candidate, which is stored in a feedback table. This feedback can be used to retrain or fine-tune the model periodically, enabling adaptive behaviour over time. While initial retraining experiments were conducted offline, full automation and continuous integration of feedback into training pipelines remain a goal for future development. The system offers sub-5-second response times, integration with vector databases, and a web-based user interface. It is designed for use in HR departments, recruiting agencies, and employment platforms, with potential for broader commercial deployment and domain adaptation.
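The pair-scoring idea above can be sketched in NumPy: embed the resume and the vacancy, score the pair by cosine similarity, and train against binary Fit / No Fit targets with a mean-squared-error loss. The toy vectors stand in for SimCSE RoBERTa embeddings and are purely illustrative.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between a resume embedding and a vacancy embedding."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def mse_loss(predicted, target):
    """MSELoss over a batch of similarity scores, as used to train the pair scorer."""
    predicted, target = np.asarray(predicted), np.asarray(target)
    return float(np.mean((predicted - target) ** 2))

# Toy embeddings standing in for transformer outputs (values illustrative).
resume = np.array([0.2, 0.9, 0.1])
vacancy_fit = np.array([0.25, 0.85, 0.05])
vacancy_no_fit = np.array([0.9, -0.1, 0.4])

scores = [cosine_similarity(resume, vacancy_fit),
          cosine_similarity(resume, vacancy_no_fit)]
targets = [1.0, 0.0]  # binary Fit / No Fit labels
print(f"similarities: {scores[0]:.3f}, {scores[1]:.3f}; "
      f"loss: {mse_loss(scores, targets):.4f}")
```

Ranking candidates for a vacancy then amounts to sorting resumes by this similarity score, which is what metrics such as Precision@10 evaluate.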
While the UI and vector retrieval infrastructure were developed to support prototyping and deployment, the primary research innovation centres on the modelling framework, learning setup, and comparative evaluation methodology. This work contributes to the advancement of semantically aware intelligent recruiting systems and offers a replicable baseline for future studies in neural recommendation for HR applications. The risks of algorithmic bias are emphasised separately: even in the absence of obvious demographic characteristics in the input data, the model can implicitly reproduce social or historical inequalities inherent in the data. In this regard, the study outlines areas for further development, in particular fairness auditing, bias reduction techniques, and the integration of human validation in decision-making.