IJIEEB Vol. 17, No. 6, Dec. 2025
Cover page and Table of Contents: PDF (size: 747KB)
REGULAR PAPERS
Income inequality is a persistent issue in both developed and developing economies, influenced by complex socio-economic factors such as education, occupation, and gender. This study addresses a critical gap by applying advanced machine learning techniques to analyze the socio-economic determinants of income in Bangladesh and global contexts. The primary objectives were to identify the most influential factors affecting income and assess the effectiveness of various machine learning models in predicting income levels. Using datasets from Bangladesh and global sources, this study employed Random Forest, Gradient Boosting, Logistic Regression, and Support Vector Machines to predict income and assess feature importance. Key findings showed that education, occupation, gender, and hours worked per week were the most significant predictors of income. The Bangladeshi dataset highlighted limited access to higher education and pronounced gender disparities, while the global dataset reflected gender pay gaps and more equitable educational access. The Random Forest classifier emerged as the most effective model, achieving 100% accuracy on the Bangladeshi dataset and 96% on the global dataset. These findings underscore the need for targeted policies to improve educational access, promote vocational training, and address gender inequality to reduce income disparities. Additionally, the study demonstrates the potential of machine learning to uncover non-linear relationships in socio-economic data, providing valuable insights for evidence-based policymaking. This research highlights the importance of integrating advanced data-driven methods to address the socio-economic drivers of income inequality and promote inclusive economic growth.
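For readers who want a concrete picture of this kind of analysis, the following minimal sketch trains a Random Forest classifier on a tabular income dataset and ranks predictor importance with scikit-learn; the file name and column names are hypothetical placeholders, not the study's actual data or code.

# Minimal sketch of feature-importance analysis with a Random Forest classifier.
# The file name and column names below are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("income_survey.csv")  # hypothetical dataset
X = pd.get_dummies(df[["education", "occupation", "gender", "hours_per_week"]])
y = df["income_level"]                 # e.g. "low" / "high"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
# Rank predictors by impurity-based importance.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))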
Effective information technology governance is essential for improving public service delivery and administrative efficiency at the village government level. This research focuses on Simpang Pasir Village in Palaran District, Samarinda City, East Kalimantan Province, aiming to establish a foundation for an electronic-based government system (e-government). By employing the TOGAF Architecture Development Method (ADM) version 9.2, this study systematically designs an enterprise architecture encompassing business, data, application, and technology domains. The process spans from the preliminary phase to migration planning, with gap analysis conducted to align baseline and target architectures. Key outputs include the development of integrated systems for administrative tasks and digital public services, supported by cloud server technology to ensure scalability and efficiency. Validation of the design using the Enterprise Architecture Scorecard yielded a score of 82.27%, indicating strong alignment with Simpang Pasir Village's objectives and readiness for implementation. This initiative addresses critical challenges, including data integration, transparent governance, and improved public services. The research outcomes provide a comprehensive roadmap for transitioning to e-government, supporting the village's mission to advance IT-based governance while fostering self-reliance and community empowerment. The findings contribute valuable insights for digitally transforming rural governments, positioning Simpang Pasir Village as a model for innovation and modernization.
E-commerce platforms have given users and organizations a new avenue for product distribution and selling. Product distribution is strongly shaped by end-user opinions, and tampered or fabricated reviews can badly damage a product. The natural language processing (NLP) domain deals with the analysis of these reviews and provides users with recommendations for decision making, but it also faces issues such as fake reviews, review tampering, and the secure transfer of reviews. In this paper, a blockchain-based sentiment analysis framework is proposed that provides users with a secure and trustworthy environment for opinion reviews, together with a hybrid sentiment module that combines machine learning and deep learning algorithms for sentiment score generation. The proposed model was evaluated on datasets from varied domains and shows a substantial improvement in the accuracy of its results.
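The sketch below illustrates the general idea of anchoring reviews in a tamper-evident structure: each review and its sentiment score is appended to a simple hash-chained ledger. The trivial lexicon score is only a stand-in for the paper's hybrid machine-learning/deep-learning sentiment module, and the ledger class is illustrative, not the proposed framework itself.

# Minimal sketch: anchor each review and its sentiment score in a hash-chained
# ledger so later tampering is detectable. The lexicon score below is only a
# stand-in for a learned sentiment model.
import hashlib, json, time

def sentiment_score(text):
    positive, negative = {"good", "great", "excellent"}, {"bad", "poor", "awful"}
    words = text.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

class ReviewLedger:
    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64, "payload": "genesis"}]

    def _hash(self, block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_review(self, review):
        payload = {"review": review, "score": sentiment_score(review), "ts": time.time()}
        block = {"index": len(self.chain),
                 "prev_hash": self._hash(self.chain[-1]),
                 "payload": payload}
        self.chain.append(block)
        return block

ledger = ReviewLedger()
print(ledger.add_review("Great product, excellent delivery"))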
This paper proposes DFI-ADR (Dynamic Fuzzy Information System with Agriculture Decision Retrieval) aimed at improving agricultural decision-making through case-based reasoning and precise information retrieval. This approach uses fuzzy logic and machine learning techniques, such as IndRNN, to compute similarity scores between historical agricultural cases and new queries. This enables dynamic classification of cases as "distinct," "similar," or "highly comparable" based on fuzzy membership values. These values significantly enhance the accuracy of decisions related to agricultural factors like crop yield, soil quality, and irrigation. The methodology outperforms traditional methods in terms of accuracy, recall, and precision, proving highly effective for agricultural analysis and decision-making. In experiments with the Agriculture Dataset Karnataka, DFI-ADR achieved an accuracy of 95%, a precision of 100%, and an F1-score of 94.74%, significantly outperforming traditional methods by a margin of 10-15% across these metrics. These values demonstrate its effectiveness for agricultural analysis and decision-making.
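As an illustration of the fuzzy classification step, the sketch below maps a similarity score in [0, 1] to membership degrees for the three case categories; the breakpoints are illustrative guesses rather than the paper's calibrated values, and the IndRNN-based similarity computation is assumed to have happened upstream.

# Minimal sketch of fuzzy classification of a case-similarity score.
# Breakpoints are illustrative, not the paper's calibration.
def trapezoid(x, a, b, c, d):
    # Trapezoidal membership function rising on [a, b], flat on [b, c], falling on [c, d].
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def classify_case(similarity):
    memberships = {
        "distinct":          trapezoid(similarity, -0.1, 0.0, 0.3, 0.5),
        "similar":           trapezoid(similarity,  0.3, 0.5, 0.6, 0.8),
        "highly comparable": trapezoid(similarity,  0.6, 0.8, 1.0, 1.1),
    }
    label = max(memberships, key=memberships.get)
    return label, memberships

print(classify_case(0.72))  # -> ('highly comparable', {...})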
Machine learning models that lack transparency can lead to biased conclusions and decisions in automated systems across various domains. To address this issue, explainable AI (XAI) frameworks such as Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) have evolved, offering interpretable insights into machine learning model decisions. This study presents a thorough comparison of LIME and SHAP applied to a Random Forest model trained on a loan dataset, which achieved an accuracy of 85%, a precision of 84%, a recall of 97%, and an F1 score of 90%. The study's primary contributions are as follows: (1) using Shapley values, which represent the contribution of each feature, to show that SHAP provides deeper and more reliable feature attributions than LIME; (2) demonstrating that LIME lacks the sophisticated interpretability of SHAP, despite offering faster and more generalizable explanations across various model types; (3) quantitatively comparing computational efficiency, where LIME displays a faster runtime of 0.1486 seconds using 9.14 MB of memory, compared to SHAP with a computational time of 0.3784 seconds using 1.2 MB of memory. By highlighting the trade-offs between LIME and SHAP in terms of interpretability, computational complexity, and application to various computer systems, this study contributes to the field of XAI. The outcome helps stakeholders better understand and trust AI-driven loan choices, which advances the development of transparent and responsible AI systems in finance.
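A minimal sketch of such a comparison is shown below, using the lime and shap packages on a Random Forest trained on synthetic stand-in data rather than the study's loan dataset; it only demonstrates the two APIs, not the study's experimental protocol.

# Minimal sketch comparing LIME and SHAP explanations of a Random Forest.
# Requires the lime and shap packages; data is a synthetic stand-in.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer
import shap

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# LIME: local surrogate explanation for one instance.
lime_exp = LimeTabularExplainer(X, feature_names=feature_names,
                                class_names=["reject", "approve"],
                                mode="classification")
print(lime_exp.explain_instance(X[0], model.predict_proba, num_features=5).as_list())

# SHAP: Shapley-value attributions from a tree explainer.
# (The structure of shap_values varies across SHAP versions: a list per class
# or a single array with a class dimension.)
shap_values = shap.TreeExplainer(model).shap_values(X[:50])
print(np.shape(shap_values))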
This research addresses the fundamental requirement of synthetic data generation to improve machine learning model precision and support one-shot learning, lessening the need to collect additional input data. The project aims to develop a service that can generate realistic synthetic data from a given dataset. The service was first designed and developed; the project structure was then established and libraries were chosen during pre-development analysis. Subsequent phases included dataset collection, assessment, and iterative research. Different hyperparameters were run over multiple models to select an optimal configuration. To evaluate model performance, synthetic datasets of roughly 1.5 and 2 times the size of the original were produced, providing a basis for a robust synthetic data generation process.
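The sketch below shows one very simple way to generate synthetic tabular data at 1.5x and 2x the original size by resampling each column independently; it ignores cross-column correlations and is only a baseline illustration, not the service developed in this work.

# Highly simplified sketch of tabular synthetic data generation: numeric columns
# are resampled from fitted normal distributions and categorical columns from
# their empirical frequencies. Column correlations are deliberately ignored.
import numpy as np
import pandas as pd

def synthesize(df, factor, seed=0):
    rng = np.random.default_rng(seed)
    n = int(len(df) * factor)
    out = {}
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            out[col] = rng.normal(df[col].mean(), df[col].std(ddof=0), n)
        else:
            probs = df[col].value_counts(normalize=True)
            out[col] = rng.choice(probs.index.to_numpy(), size=n, p=probs.to_numpy())
    return pd.DataFrame(out)

original = pd.DataFrame({"age": [23, 35, 41, 29, 52, 47],
                         "segment": ["a", "b", "a", "c", "b", "a"]})
for factor in (1.5, 2.0):
    print(factor, synthesize(original, factor).shape)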
This qualitative study explores human resource managers' perceptions of blockchain technology adoption within the automobile sector in Punjab, India. Based on semi-structured interviews with HR managers from 52 organizations, the research uncovers critical insights into blockchain's potential benefits and challenges. The findings reveal that 80% of participants recognize blockchain's ability to enhance data security, improve operational transparency, and streamline processes such as recruitment and supply chain management. For instance, blockchain's ability to automate credential verification is perceived to reduce recruitment time by up to 30%. However, significant barriers impede adoption. Approximately 70% of HR managers cited technical complexity and a lack of in-house expertise as primary challenges, while 60% expressed concerns about high implementation costs and the absence of a clear regulatory framework. Furthermore, 50% highlighted resistance to change among employees as a critical obstacle. The study emphasizes the importance of targeted training programs to address skill gaps, strategic planning to manage high costs, and effective change management strategies to reduce resistance. These findings underscore the transformative potential of blockchain technology to improve HR efficiency and organizational performance while highlighting the need for addressing adoption barriers to unlock its full benefits. This research provides actionable insights for the automobile industry, contributing to academic discourse and offering a roadmap for blockchain integration into HR practices.
The introduction of cloud technology has changed the face of data management by removing tedious concerns about storage and accessibility, since data can be reached from any location; at the same time, the emergence of this technology has brought a number of challenges related to data confidentiality, integrity, and authentication. To resolve these weaknesses, this paper proposes a hybrid security model that integrates quantum cryptography and blockchain technology and addresses security flaws in both cloud and quantum models. Three characteristics of data are crucial to its safety and security: confidentiality, integrity, and availability, and cloud technology has long faced challenges in these respects; with blockchain technology, data becomes immutable, decentralized, and transparent, thereby reducing the risk of unauthorized access. The combination of strategies proposed in this paper helps to eliminate a number of drawbacks, such as key loss, data loss, and man-in-the-middle attacks, that are common in cloud infrastructure. The study presents the structural design, data transmission, and processing flows of the hybrid model's architecture, aiming to achieve better data security. Analysis of the model suggests advantages over conventional encryption models and purely blockchain-based designs. Performance benchmarks are also included, demonstrating that the model is resilient to cyber threats in the quantum age. The architecture integrates seamlessly with existing cloud deployments, raises cloud security by solving these considerable challenges, and is poised to be deployed at larger scale. Future work will include improving the system's computational efficiency and extending the model to multiple cloud infrastructures to achieve higher security in today's complex cloud systems.
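The sketch below illustrates one flow that a hybrid scheme of this kind might follow: a record is encrypted before upload and its digest anchored on a ledger so integrity can be checked on retrieval. The symmetric Fernet key merely stands in for a key that, in the paper's model, would be established through quantum key distribution, and the in-memory list stands in for a blockchain.

# Minimal sketch: encrypt a record, anchor its digest, verify on retrieval.
# The Fernet key is a placeholder for a QKD-derived key; the list is a
# placeholder for a blockchain ledger.
import hashlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # placeholder for a QKD-derived key
cipher = Fernet(key)

record = b"patient-42: lab results"
ciphertext = cipher.encrypt(record)

# Anchor the ciphertext digest on the "ledger".
ledger = [hashlib.sha256(ciphertext).hexdigest()]

# On retrieval: verify integrity against the ledger, then decrypt.
assert hashlib.sha256(ciphertext).hexdigest() in ledger
print(cipher.decrypt(ciphertext).decode())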
Federated Learning (FL) enables collaborative model training across distributed clients without sharing raw data, but it remains vulnerable to privacy risks. This study introduces FL-ODP-DFT, a novel framework that integrates Optimal Differential Privacy (ODP) with Discrete Fourier Transform (DFT) to enhance both model performance and privacy. By transforming local gradients into the frequency domain, the method reduces data size and adds a layer of encryption before transmission. Adaptive Gaussian Clipping (AGC) is employed to dynamically adjust clipping thresholds based on gradient distribution, further improving gradient handling. ODP then calibrates noise addition based on data sensitivity and privacy budgets, ensuring a balance between privacy and accuracy. Extensive experiments demonstrate that FL-ODP-DFT outperforms existing techniques in terms of accuracy, computational efficiency, convergence speed, and privacy protection, making it a robust and scalable solution for privacy-preserving FL.
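A minimal sketch of the per-client gradient pipeline described above is given below: the gradient is transformed with a DFT, clipped adaptively, perturbed with calibrated Gaussian noise, and transformed back before transmission. The constants are illustrative and do not reproduce the paper's calibration.

# Minimal sketch: DFT -> adaptive clipping -> calibrated Gaussian noise -> inverse DFT.
# Constants (quantile, noise multiplier) are illustrative placeholders.
import numpy as np

def privatize_gradient(grad, quantile=0.9, noise_multiplier=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    spectrum = np.fft.rfft(grad)                      # frequency-domain representation

    # Adaptive clipping: threshold follows the observed coefficient magnitudes.
    clip = np.quantile(np.abs(spectrum), quantile)
    scale = np.minimum(1.0, clip / (np.abs(spectrum) + 1e-12))
    spectrum = spectrum * scale

    # Gaussian noise calibrated to the clipping threshold (sensitivity proxy).
    sigma = noise_multiplier * clip
    spectrum += rng.normal(0, sigma, spectrum.shape) + 1j * rng.normal(0, sigma, spectrum.shape)

    return np.fft.irfft(spectrum, n=len(grad))        # back to the gradient domain

grad = np.random.default_rng(1).normal(size=128)
print(np.linalg.norm(grad - privatize_gradient(grad)))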
Colon cancer remains a significant global health challenge, contributing to high morbidity and mortality rates. Accurate diagnosis through histological analysis is critical for effective treatment and improved patient outcomes. In this study, we present ColoNet, a convolutional neural network (CNN)-based system designed to enhance the early detection and classification of colon adenocarcinoma using the LC25000 dataset comprising 10,000 digital histopathology images. Unlike conventional CNN-based models, ColoNet integrates an optimized feature extraction strategy with deeper convolutional layers and dropout regularization, leading to improved generalization and reduced overfitting. Additionally, the proposed model achieves faster convergence and superior classification performance compared to existing methods. The system addresses the unique challenges in distinguishing benign from malignant conditions, automating the diagnostic process and streamlining colon cancer assessments for pathologists. ColoNet was rigorously evaluated across key performance metrics, including recall, accuracy, precision, and F1-score, achieving a maximum accuracy of 96.66%. This surpasses several state-of-the-art CNN models in colon cancer classification, demonstrating its effectiveness. Its high accuracy and robust classification capabilities make it a reliable tool for identifying different colon cancer stages. By providing an efficient and automated solution for pathologists, ColoNet is expected to significantly enhance colon cancer diagnosis, supporting early detection and staging, ultimately leading to better treatment outcomes and reduced cancer-related mortality. This research underscores the importance of AI-driven systems in transforming the landscape of digital pathology and improving clinical decision-making for colon cancer.
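For orientation, the sketch below shows a small CNN with dropout regularization for binary histopathology classification in Keras; it is not the ColoNet architecture itself, and the layer sizes are arbitrary placeholders.

# Illustrative sketch of a small CNN with dropout for binary histopathology
# classification; layer sizes are placeholders, not the ColoNet design.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),                # resized histology patches
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),                              # dropout to curb overfitting
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),            # benign vs. adenocarcinoma
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()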