International Journal of Mathematical Sciences and Computing (IJMSC)

ISSN: 2310-9025 (Print)

ISSN: 2310-9033 (Online)

DOI: https://doi.org/10.5815/ijmsc

Website: https://www.mecs-press.org/ijmsc

Published By: MECS Press

Frequency: 4 issues per year

Number(s) Available: 46

IJMSC in Google Scholar Citations / h5-index

IJMSC is committed to bridging the theory and practice of mathematical sciences and computing. IJMSC publishes original, peer-reviewed, and high-quality articles in the areas of mathematical sciences and computing. IJMSC is a well-indexed scholarly journal and provides indispensable reading and reference material for people working at the cutting edge of mathematical sciences and computing applications.

 

IJMSC has been abstracted or indexed by several world-class databases: Google Scholar, Microsoft Academic Search, CrossRef, CNKI, Baidu Wenku, JournalTOCs, etc.

Latest Issue
Most Viewed
Most Downloaded

IJMSC Vol. 12, No. 1, Feb. 2026

REGULAR PAPERS

Multi-Dimensional Quantum Anharmonic Oscillators via Physics-Informed Transformer Networks: Extension to Non-Perturbative Regimes and Higher Dimensions

By Koffa D. Jude Ogunjobi Olakunle Odesanya Ituabhor Eghaghe S. Osas Ahmed-Ade Fatai Olorunleke I. Esther

DOI: https://doi.org/10.5815/ijmsc.2026.01.01, Pub. Date: 8 Feb. 2026

This study extends earlier work on one-dimensional anharmonic oscillators by implementing physics-informed transformer networks for multi-dimensional quantum systems. We develop a novel computational framework that combines a transformer architecture with physics-informed neural networks (PINNs) to solve the Schrödinger equation for 2D and 3D anharmonic oscillators, addressing both perturbative and non-perturbative regimes. The methodology incorporates attention mechanisms to capture long-range quantum correlations, orthogonal loss functions for eigenfunction discovery, and adaptive training protocols for progressive dimensionality scaling. Our approach successfully computes eigenvalues and eigenfunctions for quartic anharmonic oscillators in multiple dimensions with coupling parameters ranging from weak (λ = 0.01) to strong (λ = 1000) regimes. Results demonstrate superior accuracy compared to traditional neural networks, with mean absolute errors below 10⁻⁶ for ground-state energies and the successful capture of symmetry breaking in anisotropic systems. The transformer-based architecture requires 60% fewer trainable parameters than conventional feedforward networks while maintaining comparable accuracy. Applications to molecular vibrational systems and solid-state physics demonstrate the practical utility of this approach for realistic quantum mechanical problems beyond the scope of perturbative methods.

[...] Read more.
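
As an illustration of the physics-informed loss idea summarized above (not the authors' transformer implementation), the following minimal Python sketch evaluates the Schrödinger residual of a trial Gaussian wavefunction for the 1D quartic anharmonic oscillator on a grid; a PINN would minimize exactly this kind of residual, plus normalization and orthogonality penalties, over network parameters.

import numpy as np

# 1D quartic anharmonic oscillator: H = -0.5 d^2/dx^2 + 0.5 x^2 + lam * x^4 (illustrative setup)
lam = 0.1
x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]

def residual_loss(psi, E):
    """Mean-squared Schrodinger residual |H psi - E psi|^2 plus a normalization penalty."""
    d2psi = np.gradient(np.gradient(psi, dx), dx)            # finite-difference second derivative
    H_psi = -0.5 * d2psi + (0.5 * x**2 + lam * x**4) * psi
    norm_penalty = (np.sum(psi**2) * dx - 1.0) ** 2
    return np.mean((H_psi - E * psi) ** 2) + norm_penalty

# Trial state (illustrative): normalized Gaussian, exact ground state only when lam = 0
psi_trial = np.exp(-x**2 / 2.0)
psi_trial /= np.sqrt(np.sum(psi_trial**2) * dx)
print(residual_loss(psi_trial, E=0.5))   # small but nonzero because lam > 0
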
A Novel RSA Cryptosystem Variant Using Chaotic Exponent Selection and Ciphertext Blinding

By Arshi Fatima Namita Tiwari

DOI: https://doi.org/10.5815/ijmsc.2026.01.02, Pub. Date: 8 Feb. 2026

This paper presents an enhanced variant of the RSA cryptosystem that integrates chaotic exponent selection and ciphertext blinding to address security limitations inherent in classical RSA. Traditional RSA relies on a fixed public exponent, which generates predictable encryption patterns and increases exposure to exponent-based attacks. In the proposed scheme, the encryption exponent is dynamically derived from a logistic-map–based chaotic sequence, introducing high sensitivity to initial conditions and producing session-dependent exponent values. This chaotic exponentiation increases unpredictability without modifying the established RSA framework. Additionally, a ciphertext blinding factor is incorporated to prevent deterministic outputs and strengthen resistance against chosen-ciphertext and side-channel attacks. The paper outlines the mathematical background of the logistic map, details the complete encryption and decryption procedures, and demonstrates the correctness of the method through a numerical example using small primes. A theoretical security analysis shows that the combined effects of chaotic exponent selection and blinding significantly improve resistance to key-related attacks while maintaining compatibility with the original RSA structure. These enhancements offer a lightweight and practical improvement to RSA for environments requiring increased confidentiality and unpredictability in exponent selection.

[...] Read more.
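
A toy sketch (small primes, illustrative only, not the paper's exact protocol) of how a logistic-map sequence can drive the choice of an RSA encryption exponent; the blinding step described in the abstract would additionally randomize each ciphertext before transmission.

import math

def chaotic_exponent(x0, r, phi, burn_in=50):
    """Derive an RSA public exponent from logistic-map iterates x_{n+1} = r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    e = 3 + 2 * int(x * 10_000)           # map the chaotic value to an odd integer
    while math.gcd(e, phi) != 1:          # ensure e is invertible mod phi(n)
        e += 2
    return e

# Textbook RSA with small primes (for illustration only)
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = chaotic_exponent(x0=0.37, r=3.99, phi=phi)
d = pow(e, -1, phi)                       # private exponent (Python 3.8+)

m = 65
c = pow(m, e, n)
assert pow(c, d, n) == m
print(e, c)
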
A Simple Recourse Strategy for Efficient Allocation of Aircrafts for Satisfying Uncertain Passenger Demands

By Md. Mehedi Hasan Mohammad Babul Hasan Sujon Chandra Sutradhar

DOI: https://doi.org/10.5815/ijmsc.2026.01.03, Pub. Date: 8 Feb. 2026

This paper explores the use of stochastic optimization techniques to address the aircraft allocation problem under uncertain passenger demand. The proposed stochastic allocation model successfully meets the study’s objectives by demonstrating how uncertainty in passenger demand can be effectively incorporated into aircraft assignment decisions through a two-stage stochastic programming framework. Simulation results across multiple demand scenarios show that the model provides stable and adaptive allocations that minimize total cost while maintaining service quality, even under high variability. Incorporating the simple recourse approach enables post-decision flexibility, reducing penalties for unmet demand, and the use of Geometric Brownian Motion (GBM) offers a realistic representation of continuous demand fluctuations over time. These outcomes confirm the model’s practical value in bridging deterministic planning and real-time decision environments. While future research will focus on extending the model to a Markov Decision Process (MDP) framework and integrating real-time data streams, the current results establish a solid foundation by quantifying how uncertainty directly impacts fleet utilization, cost efficiency, and service reliability.

[...] Read more.
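
A minimal numerical sketch (hypothetical costs and demand parameters, not the paper's data) of the two-stage idea with simple recourse: demand scenarios are simulated with Geometric Brownian Motion, and for each candidate first-stage seat allocation the expected cost includes a recourse penalty for unmet demand.

import numpy as np

rng = np.random.default_rng(0)

# Scenario generation: passenger demand follows Geometric Brownian Motion (hypothetical parameters)
def gbm_demand(d0=150.0, mu=0.02, sigma=0.15, horizon=30, n_scenarios=1000):
    z = rng.standard_normal((n_scenarios, horizon))
    paths = d0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) + sigma * z, axis=1))
    return paths[:, -1]                      # demand at the end of the planning horizon

demand = gbm_demand()

seat_cost = 1.0       # first-stage cost per seat allocated (hypothetical)
penalty = 4.0         # recourse penalty per unmet passenger (hypothetical)

def expected_cost(seats):
    unmet = np.maximum(demand - seats, 0.0)  # simple recourse: pay only for the shortfall
    return seat_cost * seats + penalty * unmet.mean()

candidates = np.arange(100, 301)
best = min(candidates, key=expected_cost)
print(best, round(expected_cost(best), 2))
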
Ergodicity and the Emergence of Long-Term Balance in the Dynamical States of π

By Fethi Kadhi Moncef Ghazel Malek Ghazel

DOI: https://doi.org/10.5815/ijmsc.2026.01.04, Pub. Date: 8 Feb. 2026

This paper investigates the digits of π within a probabilistic framework based on Markov chains, proposing this model as a rigorous tool to support the conjecture of π's uniformity. Unlike simple frequency analyses, the Markov approach captures the dynamic structure of transitions between digits, allowing us to compute empirical stationary distributions that reveal how local irregularities evolve toward global equilibrium. This ergodic behavior provides quantitative, model-based evidence that the digits of π tend toward fairness in the long run. Beyond its mathematical significance, this convergence toward uniformity invites a broader conceptual interpretation.

[...] Read more.
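
A small sketch of the empirical Markov-chain construction described above: count digit-to-digit transitions in a prefix of π's decimal expansion, normalize to a transition matrix, and iterate it toward an empirical stationary distribution (only the first 50 decimals are hard-coded here; the paper works with far longer prefixes).

import numpy as np

# First 50 decimal digits of pi (illustrative prefix; the study uses many more)
digits = [int(c) for c in "14159265358979323846264338327950288419716939937510"]

counts = np.zeros((10, 10))
for a, b in zip(digits[:-1], digits[1:]):
    counts[a, b] += 1
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.full_like(counts, 1.0 / 10), where=row_sums > 0)

# Empirical stationary distribution via power iteration
pi_dist = np.full(10, 0.1)
for _ in range(500):
    pi_dist = pi_dist @ P
print(np.round(pi_dist, 3))   # with longer prefixes this approaches the uniform 0.1 vector
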
Design and Implementation of Intelligent Traffic Control Systems with Vehicular Ad Hoc Networks

By Osita Miracle Nwakeze Christopher Odeh Obaze Caleb Akachukwu

DOI: https://doi.org/10.5815/ijmsc.2026.01.05, Pub. Date: 8 Feb. 2026

Urban traffic congestion is a significant problem that contributes to long travel times, high fuel usage, and environmental impact. This paper introduces an Intelligent Traffic Control System (ITCS) that combines Vehicular Ad Hoc Networks (VANETs) and Reinforcement Learning (RL) to optimise the control of traffic signals. The system facilitates real-time two-way communication between vehicles and roadside units, allowing an RL agent to control signal phases adaptively according to traffic metrics such as average delay, queue length, and throughput. The Kaggle VANET Malicious Node Dataset was used to simulate malicious or unreliable nodes and test the robustness of the system. The RL agent was trained in the SUMO simulator via TraCI over multiple episodes, learning to take actions that increase traffic movement with a minimum amount of congestion. Training results improve progressively: cumulative rewards grow while average delays and queue lengths decline across episodes. Performance evaluation of the ITCS under peak-hour, off-peak, incident, and malicious-node scenarios demonstrated substantial gains over conventional fixed-time controllers, with average delays reduced by 48–55%, queue lengths by 49–57%, and throughput increased by 28–35%. These results indicate the success of blending reinforcement learning with VANET-supported traffic control as an adaptive, data-driven, and robust solution for urban intersections. The RL-based ITCS not only improves traffic flow and reduces congestion but is also resilient to communication anomalies, indicating its scalability for deployment in current smart-city traffic management.

[...] Read more.
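
The core learning rule behind such a controller is a standard value update; the toy sketch below shows a generic tabular Q-learning step for a signal-phase decision (it does not use SUMO/TraCI or the paper's state design, both of which are richer).

import random
from collections import defaultdict

actions = ["keep_phase", "switch_phase"]
Q = defaultdict(float)                      # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def choose_action(state):
    if random.random() < epsilon:           # epsilon-greedy exploration
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """One Q-learning step: the reward could be, e.g., negative queue length plus delay change."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Hypothetical discretized state: (queue-length bin, current phase)
s = (3, "NS_green")
a = choose_action(s)
q_update(s, a, reward=-3.0, next_state=(2, "NS_green"))
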
Performance Evaluation of Industrial and Commercial Bank of China based on DuPont Analysis

By Qiaopeng Ma Xi Wang

DOI: https://doi.org/10.5815/ijmsc.2023.01.04, Pub. Date: 8 Feb. 2023

With the reform of the Chinese economic system, the development of enterprises is facing many risks and challenges. In order to understand the operating state of enterprises, it is necessary to apply relevant methods to evaluate enterprise performance. Taking the Industrial and Commercial Bank of China as an example, this paper selects its financial data from 2018 to 2021. First, DuPont analysis is applied to decompose the return on equity into the product of the profit margin on sales, the total assets turnover ratio, and the equity multiplier. The paper then analyzes the effect of changes in these three factors on the return on equity using the chain substitution method. The results show that the effect of the profit margin on sales on the return on equity decreases year by year and turns from negative to positive. The effect of the total assets turnover ratio on the return on equity changes from positive to negative and then back to positive, while the effect of the equity multiplier is the opposite. These results provide a direction for adjusting the return on equity of the Industrial and Commercial Bank of China. Finally, based on the results, some suggestions are put forward for the development of the Industrial and Commercial Bank of China.

[...] Read more.
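
A worked sketch of the chain substitution step described above, using made-up figures rather than ICBC's actual financials: ROE is decomposed as profit margin x total assets turnover x equity multiplier, and each factor's contribution is the change in ROE when that factor alone is updated in sequence.

# DuPont identity: ROE = profit margin * total assets turnover * equity multiplier
# Illustrative (made-up) values for a base year (0) and a comparison year (1)
pm0, tat0, em0 = 0.30, 0.045, 12.0
pm1, tat1, em1 = 0.32, 0.042, 11.5

roe0 = pm0 * tat0 * em0
roe1 = pm1 * tat1 * em1

# Chain substitution: replace one factor at a time, in a fixed order
effect_pm  = pm1 * tat0 * em0 - pm0 * tat0 * em0
effect_tat = pm1 * tat1 * em0 - pm1 * tat0 * em0
effect_em  = pm1 * tat1 * em1 - pm1 * tat1 * em0

print(round(roe0, 4), round(roe1, 4))
print(round(effect_pm, 4), round(effect_tat, 4), round(effect_em, 4))
assert abs((effect_pm + effect_tat + effect_em) - (roe1 - roe0)) < 1e-12   # effects sum to the total change
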
Evaluating the impact of Test-Driven Development on Software Quality Enhancement

By Md. Sydur Rahman Aditya Kumar Saha Uma Chakraborty Humaira Tabassum Sujana S. M. Abdullah Shafi

DOI: https://doi.org/10.5815/ijmsc.2024.03.05, Pub. Date: 8 Sep. 2024

In the software development industry, ensuring software quality holds immense significance due to its direct influence on user satisfaction, system reliability, and the overall end-user experience. Traditionally, the development process involved identifying and rectifying defects after the implementation phase, which could be time-consuming and costly. This study examines software development methodologies, with a specific emphasis on Test-Driven Development, to evaluate its effectiveness in improving software quality. The study employs a mixed-methods approach, combining quantitative surveys and qualitative interviews to comprehensively investigate the impact of Test-Driven Development on various facets of software quality. The survey findings reveal that Test-Driven Development offers substantial benefits in terms of early defect detection, leading to reduced cost and effort in rectifying issues during the development process. Moreover, Test-Driven Development encourages improved code design and maintainability, fostering the creation of modular and loosely coupled code structures. These results underscore the pivotal role of Test-Driven Development in elevating code quality and maintainability. Comparative analysis with traditional development methodologies highlights Test-Driven Development's effectiveness in enhancing software quality, as rated highly by respondents. Furthermore, the study clarifies Test-Driven Development's positive impact on user satisfaction, overall product quality, and code maintainability. Challenges related to Test-Driven Development adoption are identified, such as the initial time investment in writing tests and difficulties adapting to changing requirements, and strategies to mitigate these challenges are proposed, contributing to the practical application of Test-Driven Development. The study offers valuable insights into the efficacy of Test-Driven Development in enhancing software quality: it not only highlights the benefits of Test-Driven Development but also provides a framework for addressing challenges and optimizing its utilization. This knowledge is invaluable for software development teams, project managers, and quality assurance professionals, facilitating informed decisions regarding adopting and implementing Test-Driven Development as a quality assurance technique in software development.

[...] Read more.
Blockchain: A Comparative Study of Consensus Algorithms PoW, PoS, PoA, PoV

By Shahriar Fahim SM Katibur Rahman Sharfuddin Mahmood

DOI: https://doi.org/10.5815/ijmsc.2023.03.04, Pub. Date: 8 Aug. 2023

Since the inception of Blockchain, the computer database has been evolving into innovative technologies. As new technologies emerge, the use of Blockchain also flourishes. All Blockchain-based technologies rely on a common mechanism, the consensus algorithm, to operate. The consensus algorithm is the process that assures mutual agreement and stores information in the decentralized database of the network. Blockchain's biggest drawback is its limited scalability. However, using the correct consensus for the relevant work can ensure efficiency in data storage, transaction finality, and data integrity. In this paper, a comparative study is made among the following consensus algorithms: Proof of Work (PoW), Proof of Stake (PoS), Proof of Authority (PoA), and Proof of Vote (PoV). This study aims to provide readers with elementary knowledge about blockchain, more specifically its consensus protocols. It covers their origins, how they operate, and their strengths and weaknesses. We have studied these consensus protocols in detail and uncovered some of their advantages and disadvantages with respect to characteristics such as security, energy efficiency, scalability, and IoT (Internet of Things) compatibility. This information will help future researchers understand the characteristics of the selected consensus algorithms.

[...] Read more.
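
To make the Proof of Work idea concrete, here is a minimal (toy) mining loop: a nonce is searched until the SHA-256 hash of the block header meets a leading-zero difficulty target; the other protocols compared in the paper (PoS, PoA, PoV) replace this hash lottery with stake, authority, or voting rules.

import hashlib

def mine(block_header: str, difficulty: int = 4):
    """Find a nonce whose SHA-256 hash of (header + nonce) starts with `difficulty` zeros (toy example)."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("prev_hash|merkle_root|timestamp")   # placeholder header fields
print(nonce, digest)
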
A Decision-Making Technique for Software Architecture Design

By Jubayer Ahamed Dip Nandi

DOI: https://doi.org/10.5815/ijmsc.2023.04.05, Pub. Date: 8 Dec. 2023

The process of making decisions on software architecture is of the greatest significance for the achievement of a software system's success. Software architecture establishes the framework of the system, specifies its characteristics, and has major effects across the whole life cycle of the system. The complicated characteristics of the software development context and the significance of the problem have led the research community to build various methodologies focused on supporting software architects in improving their decision-making abilities. Despite these efforts, the adoption of such systematic methodologies appears to be somewhat constrained in practical application. Moreover, decision-makers must overcome unexpected difficulties because different software development processes propose distinct approaches for architecture design. Understanding these design approaches helps to develop the architectural design framework. In the area of software architecture, a significant change has occurred: the focus has shifted from primarily identifying the result of the architecting process, which was mainly expressed through the representation of components and connectors, to documenting architectural design decisions and the underlying reasoning behind them. This shift ultimately leads to the creation of an architectural design framework, so a correct decision-making approach is needed to design the software architecture. The present study analyzes design decisions and proposes a new design decision model for software architecture. This study introduces a new approach to the decision-making model, wherein software architecture design is viewed in terms of specific decisions.

[...] Read more.
Green Computing: An Era of Energy Saving Computing of Cloud Resources

By Shailesh Saxena Mohammad Zubair Khan Ravendra Singh

DOI: https://doi.org/10.5815/ijmsc.2021.02.05, Pub. Date: 8 Jun. 2021

Cloud computing is a widely accepted computing environment, and its services are widely available, but energy consumption is one of the major issues of cloud computing from a green-computing perspective. Electronic resources such as processing and storage devices at both the client and server sides, together with network devices such as switches and routers, are the main consumers of energy in the cloud, and additional power is required during computation to cool the IT load. Because of this high consumption, cloud resources incur high energy costs during cloud service activities and contribute more carbon emissions to the atmosphere. These two issues have inspired cloud companies to develop renewable cloud sustainability regulations to control energy cost and the rate of CO2 emission. The main purpose of this paper is to create a green computing environment by saving the energy of cloud resources using a specific approach that identifies the computing resources actually required during the computation of cloud services. Only the required computing resources remain ON (working state), and the rest are switched OFF (sleep/hibernate state) to reduce energy use in cloud data centers. This approach is expected to be more efficient than other available approaches based on cloud service scheduling or migration and virtualization of services in the cloud network. It reduces the data center's energy usage by applying a power management scheme (ON/OFF) to computing resources. The proposed approach helps to convert cloud computing into green computing by identifying an appropriate number of cloud computing resources, such as processing nodes, servers, disks, and switches/routers, during any service computation on the cloud, thereby addressing energy saving and environmental impact.

[...] Read more.
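
A minimal sketch of the ON/OFF idea described above (hypothetical capacities, not the paper's scheduler): estimate how many homogeneous nodes the current load actually needs, keep that many ON, and put the rest into a sleep state.

import math

def power_plan(total_load, node_capacity, n_nodes):
    """Return (nodes ON, nodes OFF/sleep) so that only the required resources stay powered."""
    needed = min(n_nodes, max(1, math.ceil(total_load / node_capacity)))
    return needed, n_nodes - needed

# Hypothetical data center: 40 nodes, each able to serve 100 requests/s
on, off = power_plan(total_load=1350, node_capacity=100, n_nodes=40)
print(on, off)   # 14 nodes ON, 26 nodes sleeping -> energy saved on idle capacity
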
A Review of Quantum Computing

By Arebu Dejen Murad Ridwan

DOI: https://doi.org/10.5815/ijmsc.2022.04.05, Pub. Date: 8 Oct. 2022

Quantum computing is a computational framework based on quantum mechanics that has received a great deal of attention in the past few decades. In comparison to traditional computers, it has achieved remarkable performance on several specialized tasks. Quantum computing is the study of quantum computers, which use quantum mechanical phenomena such as entanglement, superposition, annealing, and tunneling to solve problems that could not otherwise be solved within a human lifetime. This article offers a brief outline of what is happening in the field of quantum computing, as well as the current state of the art. It also summarizes the features of quantum computing in terms of major elements such as qubit computation, quantum parallelism, and reversible computing. The study locates the cause of a quantum computer's great computing capability in its use of quantum entangled states. It also emphasizes that quantum computer research requires a combination of the most sophisticated sciences, such as computer technology, micro-physics, and advanced mathematics.

[...] Read more.
Comparison of Fog Computing & Cloud Computing

By Vishal Kumar Asif Ali Laghari Shahid Karim Muhammad Shakir Ali Anwar Brohi

DOI: https://doi.org/10.5815/ijmsc.2019.01.03, Pub. Date: 8 Jan. 2019

Fog computing extends cloud computing by moving computation to the edge of the network, using mobile collaborative devices or fixed nodes with built-in data storage, computing, and communication capabilities. Fog offers the advantages of improved efficiency, better security, network bandwidth savings, and mobility. To present the essential details of fog computing, we describe the characteristics of this area and distinguish it from cloud computing research. Cloud computing is a developing technology that provides computing resources for a specific task on a pay-per-use basis; it offers services through three distinct models and provides inexpensive, centrally managed resources for reliably performing required tasks. This paper compares and characterizes how fog and cloud computing differ in design, deployment, services, and tools for organizations and users. The comparison shows that fog provides a more flexible infrastructure and better data-processing service by consuming less network bandwidth instead of shifting all data to the cloud.

[...] Read more.
A LSB Based Image Steganography Using Random Pixel and Bit Selection for High Payload

By U. A. Md. Ehsan Ali Emran Ali Md. Sohrawordi Md. Nahid Sultan

DOI: https://doi.org/10.5815/ijmsc.2021.03.03, Pub. Date: 8 Aug. 2021

Security in digital communication is becoming more important as the number of systems connected to the internet grows day by day. It is necessary to protect secret messages during transmission over insecure channels of the internet, so data security becomes an important research issue. Steganography is a technique that embeds secret information into a carrier such as an image, audio file, text file, or video file so that it cannot be observed. In this paper, a new spatial-domain image steganography method is proposed to ensure the privacy of digital data during transmission over the internet. In this method, least significant bit substitution is used, where the information is embedded at random bit positions of random pixel locations of the cover image using a Pseudo Random Number Generator (PRNG). The proposed method uses a 3-3-2 approach to hide a byte in a pixel of a 24-bit color image. PRNGs are used in two different stages of the embedding process: the first selects random pixels, and the second selects random bit positions within the R, G, and B values of a pixel to embed one byte of information. Due to this randomization, the security of the system is expected to increase, and the method achieves a very high maximum hiding capacity, which underlines the importance of the proposed method.

[...] Read more.
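
A simplified sketch of the 3-3-2 embedding described above: one secret byte is split into 3, 3, and 2 bits and written into PRNG-chosen low bit positions of a pixel's R, G, and B values (the paper additionally randomizes which pixels are used; here the same seed recovers the byte).

import random

def embed_byte(pixel, byte, seed):
    """Hide one byte in an (R, G, B) pixel using a 3-3-2 split and seeded random bit positions."""
    rng = random.Random(seed)
    parts = [(byte >> 5) & 0b111, (byte >> 2) & 0b111, byte & 0b11]   # 3-3-2 split
    out = []
    for channel, bits, width in zip(pixel, parts, (3, 3, 2)):
        positions = rng.sample(range(4), width)    # random positions among the 4 low bits (simplified choice)
        for i, pos in enumerate(positions):
            bit = (bits >> i) & 1
            channel = (channel & ~(1 << pos)) | (bit << pos)
        out.append(channel)
    return tuple(out)

def extract_byte(pixel, seed):
    rng = random.Random(seed)                      # same seed reproduces the same positions
    value = 0
    for channel, width, shift in zip(pixel, (3, 3, 2), (5, 2, 0)):
        positions = rng.sample(range(4), width)
        bits = sum(((channel >> pos) & 1) << i for i, pos in enumerate(positions))
        value |= bits << shift
    return value

stego = embed_byte((183, 92, 41), ord("S"), seed=1234)
assert extract_byte(stego, seed=1234) == ord("S")
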
Some Measures of Picture Fuzzy Sets and Their Application in Multi-attribute Decision Making

By Nguyen Van Dinh Nguyen Xuan Thao

DOI: https://doi.org/10.5815/ijmsc.2018.03.03, Pub. Date: 8 Jul. 2018

To measure the difference between two fuzzy sets or intuitionistic fuzzy sets, we can use distance measures and dissimilarity measures between fuzzy sets. Characterizing distance and dissimilarity measures between fuzzy sets and intuitionistic fuzzy sets is important because such measures have applications in different areas: pattern recognition, image segmentation, and decision making. The picture fuzzy set (PFS) is a generalization of the fuzzy set and the intuitionistic fuzzy set, and therefore has many applications. In this paper, we introduce the concepts of the difference between picture fuzzy sets and of distance and dissimilarity measures between picture fuzzy sets, and we provide formulas for determining these values. We also present an application of dissimilarity measures in multi-attribute decision making.

[...] Read more.
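
For orientation, one widely used distance between picture fuzzy sets (not necessarily the exact measures defined in the paper) is the normalized Hamming distance over the positive, neutral, and negative membership degrees, sketched below.

def pfs_hamming_distance(A, B):
    """Normalized Hamming distance between two picture fuzzy sets.

    A and B are lists of (mu, eta, nu) triples (positive, neutral, negative degrees)
    over the same universe, with mu + eta + nu <= 1 for each element.
    """
    n = len(A)
    total = sum(abs(ma - mb) + abs(ea - eb) + abs(na - nb)
                for (ma, ea, na), (mb, eb, nb) in zip(A, B))
    return total / (3 * n)

A = [(0.6, 0.2, 0.1), (0.5, 0.3, 0.1)]
B = [(0.5, 0.2, 0.2), (0.4, 0.4, 0.1)]
print(pfs_hamming_distance(A, B))   # 0 means identical sets; values near 1 mean very different sets
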
Cryptographic Security using Various Encryption and Decryption Method

By Ritu Goyal Mehak Khurana

DOI: https://doi.org/10.5815/ijmsc.2017.03.01, Pub. Date: 8 Jul. 2017

Rapid development in ubiquitous computing and the growth of radio/wireless and mobile technologies have expanded the application space for Radio Frequency Identification (RFID), wireless sensors, and the Internet of Things (IoT). Numerous applications are safety- and privacy-sensitive. The proliferation of new devices has enabled intelligent ways of linking physical devices and the computing world through numerous network interfaces. Consequently, it is essential to take note of the risks arising from these communications. In wireless systems, RFID and sensor networks are widely deployed in military, commercial, and automotive applications. With the extensive use of wireless and mobile devices, security has therefore become a major concern. As a consequence, the need for highly secure encryption and decryption primitives in such devices is more important than ever before.

[...] Read more.
Performance Evaluation of Industrial and Commercial Bank of China based on DuPont Analysis

By Qiaopeng Ma Xi Wang

DOI: https://doi.org/10.5815/ijmsc.2023.01.04, Pub. Date: 8 Feb. 2023

With the reform of the Chinese economic system, the development of enterprises is facing many risks and challenges. In order to understand the operating state of enterprises, it is necessary to apply relevant methods to evaluate enterprise performance. Taking the Industrial and Commercial Bank of China as an example, this paper selects its financial data from 2018 to 2021. First, DuPont analysis is applied to decompose the return on equity into the product of the profit margin on sales, the total assets turnover ratio, and the equity multiplier. The paper then analyzes the effect of changes in these three factors on the return on equity using the chain substitution method. The results show that the effect of the profit margin on sales on the return on equity decreases year by year and turns from negative to positive. The effect of the total assets turnover ratio on the return on equity changes from positive to negative and then back to positive, while the effect of the equity multiplier is the opposite. These results provide a direction for adjusting the return on equity of the Industrial and Commercial Bank of China. Finally, based on the results, some suggestions are put forward for the development of the Industrial and Commercial Bank of China.

[...] Read more.
Concepts of Bezier Polynomials and its Application in Odd Higher Order Non-linear Boundary Value Problems by Galerkin WRM

By Nazrul Islam

DOI: https://doi.org/10.5815/ijmsc.2021.01.02, Pub. Date: 8 Feb. 2021

Many different methods are applied in attempts to solve higher order nonlinear boundary value problems (BVPs). The Galerkin weighted residual method (GWRM) is widely used to solve BVPs. The main aim of this paper is to find approximate solutions of fifth, seventh and ninth order nonlinear boundary value problems using the GWRM. A trial function, namely Bezier polynomials, is assumed and made to satisfy the given essential boundary conditions. To investigate the effectiveness of the current method, some numerical examples are considered. The results are presented both graphically and numerically. The numerical solutions are in good agreement with the exact results and achieve high accuracy. The present method is quite efficient and yields better results when compared with existing methods. All problems are solved using the software MATLAB R2017a.

[...] Read more.
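
A compact sketch of the Galerkin weighted residual idea with a Bernstein (Bezier) basis, applied here to the simple linear test problem u'' = -1, u(0) = u(1) = 0 rather than the paper's fifth-, seventh-, and ninth-order nonlinear BVPs: trial functions are Bernstein polynomials multiplied by x(1 - x) so the essential boundary conditions hold, and the weak-form system is assembled by numerical quadrature.

import numpy as np
from math import comb

x = np.linspace(0.0, 1.0, 2001)
n_basis, degree = 4, 3

def integ(y):                                  # composite trapezoid rule on the grid
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2))

# Trial functions psi_i = x(1-x) * B_{i,degree}(x) satisfy u(0) = u(1) = 0
psis = [x * (1 - x) * comb(degree, i) * x**i * (1 - x)**(degree - i) for i in range(n_basis)]
dpsis = [np.gradient(p, x) for p in psis]

f = -np.ones_like(x)                           # right-hand side of u'' = -1
K = np.array([[-integ(dpsis[i] * dpsis[j]) for i in range(n_basis)] for j in range(n_basis)])
F = np.array([integ(f * psis[j]) for j in range(n_basis)])

a = np.linalg.solve(K, F)                      # Galerkin coefficients
u = sum(coeff * psi for coeff, psi in zip(a, psis))
print(np.max(np.abs(u - x * (1 - x) / 2)))     # compare with the exact solution x(1-x)/2
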
A Review of Quantum Computing

By Arebu Dejen Murad Ridwan

DOI: https://doi.org/10.5815/ijmsc.2022.04.05, Pub. Date: 8 Oct. 2022

Quantum computing is a computational framework based on quantum mechanics that has received a great deal of attention in the past few decades. In comparison to traditional computers, it has achieved remarkable performance on several specialized tasks. Quantum computing is the study of quantum computers, which use quantum mechanical phenomena such as entanglement, superposition, annealing, and tunneling to solve problems that could not otherwise be solved within a human lifetime. This article offers a brief outline of what is happening in the field of quantum computing, as well as the current state of the art. It also summarizes the features of quantum computing in terms of major elements such as qubit computation, quantum parallelism, and reversible computing. The study locates the cause of a quantum computer's great computing capability in its use of quantum entangled states. It also emphasizes that quantum computer research requires a combination of the most sophisticated sciences, such as computer technology, micro-physics, and advanced mathematics.

[...] Read more.
Blockchain: A Comparative Study of Consensus Algorithms PoW, PoS, PoA, PoV

By Shahriar Fahim SM Katibur Rahman Sharfuddin Mahmood

DOI: https://doi.org/10.5815/ijmsc.2023.03.04, Pub. Date: 8 Aug. 2023

Since the inception of Blockchain, the computer database has been evolving into innovative technologies. As new technologies emerge, the use of Blockchain also flourishes. All Blockchain-based technologies rely on a common mechanism, the consensus algorithm, to operate. The consensus algorithm is the process that assures mutual agreement and stores information in the decentralized database of the network. Blockchain's biggest drawback is its limited scalability. However, using the correct consensus for the relevant work can ensure efficiency in data storage, transaction finality, and data integrity. In this paper, a comparative study is made among the following consensus algorithms: Proof of Work (PoW), Proof of Stake (PoS), Proof of Authority (PoA), and Proof of Vote (PoV). This study aims to provide readers with elementary knowledge about blockchain, more specifically its consensus protocols. It covers their origins, how they operate, and their strengths and weaknesses. We have studied these consensus protocols in detail and uncovered some of their advantages and disadvantages with respect to characteristics such as security, energy efficiency, scalability, and IoT (Internet of Things) compatibility. This information will help future researchers understand the characteristics of the selected consensus algorithms.

[...] Read more.
Comparison on Trapezoidal and Simpson’s Rule for Unequal Data Space

By Md. Nayan Dhali Mohammad Farhad Bulbul Umme Sadiya

DOI: https://doi.org/10.5815/ijmsc.2019.04.04, Pub. Date: 8 Nov. 2019

Numerical integration comprises a broad family of algorithms for calculating the numerical value of a definite integral. Since some integrals cannot be evaluated analytically, numerical integration is the most popular way to obtain a solution. Many different methods are applied in attempts to perform numerical integration over unequally spaced data. The trapezoidal and Simpson's rules are widely used to solve numerical integration problems. Our paper mainly concentrates on identifying the method that provides the more accurate result. To assess the accuracy, we use some numerical examples, find their solutions, compare them with the analytical results, and calculate the corresponding errors. The minimum error indicates the best method. The numerical solutions are in good agreement with the exact results and achieve high accuracy.

[...] Read more.
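
For reference, below is a sketch of the composite trapezoidal rule for unequally spaced data together with one common generalization of Simpson's rule to irregular spacing (a quadratic is fitted through each consecutive triple of points); these are standard formulas, not necessarily the exact variants compared in the paper.

def trapezoid_unequal(x, y):
    """Composite trapezoidal rule for unequally spaced samples (x sorted, same length as y)."""
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2 for i in range(len(x) - 1))

def simpson_unequal(x, y):
    """Composite Simpson-type rule for unequal spacing; needs an even number of intervals."""
    if (len(x) - 1) % 2 != 0:
        raise ValueError("this Simpson variant requires an even number of intervals")
    total = 0.0
    for i in range(0, len(x) - 2, 2):
        h0, h1 = x[i + 1] - x[i], x[i + 2] - x[i + 1]
        total += (h0 + h1) / 6 * ((2 - h1 / h0) * y[i]
                                  + (h0 + h1) ** 2 / (h0 * h1) * y[i + 1]
                                  + (2 - h0 / h1) * y[i + 2])
    return total

# Unequally spaced samples of f(x) = x**2 on [0, 2]; the exact integral is 8/3
xs = [0.0, 0.3, 0.8, 1.1, 1.7, 1.9, 2.0]
ys = [v**2 for v in xs]
print(trapezoid_unequal(xs, ys), simpson_unequal(xs, ys))  # the Simpson variant is exact for quadratics
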
A LSB Based Image Steganography Using Random Pixel and Bit Selection for High Payload

By U. A. Md. Ehsan Ali Emran Ali Md. Sohrawordi Md. Nahid Sultan

DOI: https://doi.org/10.5815/ijmsc.2021.03.03, Pub. Date: 8 Aug. 2021

Security in digital communication is becoming more important as the number of systems is connected to the internet day by day. It is necessary to protect secret message during transmission over insecure channels of the internet. Thus, data security becomes an important research issue. Steganography is a technique that embeds secret information into a carrier such as images, audio files, text files, and video files so that it cannot be observed.  In this paper, based on spatial domain, a new image steganography method is proposed to ensure the privacy of the digital data during transmission over the internet. In this method, least significant bit substitution is proposed where the information embedded in the random bit position of a random pixel location of the cover image using Pseudo Random Number Generator (PRNG). The proposed method used a 3-3-2 approach to hide a byte in a pixel of a 24 bit color image. The method uses Pseudo Random Number Generator (PRNG) in two different stages of embedding process. The first one is used to select random pixels and the second PRNG is used select random bit position into the R, G and B values of a pixel to embed one byte of information. Due to this randomization, the security of the system is expected to increase and the method achieves a very high maximum hiding capacity which signifies the importance of the proposed method.

[...] Read more.
Predictive Analytics of Employee Attrition using K-Fold Methodologies

By V. Kakulapati Shaik Subhani

DOI: https://doi.org/10.5815/ijmsc.2023.01.03, Pub. Date: 8 Feb. 2023

Currently, every company is concerned about the retention of its staff, yet companies are often unable to recognize the genuine reasons for job resignations due to various circumstances. Each business has its own approach to treating employees and ensuring their satisfaction; as a result, many employees abruptly terminate their employment for no apparent reason. Machine learning (ML) approaches have grown in popularity among researchers in recent decades and are capable of proposing answers to a wide range of issues, so machine learning can be used to generate predictions about staff attrition. In this research, distinct methods are compared to identify which workers are most likely to leave their organization. Two approaches are used to divide the dataset into training and test data: a 70 percent train / 30 percent test split and the K-Fold approach. CatBoost, LightGBM, and XGBoost, three gradient boosting algorithms, are employed for accuracy comparison.

[...] Read more.
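
A minimal sketch of the K-Fold evaluation protocol described above, using scikit-learn's KFold with its built-in gradient boosting classifier as a stand-in for CatBoost/LightGBM/XGBoost and synthetic data in place of the attrition dataset.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)  # synthetic stand-in data

# Approach 1: 70/30 train-test split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("70/30 split accuracy:", accuracy_score(y_te, model.predict(X_te)))

# Approach 2: K-Fold cross-validation
scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = GradientBoostingClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))
print("5-fold mean accuracy:", np.mean(scores))
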
An Application of the Two-Factor Mixed Model Design in Educational Research

By O.A NUGA

DOI: https://doi.org/10.5815/ijmsc.2019.04.03, Pub. Date: 8 Nov. 2019

As with any ANOVA, a repeated-measures ANOVA tests the equality of means. However, a repeated-measures ANOVA is used when all members of a random sample are measured under a number of different conditions. As the sample is exposed to each condition in turn, the measurement of the dependent variable is repeated. Using a standard ANOVA in this case is not appropriate because it fails to model the correlation between the repeated measures: the data violate the ANOVA assumption of independence. Some ANOVA designs combine repeated-measures factors and independent-group factors. These designs are called mixed-model ANOVAs, and they have a split-plot structure since they involve a mixture of one between-groups factor and one within-subjects factor.

The work presents an application of the mixed-model factorial ANOVA using mathematics scores obtained by 120 secondary school students. The between-groups factor is the category of students (science, commercial, humanities), with three levels, while the within-subjects factor is the three years spent in senior secondary school.

[...] Read more.
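
For completeness, the split-plot structure described above is often written as the linear model below, where α_i is the between-groups effect (student category), π_{k(i)} the random effect of student k nested in category i, β_j the within-subjects effect (year), and (αβ)_{ij} their interaction; this is the standard textbook form, not a formula quoted from the article.

Y_{ijk} = \mu + \alpha_i + \pi_{k(i)} + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk},
\qquad i = 1, 2, 3 \ (\text{category}), \quad j = 1, 2, 3 \ (\text{year}).
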
An Individualized Face Pairing Model for Age-Invariant Face Recognition

By Joseph Damilola Akinyemi Olufade F. W. Onifade

DOI: https://doi.org/10.5815/ijmsc.2023.01.01, Pub. Date: 8 Feb. 2023

Among the factors affecting face recognition and verification, the aging of individuals is a particularly challenging one. Unlike other factors such as pose, expression, and illumination, aging is uncontrollable, personalized, and takes place throughout human life. Thus, while the effects of factors such as head pose, illumination, and facial expression on face recognition can be minimized by using images from controlled environments, the effect of aging cannot be so controlled. This work exploits the personalized nature of aging to reduce its effect on face recognition, so that an individual can be correctly recognized across his or her different age-separated face images. To achieve this, an individualized face pairing method was developed that pairs faces against entire sets of faces grouped by individual; similarity score vectors are then obtained for both matching and non-matching image-individual pairs, and these vectors are used for age-invariant face recognition. This model has the advantage of being able to capture all possible face matchings (intra-class and inter-class) within a face dataset without having to compute all possible image-to-image pairs, which reduces the computational demand of the model without compromising the impact of the aging factor on the identity of the human face. The developed model was evaluated on the publicly available FG-NET dataset, two subsets of the CACD dataset, and a locally obtained FAGE dataset using leave-one-person-out (LOPO) cross-validation, achieving recognition accuracies of 97.01%, 99.89%, 99.92%, and 99.53%, respectively. The developed model can be used to improve face recognition models by making them robust to age variations among individuals in the dataset.

[...] Read more.
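
A schematic sketch of the individualized pairing idea (using generic embedding vectors and cosine similarity as placeholders; the paper's actual features and scoring differ): a probe face is paired against each individual's whole set of age-separated images, yielding one similarity score per individual rather than one per image pair.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_vector(probe, gallery_by_person):
    """Pair one probe face against each individual's set of age-separated images.

    Returns one score per person (here the best match within that person's set),
    instead of scoring every image-to-image pair separately.
    """
    return {person: max(cosine(probe, img) for img in images)
            for person, images in gallery_by_person.items()}

rng = np.random.default_rng(0)
gallery = {f"person_{p}": [rng.normal(size=128) for _ in range(5)] for p in range(3)}   # placeholder embeddings
probe = gallery["person_1"][0] + 0.1 * rng.normal(size=128)   # a noisy (e.g., aged) image of person_1

scores = similarity_vector(probe, gallery)
print(max(scores, key=scores.get))   # expected: person_1
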
An Improved Security Schematic based on Coordinate Transformation

By Awnon Bhowmik Mahmudul Hasan

DOI: https://doi.org/10.5815/ijmsc.2023.02.01, Pub. Date: 8 May 2023

This study improves on an earlier research project that converted ASCII codes into 2D Cartesian coordinates and then applied translation and rotation transformations to construct an encryption system. Here, we present a variation of the Cantor pairing function to convert ASCII values into distinctive 2D coordinates. We then apply some novel methods to jumble the ciphertext generated as a result of the transformations. We suggest numerous improvements to the earlier research via simple tweaks to the existing code and by introducing a novel key generation protocol that generates an infinite integral key space with no decryption failures. The only way to break this protocol with no prior information would be a brute-force attack. With the help of elementary combinatorics and probability, we show that this encryption protocol is practically infeasible for an unwelcome adversary to overcome.

[...] Read more.
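
To make the coordinate-transformation idea concrete, here is a small sketch of the classical Cantor pairing function and its inverse, used to move between an integer (such as an ASCII code) and a 2D point, followed by a rotation of that point; the paper's scheme uses its own variation of the pairing plus translation, ciphertext jumbling, and a key-generation protocol not reproduced here.

import math

def cantor_unpair(z: int):
    """Inverse Cantor pairing: map a non-negative integer to a unique 2D lattice point."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

def cantor_pair(x: int, y: int) -> int:
    return (x + y) * (x + y + 1) // 2 + y

def rotate(point, theta):
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

code = ord("A")                       # 65
pt = cantor_unpair(code)              # plaintext character as a 2D coordinate
assert cantor_pair(*pt) == code       # the mapping is invertible
print(pt, rotate(pt, theta=0.75))     # the rotated point would feed the ciphertext stage
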