IJISA Vol. 11, No. 1, Jan. 2019
This paper suggests an unconventional approach to system-level self-diagnosis. Traditionally, system-level self-diagnosis focuses on determining the states of units that are tested by other system units. In contrast, the suggested approach uses the results of tests performed by a system unit to determine that unit's own state. Such diagnosis is in many respects close to self-testing, since a unit evaluates its own state, which is inherent in self-testing. Unlike self-testing, however, in the suggested approach a unit evaluates its state on the basis of tests it performs not on itself but on other system units. The paper considers different diagnosis models with various testing assignments and different fault assumptions, including permanent and intermittent faults as well as hybrid-fault situations. A diagnosis algorithm for identifying a unit's state has been developed, and its correctness has been verified by computer simulation experiments.
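The core idea can be sketched in a few lines. This is an illustrative assumption, not the authors' algorithm: a unit compares its own test verdicts on other units with the majority verdicts produced by the remaining testers, and suspects itself faulty when it disagrees too often.

```python
# Hedged sketch: unit u tests other units; by comparing its verdicts with
# the majority verdict of the other testers on the same tested units,
# u estimates its OWN state (0 = fault-free, 1 = faulty).

def majority(verdicts):
    """Majority vote over 0/1 verdicts (ties count as 0 = pass)."""
    return 1 if sum(verdicts) * 2 > len(verdicts) else 0

def self_state(unit, tests):
    """tests[(tester, tested)] -> 0 (pass) or 1 (fail)."""
    disagreements = total = 0
    for (tester, tested), verdict in tests.items():
        if tester != unit:
            continue
        # verdicts of the other testers on the same tested unit
        others = [v for (t, d), v in tests.items()
                  if d == tested and t not in (unit, tested)]
        if not others:
            continue
        total += 1
        if verdict != majority(others):
            disagreements += 1
    # a fault-free tester should mostly agree with the majority view
    return 1 if total and disagreements * 2 > total else 0

# 4 units; unit 3 is faulty and produces inverted verdicts
tests = {(0, 1): 0, (0, 2): 0, (1, 2): 0, (1, 0): 0,
         (2, 0): 0, (2, 1): 0, (3, 0): 1, (3, 1): 1,
         (0, 3): 1, (1, 3): 1}
```

Under this toy syndrome, unit 3's verdicts contradict every majority view, so it diagnoses itself faulty, while the fault-free units do not.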
Requirements prioritization ranks requirements in order of priority and is a crucial phase of requirement engineering in the software development process. This research introduces an MCDM model for requirements prioritization. To select the best supplier firm for washing machines, three important criteria are used. In the proposed model, a case study adopted from Ozcan et al. is investigated using a LOG FAHP (logarithmic fuzzy analytic hierarchy process) and ANN (artificial neural network) based model to choose the supplier firm granting the highest client satisfaction across all technical aspects. The experiments were conducted in MATLAB, and the results, evaluated on a fuzzy comparison matrix with three supplier-selection criteria using FAHP and LOGANFIS, show that the decision-making outcome for requirements prioritization is better than that of existing approaches.
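To make the fuzzy-comparison-matrix step concrete, the following is a hedged illustration only, not the paper's LOG FAHP/ANN pipeline: crisp priority weights for three criteria are derived from a triangular-fuzzy pairwise comparison matrix using Buckley's geometric-mean fuzzy AHP. All matrix entries below are invented.

```python
# Triangular fuzzy numbers (l, m, u); rows/cols: three assumed
# supplier-selection criteria, e.g. cost, quality, service.
M = [
    [(1, 1, 1),       (2, 3, 4), (1, 2, 3)],
    [(1/4, 1/3, 1/2), (1, 1, 1), (1/3, 1/2, 1)],
    [(1/3, 1/2, 1),   (1, 2, 3), (1, 1, 1)],
]

def fuzzy_ahp_weights(M):
    n = len(M)
    # fuzzy geometric mean of each row, component-wise
    g = []
    for row in M:
        comps = []
        for k in range(3):
            p = 1.0
            for tfn in row:
                p *= tfn[k]
            comps.append(p ** (1 / n))
        g.append(comps)
    total = [sum(gi[k] for gi in g) for k in range(3)]
    # fuzzy weight (l_i/total_u, m_i/total_m, u_i/total_l),
    # centroid defuzzification, then renormalization
    crisp = [(gi[0] / total[2] + gi[1] / total[1] + gi[2] / total[0]) / 3
             for gi in g]
    s = sum(crisp)
    return [w / s for w in crisp]

weights = fuzzy_ahp_weights(M)
```

For this invented matrix the first criterion, which dominates both pairwise comparisons, receives the largest weight; the paper additionally feeds such comparisons into an ANN/LOGANFIS stage, which is not reproduced here.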
In this paper, we propose a new feature selection algorithm based on ensemble selection. To generate the library of models, each model is trained using just one feature, so each model in the library represents a feature. Ensemble construction then returns a well-performing subset of features associated with the well-performing subset of models. The proposed approaches are evaluated on eight benchmark datasets, and the results show their effectiveness.
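The mapping "one feature = one model, select models = select features" can be sketched as follows. Everything here is an assumption for illustration (the single-feature models are simple threshold classifiers, the ensemble is a majority vote, and selection is greedy forward search), not the paper's exact procedure.

```python
import random

random.seed(0)
# toy data: features 0 and 1 are informative, feature 2 is noise
X, y = [], []
for _ in range(200):
    label = random.randint(0, 1)
    X.append([label + random.gauss(0, 0.3),   # informative
              label + random.gauss(0, 0.4),   # informative
              random.gauss(0, 1.0)])          # noise
    y.append(label)

def fit_threshold(X, y, j):
    """One-feature model: threshold at the midpoint of the class means."""
    m0 = [x[j] for x, t in zip(X, y) if t == 0]
    m1 = [x[j] for x, t in zip(X, y) if t == 1]
    thr = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2
    return lambda x, j=j, thr=thr: 1 if x[j] > thr else 0

library = [fit_threshold(X, y, j) for j in range(3)]  # one model per feature

def accuracy(models, X, y):
    correct = 0
    for x, t in zip(X, y):
        votes = sum(m(x) for m in models)
        correct += (1 if votes * 2 > len(models) else 0) == t
    return correct / len(y)

# greedy forward ensemble selection: the chosen models ARE the features
selected = []
while True:
    best = max((j for j in range(3) if j not in selected),
               key=lambda j: accuracy([library[k] for k in selected + [j]], X, y),
               default=None)
    if best is None:
        break
    if selected and accuracy([library[k] for k in selected + [best]], X, y) \
            <= accuracy([library[k] for k in selected], X, y):
        break
    selected.append(best)
```

On this toy data the noise feature's model never improves the ensemble, so feature 2 is rejected while the informative features survive.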
The focus of Software Development Effort Estimation (SDEE) is to precisely predict the effort and time required to successfully develop a software project. In recent years, data-intensive applications with a large back-end part have been contributing substantially to the overall effort of projects, so it is becoming more important to include the back-end part in the SDEE process. This paper proposes an Evolutionary Learning (EL) based hybrid artificial neuron, termed the dilation-erosion perceptron (DEP), a framework from mathematical morphology (MM) with its foundation in complete lattice theory (CLT), for solving the SDEE problem. In this work, we use the DEP (CMGA) model, which employs a chaotically modified genetic algorithm (CMGA) to construct the DEP parameters. The proposed method uses ER-diagram artifacts such as aggregation, specialization, generalization, and semantic integrity constraints to calculate the SDEE of the back-end part of business software. The method was tested on two datasets, one existing and one self-developed, and its performance was evaluated with three popular performance metrics, exhibiting better performance of the DEP (CMGA) model for solving SDEE problems.
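A dilation-erosion perceptron forward pass is compact enough to sketch. The morphological form below (a convex mix of a dilation and an erosion over the input lattice) follows the standard DEP formulation; the parameter values are illustrative assumptions, whereas in the paper the vectors a, b and the mixing coefficient lam are tuned by the CMGA.

```python
def dep(x, a, b, lam):
    """y = lam * dilation(x; a) + (1 - lam) * erosion(x; b), lam in [0, 1]."""
    dilation = max(xi + ai for xi, ai in zip(x, a))  # morphological dilation
    erosion = min(xi + bi for xi, bi in zip(x, b))   # morphological erosion
    return lam * dilation + (1 - lam) * erosion

x = [0.2, 0.5, 0.1]        # e.g. normalized ER-diagram feature counts (assumed)
a = [0.1, 0.0, 0.3]        # dilation structuring element (assumed)
b = [-0.1, 0.2, 0.0]       # erosion structuring element (assumed)
y = dep(x, a, b, 0.6)
```

With lam = 1 the neuron degenerates to a pure dilation and with lam = 0 to a pure erosion, which is what makes the mixed form expressive for regression targets such as effort.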
With the increase in popularity of the Internet and the advancement of technology in fields like bioinformatics and other scientific communities, the amount of sequential data is growing at a tremendous rate. With this increase, it has become inevitable to mine useful information from this vast amount of data. The mined information can be used in various spheres, from day-to-day web activities, like predicting the next web pages or serving better advertisements, to biological areas like genomic data analysis. A rough-set-based clustering of sequential data was recently proposed by Kumar et al. They defined and used a measure, called the Sequence and Set Similarity Measure, to determine similarity in data. However, we have observed that this measure does not reflect some important characteristics of sequential data. As a result, in this paper we use fuzzy set techniques to introduce a similarity measure, which we term the Kernel and Set Similarity Measure, to find the similarity of sequential data and generate overlapping clusters. For this purpose, we use exponential string kernels and Jaccard's similarity index. The new similarity measure takes account of both the order of items in a sequence and the content of the sequential pattern. To compare our algorithm with that of Kumar et al., we used the MSNBC dataset from the UCI repository, which was also used in their paper. As far as our knowledge goes, this is the first fuzzy clustering algorithm for sequential data.
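The shape of such a combined measure can be sketched as follows. This is a hedged stand-in, not the paper's exact kernel: a simple shared-substring kernel with exponentially decaying weights covers the order-sensitive part, Jaccard's index covers the set/content part, and alpha is an assumed mixing weight.

```python
from math import sqrt

def substring_kernel(s, t, lam=0.5):
    """Sum over shared contiguous substrings, weighted lam**length."""
    k = 0.0
    for i in range(len(s)):
        for j in range(len(t)):
            L = 0
            while i + L < len(s) and j + L < len(t) and s[i + L] == t[j + L]:
                L += 1
                k += lam ** L
    return k

def norm_kernel(s, t, lam=0.5):
    """Cosine-normalized kernel so identical sequences score 1."""
    denom = sqrt(substring_kernel(s, s, lam) * substring_kernel(t, t, lam))
    return substring_kernel(s, t, lam) / denom if denom else 0.0

def jaccard(s, t):
    a, b = set(s), set(t)
    return len(a & b) / len(a | b) if a | b else 1.0

def kss(s, t, alpha=0.5):
    """Sketch of a Kernel-and-Set similarity: order-aware + content-aware."""
    return alpha * norm_kernel(s, t) + (1 - alpha) * jaccard(s, t)
```

Note how the measure separates order from content: "abc" and "cba" have identical item sets (Jaccard 1) but a lower kernel score, so their combined similarity sits strictly between identical and disjoint sequences.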
This paper presents a hybrid learning machine for human identification. It is a merger of the eigenface and fisherface methods, genetic fuzzy clustering, and a complex neural network. Non-linear aggregation based summation and radial basis function neural networks (NLA-SRBF NNs) are proposed as one of the functional components of the novel learning machine. The architecture of NLA-SRBF NNs incorporates hidden neurons with summation and radial basis aggregation, output neurons with only summation aggregation, and a complex resilient propagation (ČRPROP) learning procedure. The improved learning and speedy convergence of the NLA-SRBF NN enable the hybrid machine to provide better recognition accuracy. The learning machine consists of feature extraction, unsupervised clustering, and supervised classification modules. The aim of our proposal is to enhance the performance of biometric recognition systems. The efficacy and potency of our hybrid learning machine are demonstrated on three benchmark biometric datasets, the extended Cohn-Kanade, FERET, and AR face datasets. Performance comparisons across different variations of hidden neurons and learning algorithms thoroughly demonstrate the superiority of the proposed NN-based hybrid learning machine.
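The exact non-linear aggregation function of the NLA-SRBF hidden neuron is the paper's contribution; the product form below is only an assumed illustration of how a single hidden unit can combine a summation term and a radial-basis term, and it is real-valued, whereas the paper works with complex-valued signals trained by ČRPROP.

```python
from math import exp, tanh

def hidden_neuron(x, w, c, sigma=1.0):
    """Assumed NLA-style hidden unit: both aggregations combined non-linearly."""
    s = sum(wj * xj for wj, xj in zip(w, x))            # summation aggregation
    r = exp(-sum((xj - cj) ** 2 for xj, cj in zip(x, c))
            / (2 * sigma ** 2))                         # radial-basis aggregation
    return tanh(s) * r                                  # non-linear combination

def output_neuron(h, v):
    """Output unit uses plain summation aggregation only, as in the abstract."""
    return sum(vj * hj for vj, hj in zip(v, h))

x = [0.5, -0.2]                                         # toy feature vector
h = [hidden_neuron(x, w, c) for w, c in
     [([1.0, 0.5], [0.0, 0.0]), ([-0.3, 0.8], [1.0, 1.0])]]
y = output_neuron(h, [0.7, 0.4])
```

The RBF factor gates the summation response by distance to the neuron's center, one plausible reading of "hidden neurons with summation and radial basis aggregation" feeding summation-only output neurons.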
In this paper, the dynamic behavior of a damping system with a two-mass pendulum absorber is analyzed, and the equations of motion of the non-linear mechanical system are derived accordingly. Systems of AFC (amplitude-frequency characteristic) equations have been obtained in the non-linear formulation. To obtain the frequency response, the Ritz averaging method is used. A new numerical method for determining the parameters of an optimally tuned two-mass pendulum absorber in the non-linear formulation has been proposed and implemented.
Cloud computing is the development of distributed computing, parallel computing, and grid computing, or can be defined as a commercial implementation of these computer-science concepts. One of the main issues in a cloud computing environment is task scheduling (TS). Cloud task scheduling is an NP-hard optimization problem, and many meta-heuristic (MH) algorithms have been proposed to solve it. A task scheduler should adapt its scheduling strategy to the changing environment and to variable tasks. This paper presents a cloud task-scheduling policy based on a Modified Ant Colony Optimization (MACO) algorithm. The main contribution of the proposed method is to minimize makespan and to perform the Multi-Objective Task Scheduling (MOTS) process by assigning a pheromone amount proportional to the efficiency of the corresponding virtual machine. The MACO algorithm improves task-scheduling performance, reducing makespan and the degree of imbalance below those of the basic ACO algorithm owing to its multi-objective and deliberate nature.
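The scheduling loop can be sketched compactly. The specific MACO pheromone rule belongs to the paper; in this hedged sketch the deposit is simply scaled by VM speed (an assumption), each ant builds a full task-to-VM assignment guided by pheromone and inverse finish time, and makespan is the longest VM completion time.

```python
import random

random.seed(1)
tasks = [4, 8, 2, 6, 3, 7]      # task lengths (toy values)
vms = [1.0, 2.0]                # VM speeds; higher = faster

def makespan(assign):
    load = [0.0] * len(vms)
    for t, v in zip(tasks, assign):
        load[v] += t / vms[v]
    return max(load)

def aco_schedule(n_ants=10, n_iter=30, rho=0.5):
    tau = [[1.0] * len(vms) for _ in tasks]      # pheromone per (task, VM)
    best, best_ms = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            load = [0.0] * len(vms)
            assign = []
            for i, t in enumerate(tasks):
                # desirability: pheromone x 1 / (finish time if placed here)
                w = [tau[i][v] / (load[v] + t / vms[v]) for v in range(len(vms))]
                r, acc, v = random.random() * sum(w), 0.0, len(vms) - 1
                for k in range(len(vms)):        # roulette-wheel selection
                    acc += w[k]
                    if r <= acc:
                        v = k
                        break
                assign.append(v)
                load[v] += t / vms[v]
            ms = makespan(assign)
            if ms < best_ms:
                best, best_ms = assign, ms
        for i in range(len(tasks)):              # evaporation
            for v in range(len(vms)):
                tau[i][v] *= (1 - rho)
        for i, v in enumerate(best):             # deposit scaled by VM speed
            tau[i][v] += vms[v] / best_ms
    return best, best_ms

best, best_ms = aco_schedule()
```

For these toy values the lower bound on makespan is total work over total speed, 30 / 3 = 10 time units, and the pheromone-guided search settles close to it; a real MOTS objective would add a degree-of-imbalance term alongside makespan.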