IJISA Vol. 11, No. 2, Feb. 2019
The paper presents the results of research on the development of a hybrid model of a 1-D adaptive signal filter based on the combined use of empirical mode decomposition and wavelet analysis. Implementation of the proposed model involves three stages. First, the initial signal is decomposed into empirical modes by the Huang transform, with allocation of the components that contain the noise. Then wavelet filtering is performed to remove the noise component. The optimal parameters of the wavelet filter are determined by the minimal value of the ratio of the Shannon entropy of the filtered data to that of the allocated noise component; these parameters depend on the type of the studied component of the signal. Finally, the signal is reconstructed using the processed modes. Simulation results on test data have shown higher effectiveness of the proposed method in comparison with the standard method of signal denoising based on wavelet analysis.
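The entropy-ratio criterion above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's method: it uses a single-level Haar transform in place of the paper's empirical modes and wavelet family, and the candidate thresholds are arbitrary.

```python
import numpy as np

def shannon_entropy(x):
    """Shannon entropy of the normalized energy distribution of x."""
    e = x ** 2
    p = e / (e.sum() + 1e-12)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def haar_denoise(signal, threshold):
    """One-level Haar transform, soft-threshold the detail band, reconstruct."""
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)   # approximation band
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2)   # detail band (noise-dominated)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    out = np.empty_like(signal, dtype=float)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def best_threshold(signal, candidates):
    """Pick the threshold minimizing H(filtered) / H(removed noise)."""
    best, best_ratio = None, np.inf
    for t in candidates:
        filtered = haar_denoise(signal, t)
        noise = signal - filtered
        if np.allclose(noise, 0):
            continue
        ratio = shannon_entropy(filtered) / (shannon_entropy(noise) + 1e-12)
        if ratio < best_ratio:
            best, best_ratio = t, ratio
    return best

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(256)
print(best_threshold(noisy, [0.05, 0.1, 0.2, 0.4]))
```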
The 0/1 Knapsack Problem (KP) is a combinatorial optimization problem that can be solved using various optimization algorithms. Ant Colony System (ACS) is one such algorithm; it operates iteratively and converges toward a mature solution. The convergence of ACS depends mainly on the heuristic patterns used to update the pheromone trails throughout the optimization cycles. Although ACS has significant advantages, it suffers from slow convergence, as the pheromones that initiate the search are initialized randomly at the beginning. In this paper, a new heuristic pattern is proposed to speed up the convergence of ACS on the 0/1 KP. The proposed heuristic enforces order-critical item selection: like the existing heuristics, it considers the profit added by each item, but it also accounts for the order in which items are selected. Accordingly, items added at the end of a cycle receive more value so that they are considered at the beginning of the next round. As such, with each cycle the selected items vary substantially and the pheromones are updated broadly, avoiding long trapping in the randomly initialized values. Experiments showed that the proposed heuristic converges more rapidly than the existing heuristics, reducing by up to 30% the cycles required to reach the optimal solution on difficult 0/1 KP datasets. Accordingly, the time required for convergence is reduced significantly compared to the existing algorithms.
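A simplified ACS loop for the 0/1 KP might look as follows. Note this is a generic sketch, not the paper's algorithm: the rank-dependent pheromone bonus is only a rough stand-in for the proposed order-critical heuristic, and all parameters (evaporation rate, ant count) are illustrative.

```python
import random

def acs_knapsack(values, weights, capacity, n_ants=10, n_iters=50, seed=1):
    """Simplified Ant Colony System for the 0/1 knapsack.
    The deposit step adds a bonus that grows with an item's selection rank,
    so items picked late in the best tour gain pheromone for the next cycle."""
    rng = random.Random(seed)
    n = len(values)
    tau = [1.0] * n                      # pheromone per item
    best_value, best_items = 0, []
    for _ in range(n_iters):
        iter_best_value, iter_best_order = 0, []
        for _ in range(n_ants):
            remaining, total = capacity, 0
            order, candidates = [], list(range(n))
            while candidates:
                feasible = [i for i in candidates if weights[i] <= remaining]
                if not feasible:
                    break
                # selection probability proportional to pheromone x value density
                scores = [tau[i] * (values[i] / weights[i]) for i in feasible]
                pick = rng.choices(feasible, weights=scores)[0]
                order.append(pick)
                total += values[pick]
                remaining -= weights[pick]
                candidates.remove(pick)
            if total > iter_best_value:
                iter_best_value, iter_best_order = total, order
        tau = [0.9 * t for t in tau]     # evaporation
        for rank, i in enumerate(iter_best_order):
            # deposit with order bonus: later rank -> larger reinforcement
            tau[i] += (iter_best_value / (best_value + 1)) * (1 + rank / n)
        if iter_best_value > best_value:
            best_value, best_items = iter_best_value, sorted(iter_best_order)
    return best_value, best_items

# classic toy instance; the optimum is 220 (items with weights 20 and 30)
print(acs_knapsack([60, 100, 120], [10, 20, 30], 50))
```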
The research activity considered in this paper concerns an efficient approach for modeling and prediction of air quality. Poor air quality is an environmental hazard that has become a great challenge across the globe. Therefore, ambient air quality assessment and prediction has become a significant area of study. In general, air quality refers to the quantification of pollution-free air in a particular location. It is determined by measuring different types of pollution indicators in the atmosphere. Traditional approaches depend on numerical methods to estimate air pollutant concentrations and require a lot of computing power. Moreover, these methods cannot draw insights from the abundant data available. To address this issue, the proposed study puts forward a deep learning approach for the quantification and prediction of ambient air quality. A recurrent neural network (RNN) framework with specially structured memory cells, known as Long Short-Term Memory (LSTM), is proposed to capture the dependencies among various pollutants and to perform air quality prediction. A real-time dataset of the city of Visakhapatnam, with records of 12 pollutants, was considered for the study. The temporal sequence data of each pollutant were modeled to forecast hourly concentrations. Experimental results show that the proposed RNN-LSTM framework attained higher accuracy in estimating hourly air quality. The model may be further enhanced by adopting a bidirectional mechanism in the recurrent layer.
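The memory mechanism that lets an LSTM capture such temporal dependencies can be shown as a single cell step in numpy. This is a textbook LSTM cell with random weights, not the paper's trained model; the dimensions (12 pollutant readings, 8 hidden units) and the day-long unroll are illustrative assumptions.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. x: (d,), h_prev/c_prev: (n,),
    W: (4n, d), U: (4n, n), b: (4n,), stacked as [input, forget, cell, output]."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))       # forget gate
    g = np.tanh(z[2*n:3*n])               # candidate cell state
    o = 1 / (1 + np.exp(-z[3*n:]))        # output gate
    c = f * c_prev + i * g                # memory cell carries long-range info
    h = o * np.tanh(c)                    # hidden state passed to the next step
    return h, c

rng = np.random.default_rng(0)
d, n = 12, 8                  # 12 pollutant readings per hour, 8 hidden units
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h = c = np.zeros(n)
for hour in range(24):        # unroll over one day of hourly readings
    x = rng.normal(size=d)
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)
```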
Power system contingency studies play a pivotal role in maintaining the security and integrity of modern power system operation. However, the number of possible contingencies is enormous and mostly vague. Therefore, in this paper, two well-known clustering techniques, K-Means (KM) and Fuzzy C-Means (FCM), are used for contingency screening and ranking. The performance of both algorithms is comparatively investigated using the IEEE 118-bus test system. Considering various loading conditions and multiple outages, the IEEE 118-bus contingencies have been generated using fast-decoupled power flow (FDPF). Silhouette analysis and the fuzzy partition coefficient have been exploited to offer insight into the number of centroids. Moreover, principal component analysis (PCA) has been used to extract the dominant features and ensure consistency of the passed data with the requirements of the artificial intelligence algorithms. Although the comparison results showed excellent compatibility between the two clustering algorithms, the FCM model was found more suitable for power system static security investigation.
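The FCM algorithm and the fuzzy partition coefficient mentioned above can be sketched in plain numpy. The two synthetic blobs merely stand in for clustered contingency features; the fuzzifier m = 2 and iteration count are the usual defaults, not values taken from the paper.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iters=100, seed=0):
    """Plain Fuzzy C-Means. X: (N, d) samples, c clusters, fuzzifier m > 1.
    Returns the membership matrix U (N, c) and the centroids (c, d)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((N, c))
    U /= U.sum(axis=1, keepdims=True)          # each row sums to 1
    for _ in range(n_iters):
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        dist = np.maximum(dist, 1e-12)
        inv = dist ** (-2.0 / (m - 1))          # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centroids

# two well-separated blobs standing in for secure/insecure contingency groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
U, cent = fuzzy_c_means(X, c=2)
fpc = np.mean(U ** 2) * 2   # fuzzy partition coefficient, in [1/c, 1]
print(round(fpc, 3))
```

A fuzzy partition coefficient close to 1 indicates near-crisp memberships, i.e. the chosen number of centroids fits the data well.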
Machine learning is a division of artificial intelligence that builds systems which learn from data. Machine learning can take raw data from a repository, perform computation on it, and predict software bugs. It is always desirable to detect a software bug as early as possible so that time and cost can be reduced. Wrapper and filter feature selection techniques are used to find the most informative software metrics. The main aim of the paper is to find the best model for software bug prediction. In this paper, the machine learning techniques Linear Regression, Random Forest, Neural Network, Support Vector Machine, Decision Tree, and Decision Stump are applied, and a comparative analysis is performed using performance parameters such as correlation, R-squared, mean squared error, and accuracy on the software modules ant, ivy, tomcat, berek, camel, lucene, poi, synapse, and velocity. The Support Vector Machine outperforms the other machine learning models.
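The comparison metrics named above (correlation, R-squared, mean squared error) are straightforward to compute; a numpy sketch is given below. The defect counts are invented for illustration and do not come from the ant/ivy/tomcat datasets.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Correlation, R-squared, and mean squared error between
    actual and predicted defect counts."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot              # coefficient of determination
    corr = np.corrcoef(y_true, y_pred)[0, 1]
    return {"corr": corr, "r2": r2, "mse": mse}

# hypothetical per-module defect counts, actual vs. one model's predictions
actual    = [0, 2, 1, 4, 3, 0, 5]
predicted = [0, 1, 1, 3, 4, 1, 5]
print(regression_metrics(actual, predicted))
```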
This paper presents a fault tolerant control (FTC) scheme based on a Radial Basis Function Neural Network (RBFNN) with an adaptive control law for a double star induction machine (DSIM) under broken rotor bars (BRB) fault in the squirrel cage, in order to improve its reliability and availability. The proposed FTC is designed to compensate for the fault effect by maintaining acceptable performance in the case of BRB. The sufficient condition for the stability of the closed-loop system in faulty operation is analyzed and verified using Lyapunov theory. To demonstrate the performance and effectiveness of the proposed FTC, a comparative study with sliding mode control (SMC) is carried out. The obtained results show that the proposed FTC has better robustness against the BRB fault.
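The core of an RBFNN is a layer of Gaussian basis functions whose output weights approximate an unknown nonlinearity; in an FTC setting that nonlinearity is the fault effect to be compensated. The sketch below fits the output weights by least squares to a toy function standing in for the fault dynamics; in the paper's scheme the weights are instead updated online by the adaptive law, which is not reproduced here.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian radial basis activations for inputs X (N, d) and centers (c, d)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

X = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * X[:, 0])                     # toy stand-in for the fault effect
centers = np.linspace(-1, 1, 15)[:, None]   # fixed, evenly spaced centers
Phi = rbf_design(X, centers, width=0.2)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # batch fit of output weights
err = np.max(np.abs(Phi @ w - y))
print(err)
```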
Recommender systems are a primary component of online service providers, drawing on the plentiful information produced by users' histories (e.g., their purchases, product ratings, activities, and browsing patterns). Recommendation algorithms use this historical information and its contextual data to offer a list of likely items for each user. Traditional recommender algorithms are built on the similarity between items or users (e.g., a user may purchase the same items as his nearest user). To reduce the limitations of traditional approaches and improve the quality of recommender systems, a reliability-based community method is introduced. This method comprises three steps. The first step identifies the trusted relations of the current user by allowing trust propagation in the trust network. In the next step, the ratings of the selected trusted neighborhood are used to predict the unrated item of the current user; the prediction relies only on items that belong to the candidate item's community. Finally, a reliability metric is computed to assess the worth of the predicted rating. Experimental results confirmed that the proposed framework attained higher accuracy compared to state-of-the-art recommender system approaches.
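The three steps above can be sketched as follows. The halve-per-hop trust propagation rule and the neighbourhood-coverage reliability score are common simple choices, assumed here for illustration; the paper's exact propagation, community, and reliability definitions may differ.

```python
from collections import deque

def trusted_neighborhood(trust, user, max_depth=2):
    """Step 1: propagate trust through the network; each hop away
    from `user` halves the trust weight (a simple, common rule)."""
    weights, seen = {}, {user}
    frontier = deque([(user, 1.0, 0)])
    while frontier:
        u, w, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for v in trust.get(u, []):
            if v not in seen:
                seen.add(v)
                weights[v] = w / 2
                frontier.append((v, w / 2, depth + 1))
    return weights

def predict_rating(trust, ratings, user, item):
    """Step 2: trust-weighted average of neighbours' ratings.
    Step 3: reliability = fraction of the neighbourhood that rated the item."""
    nbrs = trusted_neighborhood(trust, user)
    raters = {v: w for v, w in nbrs.items() if item in ratings.get(v, {})}
    if not raters:
        return None, 0.0
    pred = sum(w * ratings[v][item] for v, w in raters.items()) / sum(raters.values())
    reliability = len(raters) / len(nbrs)
    return pred, reliability

trust = {"alice": ["bob", "carol"], "bob": ["dave"]}
ratings = {"bob": {"item1": 4}, "carol": {"item1": 5}, "dave": {"item1": 3}}
print(predict_rating(trust, ratings, "alice", "item1"))
```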
This paper systematically examines the machine learning literature for the period 1968–2017 to identify and analyze research trends. A list of journals from the well-established publishers ScienceDirect, Springer, JMLR, and IEEE (approximately 23,365 journal articles) related to machine learning is used to prepare the content collection. To the best of our knowledge, this is the first effort to analyze trends in machine learning research with topic models: Latent Semantic Analysis (LSA), Latent Dirichlet Allocation (LDA), and LDA with Coherent Model (LDA_CM). The LDA_CM topic model gives the highest topic coherence among all topic models under consideration. This study provides a scientific ground that helps to overcome the subjectivity of collective opinion. The Mann-Kendall test is used to understand the trend of the topics. Our findings are indicative of paradigmatic shifts in research methodology, of significant patterns of topical prominence, and of the evolving research areas. They highlight the evolution of previous and recent research topics in the area of machine learning. Understanding such an intellectual structure and future trends will assist researchers in following the divergent developments of this research in one place. The paper analyzes the overall trends of machine learning research since 1968, based on the latent topics identified in the period 2007–2017, which may help researchers explore the recommended areas and publish their research articles.
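The Mann-Kendall test used for the trend analysis is simple enough to implement directly. The sketch below is the standard tie-free form of the test; the yearly paper counts are invented for illustration, not the paper's data.

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test. Returns (S, z): S > 0 suggests an upward
    trend, S < 0 a downward one; |z| > 1.96 is significant at the 5% level.
    No tie correction is applied in this sketch."""
    n = len(series)
    # S statistic: sum of signs over all ordered pairs
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# hypothetical yearly counts of papers on one latent topic: clearly rising
papers = [3, 5, 4, 8, 9, 12, 15, 14, 20, 25]
s, z = mann_kendall(papers)
print(s, round(z, 2))
```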