IJISA Vol. 9, No. 4, Apr. 2017
Cover page and Table of Contents: PDF (size: 936KB)
A method for constructing adaptive observers for linear time-varying dynamical systems with one input and one output is proposed, and adaptive identification algorithms are designed. These adaptive algorithms are not realizable, since the adaptive system contains parametric uncertainty (PU). Realizable adaptive algorithms for identifying the system parameters are therefore offered; they are based on a procedure for estimating the PU and on a signal-adaptation algorithm. An algorithm for estimating the rate of change of the system parameters is proposed, and estimates of the PU and of its misalignment are obtained. Boundedness of the trajectories of the adaptive system is proved, and exponential stability conditions for the adaptive system are derived. An iterative procedure for constructing the domain of parametric restrictions is proposed. Simulation results confirm the efficiency of the proposed method for constructing an adaptive observer.
Mining negations from electronic narrative medical documents is one of the prominent data mining applications. Since medical documents are freely written, it is impossible to consider all possible sentence structures in advance, so frequent updates of mining algorithms are inevitable. Unfortunately, most of the algorithms proposed in the literature are too complex to be easily updated, and most of them cannot be easily ported to other natural languages. The simple NegEx algorithm utilizes only two regular expressions and sets of terms to mine negations from narrative medical documents and therefore does not suffer from these shortcomings; it has also shown impressive mining results and is the most widely adopted algorithm. This paper proposes the Negation Mining (NegMiner) tool to address some of the shortcomings of the NegEx algorithm. NegMiner exploits basic syntactic and semantic information to deal with contiguous and multiple negations, which also enables it to handle a medical finding that occurs several times in a single sentence. It is a user-friendly tool that facilitates knowledge base updates and document analysis through the use of PDF files. Experimental results show that NegMiner's mining results are superior to those of a simulated NegEx algorithm.
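Since the abstract turns on NegEx's use of just two regular expressions plus term sets, a minimal sketch of the idea may help. The trigger list, the finding list, and the five-token scope window below are illustrative placeholders, not the published NegEx vocabularies.

```python
import re

# NegEx-style matcher sketch: a negation trigger opens a scope of a few
# tokens, and any finding inside that scope is marked negated.
# These term lists are toy stand-ins for the real NegEx term sets.
PRE_NEG = re.compile(r"\b(no|denies|without|absent)\b", re.IGNORECASE)
FINDING = re.compile(r"\b(fever|cough|chest pain)\b", re.IGNORECASE)

def negated_findings(sentence, window=5):
    """Return findings appearing within `window` tokens after a negation trigger."""
    tokens = sentence.split()
    negated = set()
    for i, tok in enumerate(tokens):
        if PRE_NEG.search(tok):
            scope = " ".join(tokens[i + 1 : i + 1 + window])
            negated.update(m.lower() for m in FINDING.findall(scope))
    return negated
```

A real implementation would also handle post-negation triggers and pseudo-negations, which is exactly the part NegMiner extends with syntactic and semantic information.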
This paper presents binary differential evolution based optimal reporting cell planning (RCP) for location management in wireless cellular networks. The significance of mobile location management (MLM) in wireless communication has grown drastically due to the tremendous rise in the number of mobile users under the constraint of limited bandwidth. The total location management cost comprises the signaling costs of location registration and location search, and a trade-off between the two yields the optimal location management cost. The proposed binary differential evolution (BDE) algorithm is used to determine the optimal reporting cell configuration such that the overall mobility management cost is minimized. The simulation results show that the proposed technique works well for the reference networks in terms of optimal cost and convergence speed. Further, the applicability of BDE is also validated on the realistic network of BSNL (Bharat Sanchar Nigam Limited), Odisha.
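Because a reporting cell configuration is naturally a bit string (each cell either is or is not a reporting cell), the BDE search can be sketched generically. The sigmoid binarization, the control parameters F and CR, and the toy cost function below are illustrative assumptions, not the authors' exact formulation.

```python
import math
import random

def bde_minimize(cost, n_bits, pop_size=20, gens=50, F=0.8, CR=0.9, seed=1):
    """Binary differential evolution sketch: a DE/rand/1 mutant is formed
    in continuous space and squashed through a sigmoid to decide each bit."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for k in range(n_bits):
                if rng.random() < CR:
                    v = pop[a][k] + F * (pop[b][k] - pop[c][k])   # DE/rand/1 mutant
                    p = 1.0 / (1.0 + math.exp(-6.0 * (v - 0.5)))  # sigmoid binarization
                    trial.append(1 if rng.random() < p else 0)
                else:
                    trial.append(pop[i][k])
            f = cost(trial)
            if f <= fit[i]:        # greedy selection keeps the better vector
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

In the paper's setting, `cost` would evaluate the total paging-plus-registration cost of a candidate reporting cell configuration.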
An artificial neural network based model is estimated for a modified-shape circular patch antenna. The Levenberg-Marquardt (LM) algorithm is used to train the network, which takes different antenna parameters in the X and Ku bands as inputs and delivers the antenna dimensions as outputs. The dimensions obtained from the estimated neural network model closely agree with the simulated results over the X and Ku bands for an FR4 epoxy substrate of 1.5 mm thickness. The simulation of the microstrip patch antenna is carried out using Ansoft HFSS, and the analysis of the neural network model results is carried out using MATLAB. Thus, the estimated model can be used to obtain the dimensions of the circular patch antenna.
The objective of this research is to improve the classification of Arabic text documents by combining different classification algorithms. To achieve this objective, we build four models using different combination methods.
The first combined model is built using fixed combination rules, of which five are used, each with different numbers of classifiers. The best classification accuracy, 95.3%, is achieved using the majority voting rule with seven classifiers; the time required to build the model is 836 seconds.
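Two of the classic fixed combination rules can be sketched for concreteness; these are generic illustrations of the rules, not the paper's exact experimental setup.

```python
from collections import Counter

def majority_vote(labels):
    """Majority voting rule: the label predicted by the most classifiers
    wins (ties broken by first occurrence)."""
    return Counter(labels).most_common(1)[0][0]

def sum_rule(prob_rows):
    """Sum rule: add up the classifiers' class-probability vectors and
    return the index of the class with the highest total score."""
    totals = [sum(col) for col in zip(*prob_rows)]
    return max(range(len(totals)), key=totals.__getitem__)
```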
The second combination approach is stacking, which consists of two stages of classification: the first stage is performed by base classifiers and the second by a meta classifier. In our experiments, we used different numbers of base classifiers and two different meta classifiers, Naïve Bayes and linear regression. Stacking achieved very high classification accuracies of 99.2% and 99.4% using Naïve Bayes and linear regression as meta classifiers, respectively. Since it consists of two stages of learning, stacking needed a long time to build the models: 1963 seconds using Naïve Bayes and 3718 seconds using linear regression.
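The two-stage flow at prediction time can be sketched generically; the classifier callables here are placeholders, not the actual models used in the experiments.

```python
def stack_predict(base_classifiers, meta_classifier, doc):
    """Stacking inference sketch: stage one collects every base
    classifier's label for the document; stage two hands that label
    vector to the meta classifier, which makes the final decision."""
    stage_one = [clf(doc) for clf in base_classifiers]
    return meta_classifier(stage_one)
```

At training time the meta classifier is additionally fit on the base classifiers' predictions over the training data, which is why stacking pays for two stages of learning.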
The third model uses AdaBoost to boost a C4.5 classifier with different numbers of iterations. Boosting improves the classification accuracy of the C4.5 classifier: it reaches 95.3% with 5 iterations, needing 1175 seconds to build the model, and 99.5% with 10 iterations, requiring 1966 seconds.
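The boosting loop itself is simple to sketch. Below, one-dimensional decision stumps stand in for the C4.5 base learner, and the data format (+1/-1 labels) is an illustrative assumption.

```python
import math

def adaboost_stumps(xs, ys, rounds=5):
    """AdaBoost sketch with 1-D threshold stumps as the weak learner.
    Each round fits the stump minimizing weighted error, then re-weights
    the training points so misclassified ones gain weight."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []                      # (alpha, threshold, polarity) per round
    for _ in range(rounds):
        best = None
        for t in set(xs):              # candidate thresholds from the data
            for pol in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if (pol if x >= t else -pol) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = max(err, 1e-10)          # avoid log(.../0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        w = [wi * math.exp(-alpha * y * (pol if x >= t else -pol))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]       # renormalize the weight distribution
    def predict(x):
        score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
        return 1 if score >= 0 else -1
    return predict
```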
The fourth model uses bagging with a decision tree. An accuracy of 93.7% is achieved in 296 seconds using 5 iterations, and 99.4% using 10 iterations, requiring 471 seconds. We used three datasets to test the combined models: the BBC Arabic, CNN Arabic, and OSAC datasets. The experiments are performed using the Weka and RapidMiner data mining tools on a platform with an Intel Core i3 2.2 GHz CPU and 4 GB of RAM.
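Bagging can be sketched in a few lines: each model is trained on a bootstrap resample of the training set and the models vote. Here `learn` is a hypothetical placeholder for the decision-tree learner.

```python
import random
from collections import Counter

def bagging_predict(train, learn, x, n_models=5, seed=0):
    """Bagging sketch: fit `learn` on bootstrap resamples of the training
    set, then combine the resulting models' predictions by voting."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in train]  # bootstrap resample
        model = learn(sample)
        votes.append(model(x))
    return Counter(votes).most_common(1)[0][0]
```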
The results of all models show that combining classifiers can effectively improve the accuracy of Arabic text document classification.
Bangladesh is a developing country, but in recent years it has been moving toward digitalization. A precondition of digitalization is that the country's entire communication system be developed tremendously, and in this respect Social Networking Sites (SNS) can be a platform for revolution. This study examines Social Networking Sites from the perspective of Bangladesh, where they are becoming tremendously popular. Sites such as Twitter and Facebook give people in general a useful platform for disclosing their thoughts and ideas. This paper aims to present the positive and negative impacts of Social Networking Sites and to recommend a few possible solutions to overcome the associated problems.
Data is one of the most important and vital aspects of many activities in today's world, and vast amounts of data are generated every second. The rapid growth of data across different domains calls for intelligent data analysis tools that can satisfy the need to analyze huge amounts of data. The MapReduce framework is designed to process large amounts of data and to support effective decision making; it consists of two main tasks, map and reduce. Optimization is the act of achieving the best possible result under given circumstances, and the goal of MapReduce optimization is to minimize execution time and maximize system performance. This survey compares the different optimization techniques used in the MapReduce framework and in big data analytics. Various sources of big data generation are summarized based on various applications of big data. The wide range of application domains for big data analytics is due to its characteristics of volume, velocity, variety, veracity, and value, which arise from the inclusion of structured, semi-structured, and unstructured data and for which a new set of tools such as NoSQL, MapReduce, and Hadoop is required. The presented survey provides an insight into the fundamentals of big data analytics, but its main aim is an analysis of the various optimization techniques used in the MapReduce framework and in big data analytics.
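The map and reduce tasks mentioned above can be illustrated with the canonical word-count example. This is a single-process sketch of the programming model only, not of Hadoop's distributed runtime.

```python
from collections import defaultdict

def map_phase(doc):
    """Map task: emit an intermediate (word, 1) pair for every word."""
    return [(word.lower(), 1) for word in doc.split()]

def shuffle(pairs):
    """Shuffle step: group all intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce task: aggregate the grouped values for each key."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big tools", "data tools"]
intermediate = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(intermediate))  # {'big': 2, 'data': 2, 'tools': 2}
```

Optimizations of the kind the survey compares target exactly these stages, for example combining values locally before the shuffle or balancing the reducers' load.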
This paper presents distribution planning on a geographical network using an improved K-means clustering algorithm, which is compared with the conventional Euclidean distance based K-means clustering algorithm. The distribution planning includes optimal placement of the substation, minimization of expansion cost, optimization of network parameters such as network topology and routing of single/multiple feeders, and reduction in network power losses. The improved K-means clustering is an iterative, weighting factor based optimization algorithm that locates the substation optimally and improves the voltage drop at each node. For feeder routing, a shortest-path based algorithm is proposed, and a modified load flow method is used to calculate the active and reactive power losses in the network. Simulation is performed on a 54-node geographical network with load points, and the results show significant power loss minimization compared to the conventional K-means clustering algorithm.
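A minimal sketch of the load-weighted K-means idea follows, assuming nodes are given as (x, y, load) triples. The seeding and iteration count are illustrative, and the paper's voltage-drop weighting factors are omitted.

```python
def weighted_kmeans(nodes, k, iters=20):
    """Load-weighted K-means sketch: assign each node to its nearest
    substation, then move every substation to the load-weighted centroid
    of its cluster, so heavily loaded nodes pull substations closer."""
    centers = [(x, y) for x, y, _ in nodes[:k]]   # seed from the first k nodes
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x, y, load in nodes:
            nearest = min(range(k),
                          key=lambda c: (x - centers[c][0]) ** 2
                                        + (y - centers[c][1]) ** 2)
            clusters[nearest].append((x, y, load))
        for j, cl in enumerate(clusters):
            if cl:                                # keep empty clusters in place
                w = sum(load for _, _, load in cl)
                centers[j] = (sum(x * load for x, _, load in cl) / w,
                              sum(y * load for _, y, load in cl) / w)
    return centers
```

In the conventional variant each node counts equally in the centroid; the load weighting is what shifts the substation toward the heavily loaded nodes.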