IJISA Vol. 10, No. 7, Jul. 2018
Cover page and Table of Contents: PDF (size: 184KB)
The paper presents research results concerning an effectiveness evaluation of an information technology for processing gene expression profiles for the purpose of gene regulatory network reconstruction. The information technology is presented as a structural block chart of the step-by-step stages of processing the studied data. DNA microchips of patients who were investigated for different types of cancer were used as experimental data. The optimal parameters of the data processing algorithm at each stage of the process were determined during simulation using quantitative criteria of data processing quality. Validation of the reconstructed gene networks was performed using ROC analysis, comparing the character of gene interconnections in the basic network with those in the networks reconstructed from the obtained biclusters.[...]
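The ROC-style comparison of a reconstructed network against a basic (reference) network can be sketched as follows; the gene names and edge sets are illustrative placeholders, not the paper's data, and true/false positives are counted over all unordered gene pairs:

```python
from itertools import combinations

def edge_validation(genes, basic_edges, reconstructed_edges):
    """Compare a reconstructed gene network against the basic network
    over all unordered gene pairs, returning the TPR and FPR used in
    ROC analysis. Edges are sorted 2-tuples of gene names."""
    tp = fp = fn = tn = 0
    for pair in combinations(sorted(genes), 2):
        in_basic = pair in basic_edges
        in_recon = pair in reconstructed_edges
        if in_basic and in_recon:
            tp += 1          # interconnection present in both networks
        elif in_recon:
            fp += 1          # spurious edge in the reconstruction
        elif in_basic:
            fn += 1          # missed edge
        else:
            tn += 1
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr

# toy example with hypothetical gene names
genes = {"g1", "g2", "g3", "g4"}
basic = {("g1", "g2"), ("g2", "g3")}
recon = {("g1", "g2"), ("g3", "g4")}
tpr, fpr = edge_validation(genes, basic, recon)
```

A fixed reconstruction yields one (FPR, TPR) point; sweeping a threshold on edge confidence scores would trace the full ROC curve.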
This paper presents a new artificial neuron structure based on root-power means (RPM) for quaternion-valued signals, along with an efficient learning process for neural networks with quaternion-valued root-power mean neurons (ℍ-RPMN). The main aim of this neuron is to demonstrate the potential of a nonlinear aggregation operation on quaternion-valued signals in the neuron cell. The wide spectrum of RPM aggregation between the minimum and the maximum has the useful property of changing its degree of compensation in a natural way, which emulates various existing neuron models as special cases. Further, the quaternionic resilient propagation algorithm (ℍ-RPROP) with an error-dependent weight backtracking step significantly accelerates training and exhibits better approximation accuracy. A wide spectrum of benchmark problems is considered to evaluate the performance of the proposed quaternionic root-power mean neuron with the ℍ-RPROP learning algorithm.[...]
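The compensatory behavior of the root-power mean can be illustrated on real-valued inputs (a simplified sketch; the paper applies the operation to quaternion-valued signals): as the exponent p grows, the aggregation sweeps from the minimum toward the maximum of its inputs.

```python
def root_power_mean(xs, p):
    """Root-power (generalized) mean: (1/n * sum(x_i^p))^(1/p).
    p -> -inf approaches min(xs), p = 1 is the arithmetic mean,
    p -> +inf approaches max(xs)."""
    n = len(xs)
    return (sum(x ** p for x in xs) / n) ** (1.0 / p)

low  = root_power_mean([1.0, 2.0, 4.0], -60)  # close to the minimum, 1
mid  = root_power_mean([1.0, 2.0, 4.0], 1)    # arithmetic mean, 7/3
high = root_power_mean([1.0, 2.0, 4.0], 60)   # close to the maximum, 4
```

The exponent p thus acts as the "degree of compensation" knob the abstract describes, recovering min-, mean-, and max-like neuron aggregations as special cases.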
A Community of Practice (CoP) is a very rich concept for designing learning systems for adults in relation to their professional development, in particular for community problem solving. Indeed, Communities of Practice are made up of people who engage in a process of collective learning in a shared domain. The members engage in joint activities and discussions, help each other, and share information, building relationships that enable them to learn from each other. The most important condition for continuing to learn from a CoP is that the community remain alive and active. However, one of the main factors demotivating members from continuing to interact through the CoP is the frequent receipt of a large number of aid requests related to problems they might not be able to solve, which may lead them to abandon the CoP. In an attempt to overcome this problem, we propose an approach for selecting the group of members most appropriate to contribute to the resolution of a given problem, so that the aid request is sent only to this group. Our approach consists of a static rule-based selection complemented by a dynamic selection based on the ability to solve previous similar problems, determined through analysis of the history of interactions.[...]
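A minimal sketch of the two-stage selection, under assumed data shapes (member records with declared expertise, and an interaction history of solved/unsolved problems per domain); all names and fields here are hypothetical, not the paper's model:

```python
def select_members(members, problem_domain, history, k=3):
    """Static rule-based filter (declared expertise in the problem's
    domain) followed by a dynamic ranking: the fraction of similar
    past problems each member actually solved."""
    def dynamic_score(m):
        past = [h for h in history
                if h["member"] == m["name"] and h["domain"] == problem_domain]
        if not past:
            return 0.0
        return sum(h["solved"] for h in past) / len(past)

    eligible = [m for m in members if problem_domain in m["expertise"]]
    return sorted(eligible, key=dynamic_score, reverse=True)[:k]

members = [
    {"name": "amal", "expertise": {"java"}},
    {"name": "badr", "expertise": {"java"}},
    {"name": "chad", "expertise": {"sql"}},
]
history = [
    {"member": "amal", "domain": "java", "solved": True},
    {"member": "badr", "domain": "java", "solved": False},
]
chosen = select_members(members, "java", history, k=2)
```

Only the selected group receives the aid request, which is the mechanism the abstract proposes to limit member demotivation.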
Load balancing is an important task on virtual machines (VMs) and an essential aspect of task scheduling in clouds. When some virtual machines are overloaded with tasks while others are underloaded, the load needs to be balanced to achieve optimum machine utilization. This paper examines an existing technique, the artificial bee colony algorithm, which shows a low convergence rate to the global minimum, even at high numbers of dimensions. The objective of this paper is to propose the integration of the artificial bee colony algorithm with tabu search for the cloud computing environment to improve the energy consumption rate. The main improvement is a makespan of 28.4, which aims to attain a well-balanced load across virtual machines. The simulation results show that the proposed algorithm is beneficial compared with existing algorithms.[...]
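The makespan objective that such a scheduler optimizes can be sketched as follows (a generic illustration; the task lengths and VM speeds are made up, not the paper's workload):

```python
def makespan(assignment, task_lengths, vm_mips):
    """Makespan of a task-to-VM assignment: the finish time of the most
    loaded VM, i.e. max over VMs of (assigned task length / VM speed).
    assignment[t] is the VM index that task t is mapped to."""
    loads = [0.0] * len(vm_mips)
    for task, vm in enumerate(assignment):
        loads[vm] += task_lengths[task] / vm_mips[vm]
    return max(loads)

# two equal-speed VMs, three tasks
unbalanced = makespan([0, 1, 0], [100, 200, 300], [10, 10])  # 40.0
balanced   = makespan([0, 0, 1], [100, 200, 300], [10, 10])  # 30.0
```

A bee-colony (or tabu-search) candidate solution is exactly such an assignment vector, scored by this makespan; the hybrid search aims to find assignments like the second one.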
Analyzing data is a challenging task nowadays because the size of the data affects the results of the analysis, since every application can generate data in massive amounts. Clustering techniques are key techniques for analyzing massive amounts of data: they are a simple way to group similar data into clusters. Key examples of clustering algorithms are k-means, k-medoids, c-means, hierarchical clustering, and DBSCAN. The k-means and DBSCAN algorithms are scalable, but they still need improvement because massive data hampers their performance with respect to cluster quality and efficiency, and user intervention is needed to provide appropriate parameters as input. For these reasons, this paper presents a modified and efficient clustering algorithm. It enhances cluster quality and makes clusters more cohesive using domain knowledge, spectral analysis, and split-merge-refine techniques, and it also takes care to minimize empty clusters. So far, no single algorithm has integrated all of these requirements as the proposed algorithm does. It also automatically predicts the value of k and the initial centroids, minimizing user intervention. The performance of this algorithm is compared with standard clustering algorithms on various small to large data sets, with respect to the number of records and the dimensionality of the data sets, using clustering accuracy, running time, and various cluster validity measures. The obtained results show that the proposed algorithm improves on the existing algorithms in both efficiency and quality.[...]
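One of the listed concerns, avoiding empty clusters, can be sketched inside a plain k-means loop (a simplified illustration, not the paper's full split-merge-refine algorithm): whenever a cluster ends up empty, its centroid is re-seeded with the point farthest from it.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points (tuples)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means with empty-cluster repair: an empty cluster's
    centroid is replaced by the point farthest from it, so no cluster
    stays empty across iterations."""
    centroids = random.Random(seed).sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: dist2(p, centroids[j]))
            clusters[nearest].append(p)
        for j in range(k):
            if clusters[j]:  # usual mean update
                centroids[j] = tuple(sum(xs) / len(clusters[j])
                                     for xs in zip(*clusters[j]))
            else:            # empty-cluster repair
                centroids[j] = max(points, key=lambda q: dist2(q, centroids[j]))
    return centroids, clusters

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(points, k=2)
```

The paper's automatic prediction of k and of the initial centroids would replace the random seeding used here.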
The primary concern of any cloud provider is to improve resource utilization and minimize the cost of service. Different mapping relations between virtual machines and physical machines affect resource utilization, load balancing, and cost for a cloud data center. The paper addresses virtual machine placement as an optimization problem with resource constraints on CPU, memory, and bandwidth. In the experiments, datasets are formed using a random data generator. The paper presents a random fit algorithm, a best fit algorithm based on resource wastage, and an evolutionary algorithm, Differential Evolution, evaluated with three different mutation approaches. The results show that the Differential Evolution algorithm with the DE/best/2 mutation operator works more efficiently than basic DE, best fit, and random fit.[...]
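The best-fit placement by resource wastage can be sketched as below; the wastage metric (sum of leftover capacities) and the capacity tuples are simplifying assumptions, not the paper's exact formulation:

```python
def wastage(leftover):
    """Simple wastage proxy: total leftover capacity after placement."""
    return sum(leftover)

def best_fit_place(vms, pms):
    """Place each VM on the feasible physical machine whose remaining
    (cpu, mem, bw) after placement wastes the least capacity."""
    free = [list(p) for p in pms]      # remaining capacity per PM
    placement = []
    for vm in vms:
        best, best_w = None, None
        for i, f in enumerate(free):
            if all(f[d] >= vm[d] for d in range(3)):   # capacity constraint
                w = wastage([f[d] - vm[d] for d in range(3)])
                if best is None or w < best_w:
                    best, best_w = i, w
        if best is not None:
            for d in range(3):
                free[best][d] -= vm[d]
        placement.append(best)         # None means no feasible machine
    return placement

pms = [(10, 10, 10), (4, 4, 4)]   # physical machine capacities (cpu, mem, bw)
vms = [(4, 4, 4), (3, 3, 3)]      # virtual machine demands
placement = best_fit_place(vms, pms)
```

The Differential Evolution variants in the paper search over such placements globally rather than committing to one VM at a time.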
Dynamic Time Warping (DTW) techniques have shown great efficiency for clustering time series. On the other hand, they may lead to high computational loads when it comes to processing long data sequences. For this reason, it may be appropriate to develop an iterative DTW procedure capable of shrinking time sequences. A clustering approach is then proposed for the data previously reduced by means of the iterative DTW. Experimental modeling tests were performed to prove its efficiency.[...]
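The two ingredients, classic DTW and a sequence-shrinking step, can be sketched as follows (the pairwise-averaging reduction is an illustrative stand-in for the paper's iterative procedure):

```python
def dtw(a, b):
    """Classic dynamic-programming DTW distance between two sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch a
                                 d[i][j - 1],      # stretch b
                                 d[i - 1][j - 1])  # step both
    return d[n][m]

def shrink(seq):
    """One reduction step: average adjacent pairs, roughly halving the
    sequence (an odd trailing element is dropped)."""
    return [(seq[i] + seq[i + 1]) / 2 for i in range(0, len(seq) - 1, 2)]

# warping absorbs the repeated 2, so the distance is zero
d = dtw([1, 2, 3], [1, 2, 2, 3])
```

Each DTW evaluation is O(nm), so clustering shrunken sequences cuts the dominant cost quadratically.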
In the present scenario, there is a serious need for scalability in efficient big data analytics. Technologies such as MapReduce, Pig, and Hive were introduced to achieve this, but when it comes to scalability, Apache Spark stands far ahead. In this research paper, an improved hybrid distributed collaborative filtering model for a recommender engine has been designed and developed. Execution time, scalability, and robustness of the engine are the three evaluation parameters considered in the present study. The work focuses on a recommender system built with the help of Apache Spark. Apart from this, the bisecting k-means clustering algorithm has been proposed and implemented, and a comparative analysis between k-means and bisecting k-means in the Apache Spark environment is discussed.[...]
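The bisecting k-means idea can be sketched in plain Python (a local, single-machine illustration; the paper runs it on Apache Spark, where MLlib provides a distributed BisectingKMeans): repeatedly split the largest cluster with 2-means until k clusters remain.

```python
def dist2(a, b):
    """Squared Euclidean distance between two points (tuples)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Componentwise mean of a non-empty list of points."""
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def two_means(points, iters=20):
    """Plain 2-means, seeded deterministically with the two mutually
    farthest points; used here as the splitting step."""
    p0, q0 = max(((p, q) for p in points for q in points),
                 key=lambda pq: dist2(*pq))
    c = [p0, q0]
    parts = [points, []]
    for _ in range(iters):
        parts = [[], []]
        for p in points:
            parts[0 if dist2(p, c[0]) <= dist2(p, c[1]) else 1].append(p)
        c = [mean(part) if part else c[i] for i, part in enumerate(parts)]
    return parts

def bisecting_kmeans(points, k):
    """Start with one cluster; repeatedly split the largest cluster
    with 2-means until k clusters exist."""
    clusters = [points]
    while len(clusters) < k:
        big = max(clusters, key=len)
        clusters.remove(big)
        clusters.extend(part for part in two_means(big) if part)
    return clusters

points = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 9), (5, 10)]
clusters = bisecting_kmeans(points, 3)
```

Because each split only re-clusters one subset, bisecting k-means tends to scale better than repeatedly running full k-means, which is why it suits the Spark setting described in the abstract.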