IJITCS Vol. 9, No. 6, Jun. 2017
Technological developments in mobile devices have enabled the use of geographical data in social networks, making location-based social networks highly attractive. Their popularity has prompted researchers to study recommendation systems for location-based services. Many studies develop location recommendation systems using various variables and algorithms; however, articles that detail past and present studies and make suggestions for future work are limited. This study therefore aims to thoroughly review the research performed on location recommender systems. For this purpose, the topic pairs "location and recommender system" and "location and recommendation system" were searched in the Web of Knowledge database. The resulting articles were examined in detail with respect to the data sources and variables, algorithms, and evaluation techniques used. The current state of location recommender systems research is thus summarized, and future recommendations are provided for researchers and developers. The issues presented in this paper are expected to advance the discussion of next-generation location recommendation systems.
A Wireless Sensor Network, a group of specialized sensors with a communication infrastructure for monitoring and controlling conditions at diverse locations, is a recent technology that has been gaining popularity day by day. Cloud computing, meanwhile, is a type of high-performance computing that uses a network of remote servers, rather than a local server or personal computer, to store, manage, and process data. An architecture called the sensor-cloud provides useful services by combining the capabilities of both. To provide such services, a large volume of sensor network data must be transported to the cloud gateway, which demands considerable bandwidth and time. In this paper, we propose an efficient sensor-cloud communication approach that minimizes these bandwidth and time requirements by using statistical classification based on machine learning, together with compression using the DEFLATE algorithm, with minimal loss of information. Experimental results demonstrate the overall efficiency of the proposed method over traditional approaches and related research.
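The DEFLATE compression step described above can be sketched with Python's standard zlib module. This is only an illustration of the bandwidth saving on redundant sensor data; the paper's actual classification pipeline and serialization format are not specified in the abstract, so the CSV-style encoding here is an assumption.

```python
import zlib

def compress_readings(readings):
    """Serialize a batch of sensor readings and compress the payload
    with DEFLATE (zlib) before sending it toward the cloud gateway.
    The comma-separated text encoding is an illustrative choice."""
    payload = ",".join(f"{r:.2f}" for r in readings).encode("utf-8")
    return payload, zlib.compress(payload, 9)  # 9 = highest compression

# Periodic sensor data is highly redundant, so DEFLATE shrinks it well.
readings = [20.0 + (i % 10) * 0.1 for i in range(1000)]
raw, packed = compress_readings(readings)
```

DEFLATE is lossless, so `zlib.decompress(packed)` recovers the payload exactly; the "minimal loss of information" in the paper would come from the classification stage, not from this compression stage.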
Software process modelling has recently become an area of interest within both academia and industry. It aims at defining and formalizing the software process in the form of formal, rigorous models. A software process modelling formalism provides the language or notation in which the software process is defined and formalized. Several software process modelling formalisms have been introduced lately; however, they have failed to gain the attention of industry. One major objective of formalizing the software process, and a longstanding research issue, is to enhance understanding and communication among software process users. To achieve this aim, a modelling formalism has to offer a common language that is well understood by all software process users. BPMN is a graphical, widely accepted standard formalism aimed mainly at business process modelling. This paper presents a software process modelling formalism based on the BPMN specification, named SP2MN, for representing the software process. The paper also demonstrates the applicability and evaluation of the proposed formalism by applying it to the standard ISPW-6 benchmark problem and by comparing the expressiveness of SP2MN with similar software process modelling formalisms. The evaluations show that SP2MN contributes to enhancing software process formalization and can accordingly be used as a standard software process modelling formalism.
Density-based subspace clustering algorithms have gained importance owing to their ability to identify arbitrarily shaped subspace clusters. Density-connected SUBspace CLUstering (SUBCLU) uses two input parameters, epsilon and minpts, whose values are the same in all subspaces, which leads to a significant loss of cluster quality. Two important issues must be handled. First, cluster densities vary across subspaces, a phenomenon known as density divergence. Second, the density of clusters within a subspace may vary due to the data characteristics, a phenomenon known as multi-density behavior. To handle these two issues of density divergence and multi-density behavior, the authors propose an efficient algorithm for generating subspace clusters by appropriately fixing the input parameter epsilon. Version 1 of the proposed algorithm computes epsilon dynamically for each subspace based on the maximum spread of the data. To handle data that exhibits multi-density behavior, the algorithm is further refined in version 2: the initial value of epsilon is set to half of the value obtained in version 1 for a subspace, and a small step value 'delta' is used to finalize epsilon separately for each cluster through step-wise refinement, forming multiple higher-dimensional subspace clusters. The proposed algorithm was implemented and tested on various benchmark and synthetic datasets, and it outperforms SUBCLU in terms of cluster quality and execution time.
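The two-stage epsilon selection can be sketched as follows. The abstract does not give the exact formula for deriving epsilon from the maximum spread, so the scaling used here (spread scaled by minpts over the number of points) is a hypothetical stand-in; only the structure — a per-subspace version-1 value, halved as the version-2 starting point and then refined per cluster in steps of delta — comes from the paper.

```python
def epsilon_v1(points, minpts):
    """Version-1 style: derive epsilon for a subspace from the maximum
    per-dimension spread of its data. The scaling by minpts / n is a
    hypothetical formula; the abstract does not give the exact one."""
    dims = len(points[0])
    spread = max(
        max(p[d] for p in points) - min(p[d] for p in points)
        for d in range(dims)
    )
    return spread * minpts / len(points)

def epsilon_v2_start(points, minpts):
    """Version-2 starting value: half the version-1 epsilon, to be
    grown separately for each cluster in small steps of delta during
    step-wise refinement."""
    return epsilon_v1(points, minpts) / 2.0
```

Starting low and refining upward per cluster is what lets clusters of different densities within the same subspace each receive an appropriate neighbourhood radius.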
Placement of methods within classes is one of the most important design activities in any object-oriented application for optimizing software modularization. To enhance interactions among modularized components, the recommendation of move-method refactorings plays a significant role by grouping methods with similar behavior. It also serves as a refactoring technique for the feature envy code smell, moving methods from incorrect classes into correct ones. Because of this code smell and inefficient modularization, an application becomes tightly coupled and loosely cohesive, reflecting poor design, so development and maintenance effort, time, and cost increase. Existing techniques deal only with non-static methods when refactoring this code smell and so do not generalize to all types of methods (static and non-static). This paper proposes an approach that recommends 'move method' refactorings to remove the code smell as well as enrich modularization. The approach is based on the conceptual similarity (i.e., similar behavior of methods) between a source method and the methods of target classes in an application. The conceptual similarity relies on both static and non-static entities (method calls and accessed attributes), which distinguishes this work from others. In addition, the approach compares the entities used by the source method with the entities used by methods in probable target classes. The results of a preliminary empirical evaluation on five well-known open-source projects indicate that the proposed approach provides better results, with an average precision of 65% and recall of 63%, than the JDeodorant tool (a popular Eclipse plugin for refactoring).
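The entity-overlap idea can be sketched as follows. The abstract does not state which similarity measure the paper uses, so a Jaccard-style coefficient over the sets of used entities (method calls and accessed attributes, static or non-static alike) is assumed here purely for illustration.

```python
def conceptual_similarity(source_entities, target_entities):
    """Jaccard-style similarity over the sets of entities (called
    methods and accessed attributes) used by two bodies of code.
    The choice of Jaccard is an assumption for illustration."""
    s, t = set(source_entities), set(target_entities)
    if not s and not t:
        return 0.0
    return len(s & t) / len(s | t)

def recommend_move(source_entities, candidate_classes):
    """Recommend the target class whose methods use the set of
    entities most similar to that of the source method."""
    return max(candidate_classes,
               key=lambda c: conceptual_similarity(source_entities,
                                                   candidate_classes[c]))
```

In practice a threshold would keep the method in its current class when no candidate is sufficiently more similar; treating static entities the same way as non-static ones is what extends the recommendation to static methods.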
A materialized view is a database object used to store the results of a query. It is used to avoid the costly processing time required to execute complex queries involving aggregation and join operations, and it may be associated with the operations of a data warehouse. Data mining is a technique for extracting knowledge from a data warehouse, and incremental data mining is a process that periodically updates the knowledge already identified by a data mining process; this is needed when a new set of data is added to the existing set. This paper proposes a method for applying materialized views to incremental data mining.
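The combination can be sketched in miniature: a cached aggregate plays the role of the materialized view, and a refresh folds in only the newly arrived rows instead of rescanning the whole dataset. This is an illustrative toy (frequent-item counting), not the paper's actual method, which the abstract does not detail.

```python
from collections import Counter

class MaterializedCounts:
    """Cached item counts standing in for a materialized view over a
    data warehouse. refresh() updates the view incrementally from new
    rows only, mirroring incremental data mining."""

    def __init__(self, rows):
        self.counts = Counter(rows)  # initial full scan

    def refresh(self, new_rows):
        # incremental maintenance: fold in only the delta
        self.counts.update(new_rows)

    def frequent(self, min_support):
        # mine the view rather than the base data
        return {item for item, c in self.counts.items() if c >= min_support}
```

The point of the pattern is that both the view maintenance and the mining step touch only the cached aggregates plus the delta, never the full historical data.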
This paper presents a selected short review of cloud computing, explaining its evolution, history, and definition. Cloud computing is not a brand-new technology, but it is today one of the most rapidly emerging technologies because of its powerful potential to change the way data and services are managed. Beyond the evolution, history, and definition of cloud computing, the paper also presents its characteristics, service models, deployment models, and the roots of the cloud.
Linux tracing tools are used to record the events running in the background on a system, but these tools lack the ability to analyze the resulting log data. In the field of Artificial Intelligence, Cohort Intelligence (CI) is a recently proposed technique that works on the principle of self-learning within a cohort. This paper presents an approach to optimizing system performance by tracing the system, extracting information from the trace data, and passing it to the Cohort Intelligence algorithm. The output of the Cohort Intelligence algorithm shows how the load of the system should be balanced to optimize performance.
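The self-learning loop of Cohort Intelligence can be sketched as follows: each candidate follows a roulette-selected peer (better objective, higher follow probability) and resamples its solution within a shrinking interval around that peer. All parameters here (cohort size, shrink rate, follow rule) and the toy objective are illustrative assumptions; the paper's trace-derived load model is not given in the abstract. The objective is assumed non-negative, as a load-imbalance measure would be.

```python
import random

def cohort_intelligence(objective, dim, bounds, candidates=5,
                        iters=50, shrink=0.95, seed=1):
    """Minimal Cohort Intelligence sketch (minimization). Each
    candidate follows a peer chosen by roulette wheel and resamples
    within a sampling interval that narrows every iteration."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(candidates)]
    best = min(X, key=objective)
    width = hi - lo
    for _ in range(iters):
        scores = [objective(x) for x in X]
        # roulette wheel on inverted scores: lower score, likelier to be followed
        weights = [1.0 / (1e-9 + s) for s in scores]
        X = [
            [min(hi, max(lo, c + rng.uniform(-width / 2, width / 2)))
             for c in X[rng.choices(range(candidates), weights=weights)[0]]]
            for _ in range(candidates)
        ]
        width *= shrink  # every candidate narrows its sampling interval
        cand = min(X, key=objective)
        if objective(cand) < objective(best):
            best = cand
    return best

# Toy usage: minimize a quadratic stand-in for a load-imbalance measure.
best = cohort_intelligence(lambda x: sum(v * v for v in x),
                           dim=2, bounds=(-5.0, 5.0))
```

For the load-balancing use described in the paper, the objective would instead score a candidate assignment of trace-observed workload across cores, e.g. by the variance of per-core load.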