IJMECS Vol. 9, No. 12, Dec. 2017
A search algorithm performs the important task of locating specific data within a collection of data. Search algorithms often differ mainly in speed, and the key is to use the algorithm appropriate for the data set. Binary search is among the fastest search algorithms; it works on the principle of 'divide and conquer', but it requires the data collection to be sorted in order to work properly. In this paper, we study the efficiency of binary search, in terms of execution time and speedup, by evaluating the performance improvement of the combined search algorithms under three different strategies: sequential, multithreaded, and parallel using the Message Passing Interface (MPI). The experimental code is written in the C language and run on the IMAN1 supercomputer system. The experimental results, which vary across the three strategies applying binary search to merge-sorted data, show that the performance improvement gained by the parallel code depends strongly on the size of the data set and on the number of processors: the speedup of the parallel code on 2, 4, 8, 16, 32, 64, 128, and 143 processors is best for data-set sizes between 50,000 and 500,000, respectively. Moreover, on a large number of processors, the parallel code achieves a maximum speedup of 2.72.
Data clustering is a very active and attractive research area in data mining, and dozens of clustering algorithms have been published. Any clustering algorithm aims to group data points according to some criterion. DBSCAN is the most famous and well-studied of these: clusters are regarded as dense regions separated from each other by sparse regions, and the algorithm is based on counting the points in the Eps-neighborhood of each point. This paper proposes a clustering method based on k-nearest neighbors and the local density of objects in the data, where local density is computed as the sum of a point's distances to its nearest points. A cluster is defined as a contiguous region whose points have local densities falling between a minimum local density and a maximum local density. The proposed method identifies clusters of different shapes, sizes, and densities. It requires only three parameters, all of which take integer values and are therefore easy to determine. The experimental results demonstrate the superiority of the proposed method in identifying clusters of varied density.
In the literature, combinatorial optimization problems have been solved using several hybrid strategies. The principles of software engineering make explicit that modelling enables a better understanding of a problem's solution and of the various parts that constitute it. However, the literature reveals that little importance is attached to modelling the solution when combinatorial optimization problems are solved using hybrid strategies. Therefore, in order to better understand the advantages and significance of a model-based approach to such problems, a survey of the model-based approach and the various properties achieved through modelling has been carried out. The algorithm- or technique-based approach, the framework-based approach, and the model-based approach are compared to better understand the differences between them and their outcomes. From this comparison, and from the analysis of the advantages of a model-based approach to solving combinatorial optimization problems with hybrid strategies, it is found that a model-based approach gives a clearer and better understanding of complex problems by making their representation modular, understandable, adaptable, verifiable, reliable, customizable, and reusable. Further, when hybrid strategies are used and the problem's solution is depicted in the form of a model, every part of the model can be implemented using different algorithms and frameworks, which helps to identify the optimal algorithm or framework for each part of the model, as well as the most efficient hybrid combination that solves the whole problem in an optimal manner.
The software development process model plays a key role in developing high-quality software. However, no fit-for-all process model exists in the software industry, so process models have to be tailored to accommodate a specific project's needs. Extreme Programming (XP) is a well-known agile model. Owing to its simplicity, best practices, and disciplined approach, researchers have tried to mold it for various types of projects and situations, and as a result a large number of customized versions of XP are now available. The aim of this paper is to analyze the latest customizations of XP. For this purpose, a systematic literature review of studies published from 2013 to 2017 is conducted. This detailed review identifies the objectives of the customizations, the specific areas in which they are made, and the practices and phases targeted for customization. This work will not only serve scholars seeking the current state of XP but will also help researchers predict future directions of software development with XP.
Cloud computing has emerged as a new and evolving paradigm with tremendous momentum. It is one of the most widely accepted information-technology-based services, drawing attention not only from academia and industry but also from the general public. Features such as scalability, elasticity, low entry cost, ease of access and subscription, and pay-per-use pricing compel businesses and end users to migrate from traditional platforms to cloud-based platforms. With the wide acceptance of cloud computing services in society, various misconceptions have arisen: some people think of the cloud as a new name for the Internet, since it shares many of the Internet's features, while others regard it as another name for existing technologies such as distributed systems, grid computing, or parallel computing. This paper helps make people aware of the technology by highlighting the points of difference from existing technologies and by focusing on the various advantages and areas of application that evidence its popularity and continual growth. The paper ends with a discussion of the status of the various issues and shortcomings from which cloud computing suffers, along with the present and future scope of this popular area.
Requirement elicitation and analysis form the focal point of the initial stages of the software development process. Unfortunately, in many software development projects, developers and end users speak different languages: end users prefer natural language, while software developers, who are technically perceptive, tend to use conceptual models. This difference in technical knowledge creates a communication gap, a potential cause of poor-quality software products or project conflicts. The aim of this paper is to investigate the feasibility of a novel technique that seeks to foster effective elicitation of software requirements and to support the implementation of structures that match particular requirements. By combining requirement elicitation with reusable parts, the proposed solution envisages improvements in the overall software design process, leading to enhanced requirement specifications. The novel idea is to incorporate an intermediate step that maps the Unified Modeling Language (UML) to the Web Ontology Language (OWL), enabling the addition of ontology languages. The proposed model is validated through a survey. The validation results show that the proposed solution allows software developers to elicit software requirements and implement structures that match those requirements.
With the recent growth of Internet-based application services, the number of concurrent requests arriving at the servers offering those services is growing significantly, and employing load balancing to cope with massive concurrent requests and improve access performance has become a critical strategy. To deliver better online service to users, load-balancing solutions handle massive incoming concurrent requests in parallel by assigning and scheduling the work executed by the members of a server cluster. In this paper, we propose a dynamic feedback-based load-balancing method. The method analyzes the real-time load and response status of each cluster member by periodically collecting its working-condition information, and evaluates the current load pressure by comparing the observed load-balancing performance with a preset threshold. Because the load arriving at the cluster can thus be distributed dynamically and in an optimized manner, load-balancing performance is maintained, service throughput capacity is correspondingly improved, and the response delay of service requests is reduced. The proposed result contributes to strengthening the concurrent-access capacity of server clusters. According to the experimental results, a server system employing the proposed solution shows better overall performance.