IJMECS Vol. 10, No. 7, Jul. 2018
One of the key indicators of code quality is the level of modularity. Nevertheless, novice programmers do not always stick to writing modular code. In this study, we aim to examine the circumstances in which novice programmers decide to do so. To address this aim, two student groups, twenty each, were given a programming assignment, each in a different set-up. The first group was given the assignment in several stages, each adding complexity to the previous one, while the second group was given the entire assignment at once. The students' solutions were analyzed using the dual-process theory, cognitive dissonance theory and content analysis methods to examine the extent of modularity. The analysis revealed the following findings: (1) In the first group, a minor increase in the number of modular solutions was found as the students progressed along the stages; (2) The number of modular solutions in the second group was higher than that in the first group. Analysis of students' justifications for lack of modularity in the first group revealed the following. The first stages of the problem were perceived as rather simple, hence many students did not find any reason to invest in designing a modular solution. When the assignment became more complex in the following stages, the students realized that a modular solution would fit better, which created cognitive dissonance. Nevertheless, many of them preferred to reduce the dissonance by continuing with their non-modular solution instead of redesigning a new modular one. Students of both groups also attributed their non-modular code to the lack of explicit criteria for evaluating code quality, which led them to focus on functionality alone.[...] Read more.
Whereas ontologies are formal knowledge representations conveying a shared understanding of a given domain, databases are a mature technology specifying the storage, retrieval, organization, and processing of data in information systems so as to ensure data integrity. Ontologies offer the functionality of conceptual modeling while complying with the web constraints regarding publication, querying and annotation, as well as the capacity for formality and reasoning that enables data consistency checking. Ontologies converted to databases could exploit the maturity of database technologies, and databases converted to ontologies could use ontology technologies to gain wider use in the context of the semantic web. This work aims to propose a generic approach for converting a relational database into an ontology and vice versa. A tool based on this approach has been implemented as a proof of concept.[...] Read more.
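The database-to-ontology direction is often realized by a handful of structural mapping rules: a table becomes an OWL class, an ordinary column becomes a datatype property, and a foreign-key column becomes an object property. The following is a minimal sketch of that common convention in Python, emitting Turtle-style triples; the function name, the `ex:` namespace, and the example schema are illustrative assumptions, not the paper's actual tool.

```python
def table_to_turtle(table, columns, foreign_keys):
    """Emit simple Turtle triples for one relational table.

    table: table name; columns: list of column names;
    foreign_keys: dict mapping a column name to the table it references.
    """
    lines = ["@prefix ex: <http://example.org/onto#> .",
             f"ex:{table} a owl:Class ."]
    for col in columns:
        if col in foreign_keys:
            # A foreign key links two tables, so it maps to an object property.
            target = foreign_keys[col]
            lines.append(f"ex:{col} a owl:ObjectProperty ; "
                         f"rdfs:domain ex:{table} ; rdfs:range ex:{target} .")
        else:
            # An ordinary column holds literal values: a datatype property.
            lines.append(f"ex:{col} a owl:DatatypeProperty ; "
                         f"rdfs:domain ex:{table} .")
    return "\n".join(lines)

ttl = table_to_turtle("Employee", ["name", "dept_id"], {"dept_id": "Department"})
print(ttl)
```

The reverse direction would invert these rules (class to table, datatype property to column, object property to foreign key), which is what makes a generic bidirectional approach feasible.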
Clustering is the technique of finding useful patterns in a dataset by effectively grouping similar data items. It is an intense research area with many algorithms currently available, but in practice most algorithms do not deal very efficiently with noise. Most real-world data are prone to containing noise due to many factors, and most algorithms, even those which claim to deal with noise, are able to detect only large deviations as noise. In this paper, we present a data-clustering method named SIDNAC, which can efficiently detect clusters of arbitrary shapes and is almost immune to noise – a much desired feature in clustering applications. Another important feature of this algorithm is that it does not require a priori knowledge of the number of clusters – something which is seldom available.[...] Read more.
The whale optimization algorithm (WOA) is a bio-inspired optimization technique introduced in 2016 that imitates the hunting behavior of humpback whales. In this paper, to enhance solution accuracy, reliability and convergence speed, we introduce some modifications to the basic WOA structure. First, a new control parameter, an inertia weight, is proposed to tune the impact of the present best solution, yielding an improved whale optimization algorithm (IWOA). Second, we assess IWOA with various transfer functions that convert continuous solutions to binary ones. The proposed algorithm is combined with the K-nearest neighbor classifier as a feature selection method to identify feature subsets that enhance classification accuracy while limiting the number of selected features. The proposed algorithm was compared with binary versions of the basic whale optimization algorithm, particle swarm optimization, the genetic algorithm, the antlion optimizer and the grey wolf optimizer on 27 common UCI datasets. Optimization results demonstrate that the proposed IWOA not only significantly improves on the basic whale optimization algorithm but also performs considerably better than the other algorithms.[...] Read more.
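The two ingredients the abstract names can be sketched briefly: an inertia weight `w` applied to the best-so-far position inside WOA's encircling update, and an S-shaped transfer function that maps the resulting continuous position to a bit. This Python sketch shows one dimension of one whale; the exact update rule, parameter values, and function names are illustrative assumptions rather than the paper's precise formulation.

```python
import math
import random

def sigmoid_transfer(x):
    """S-shaped transfer function mapping a continuous position to [0, 1]."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_step(x_star, x, a, w, rng=random):
    """One inertia-weighted encircling step for a single dimension,
    followed by sigmoid binarization (illustrative, not the paper's exact rule)."""
    r1, r2 = rng.random(), rng.random()
    A = 2 * a * r1 - a           # WOA coefficient A, shrinks as a decays
    C = 2 * r2                   # WOA coefficient C
    d = abs(C * w * x_star - x)  # distance to the inertia-weighted best whale
    x_new = w * x_star - A * d   # continuous position update
    # Convert to a bit: select the feature with probability S(x_new).
    return 1 if rng.random() < sigmoid_transfer(x_new) else 0
```

In a feature-selection setting, each bit of the resulting binary vector marks whether a feature is kept, and the KNN classification accuracy on the selected subset serves as the fitness.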
In the recent past, there has been a rapid increase in the number of vehicles and diversification of road networks worldwide. The biggest challenge now lies in how to monitor and analyse the behaviour of vehicle drivers as a catalyst to road safety. Driver behaviour depends on the state and nature of the road, the state of the driver, vehicle conditions, and the actions of other road users, among other factors. This paper illustrates the ability of Dynamic Bayesian Networks to determine driving styles with respect to acceleration, cornering and braking patterns. Bayesian Networks are probabilistic graphical models that map a set of variables and their conditional dependencies. Sample test results showed that the 2-Time-slice Bayesian Network model is suitable for generating driver profiles using only four GPS data parameters, namely speed, altitude, direction and signal strength against time. The model classifies driver profiles into two sets of observations: driver behaviour and nature of the operational environment. Adoption of the model could offer a cost-effective, easy-to-implement solution with applications in vehicle driver recruiting firms, vehicle insurance companies, and transport and road safety authorities, among other sectors.[...] Read more.
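The core computation in a 2-Time-slice Bayesian Network is forward filtering: roll the belief over the hidden state one time slice forward through the transition model, then correct it with the likelihood of the current observation. A minimal sketch in Python follows; the two driving states, the discretized speed observation, and every probability here are hypothetical placeholders, since the abstract does not give the model's actual structure or parameters.

```python
import numpy as np

# Hypothetical two-state model: 0 = calm driving, 1 = aggressive driving.
T = np.array([[0.9, 0.1],    # transition matrix: P(state_t | state_{t-1})
              [0.2, 0.8]])
# Emission over a discretized speed observation: 0 = low speed, 1 = high speed.
E = np.array([[0.8, 0.2],    # P(obs | calm)
              [0.3, 0.7]])   # P(obs | aggressive)

def filter_step(belief, obs):
    """One forward-filtering step of a 2-TBN: predict, then correct."""
    predicted = belief @ T            # roll the belief forward one time slice
    updated = predicted * E[:, obs]   # weight by the observation likelihood
    return updated / updated.sum()    # renormalize to a probability distribution

belief = np.array([0.5, 0.5])         # start undecided between the two styles
for obs in [1, 1, 0, 1]:              # a short stream of speed readings
    belief = filter_step(belief, obs)
```

Repeating this step over a GPS trace yields a running posterior over driving styles, which is the kind of driver profile the paper derives from speed, altitude, direction and signal strength.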
The current state of cancer research shows that there are still improvements to be investigated, which leads researchers to remain actively involved in the field. In this study, a hybrid machine learning method is proposed in which two filters are assessed along with a wrapper approach. Typically, filters prioritize the features, while wrappers contribute to subset identification. Though filters and wrappers can be used independently, they produce excellent results when applied in sequence. The wrapper-filter combination plays a major role in feature selection. Yet, incorporating the best strategy for feature space analysis is crucial in this concern. Thus, we introduce an Evolutionary Algorithm in the proposed study to search the feature space for an informative gene subset. Though there are several gene selection approaches for cancer classification, many of them suffer from low classification accuracy and huge gene subsets for prediction. Hence, we propose an Evolutionary Algorithm to overcome this problem. The proposed approach is evaluated on five microarray datasets, where three of them yield 100% accuracy. Regardless of the number of genes selected, both filters provide the same performance throughout the datasets used. As a consequence, the Evolutionary Algorithm for feature space search is highlighted for its performance in gene subset selection.[...] Read more.
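A wrapper search of this kind can be sketched compactly: an evolutionary algorithm evolves binary feature masks, and the fitness of each mask is the accuracy of a classifier trained on the selected features. The Python sketch below uses a leave-one-out k-NN wrapper and a tiny generational GA on toy data; the population size, operators, and data are invented for illustration, and the paper's actual filters and parameter choices are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_accuracy(X, y, mask, k=3):
    """Leave-one-out k-NN accuracy on the selected feature subset
    (a simple wrapper fitness; classifier settings are assumed)."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    d = np.linalg.norm(Xs[:, None] - Xs[None, :], axis=2)
    np.fill_diagonal(d, np.inf)           # leave-one-out: exclude the point itself
    nn = np.argsort(d, axis=1)[:, :k]     # indices of the k nearest neighbours
    pred = np.array([np.bincount(y[row]).argmax() for row in nn])
    return (pred == y).mean()

def evolve(X, y, pop=20, gens=15, mut=0.1):
    """Tiny generational GA over binary feature masks."""
    n = X.shape[1]
    P = rng.random((pop, n)) < 0.5        # random initial masks
    for _ in range(gens):
        fit = np.array([knn_accuracy(X, y, m) for m in P])
        P = P[np.argsort(fit)[::-1]]      # sort by fitness, best first
        elite = P[: pop // 2]             # truncation selection
        kids = []
        for _ in range(pop - len(elite)):
            a, b = elite[rng.integers(len(elite), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
            child ^= rng.random(n) < mut                 # bit-flip mutation
            kids.append(child)
        P = np.vstack([elite] + kids)
    fit = np.array([knn_accuracy(X, y, m) for m in P])
    return P[fit.argmax()], fit.max()

# Toy data: only the first two of eight features carry the class signal.
X = rng.normal(size=(60, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
mask, acc = evolve(X, y)
```

In the hybrid scheme described in the abstract, the filters would first rank the genes and prune the search space, after which a search like this one identifies the final informative subset.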
High Blood Pressure (HBP) is a condition of the human body that develops due to physical and psychological changes. Nowadays, it is one of the most prevalent problems in human beings irrespective of age, place, and profession, and the number of HBP victims is increasing rapidly across the globe. HBP remains undiagnosed in the majority of patients because most of the affected people are not aware of it. To overcome this problem, this paper proposes a new approach that uses an ABF (Arterial Blood Flow) function to predict whether a person is prone to HBP. In this approach, an impact factor for each attribute is calculated based on the attribute value. Both the attribute value and the corresponding impact factor are used by the ABF function to predict whether a person is prone to HBP. We evaluated the proposed approach on a real-world dataset consisting of 1100 patient records in the age group between 18 and 65. Our approach outperforms the J48, Naive Bayes, and rule-based classifiers in predictive accuracy.[...] Read more.
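The general shape of such a scoring approach can be sketched as follows: each attribute value is mapped to an impact factor, the factors are combined into a score, and a threshold decides proneness. The abstract does not give the actual ABF formula, attributes, or thresholds, so every value, attribute name, and cutoff in this Python sketch is an invented placeholder standing in for the paper's function.

```python
# Hypothetical impact factor per attribute value; all cutoffs are invented.
IMPACT = {
    "age":      lambda v: 0.3 if v >= 45 else 0.1,
    "bmi":      lambda v: 0.4 if v >= 30 else 0.15,
    "systolic": lambda v: 0.6 if v >= 130 else 0.2,
}

def abf_score(record):
    """Combine per-attribute impact factors into a single risk score
    (a hedged stand-in for the paper's ABF function)."""
    return sum(f(record[attr]) for attr, f in IMPACT.items())

def prone_to_hbp(record, threshold=0.9):
    """Flag a record as prone to HBP when its score reaches the threshold."""
    return abf_score(record) >= threshold

patient = {"age": 52, "bmi": 31, "systolic": 142}
print(prone_to_hbp(patient))   # 0.3 + 0.4 + 0.6 = 1.3 >= 0.9, prints True
```

Because each attribute contributes through its own impact factor, the scheme stays interpretable, which is a plausible reason such a hand-crafted function can compete with J48 and Naive Bayes on a focused dataset.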