IJMECS Vol. 11, No. 12, Dec. 2019
Persistent and quality graduation rates of students are increasingly important indicators of progressive and effective educational institutions. Timely analysis of students’ data to guide instructors in providing academic interventions to students who are at risk of performing poorly in their courses or of dropping out is vital for academic achievement. In addition, there is a need to mine relationships among performance attributes in order to generate comprehensible patterns. However, there is a dearth of knowledge on predicting students’ performance from such patterns. This paper therefore adopts hierarchical cluster analysis (HCA) to analyze a students’ performance dataset, discovering the optimal number of failed-course clusters and partitioning the courses into groups, and association rule mining to extract interesting course-status associations. Agglomerative HCA with Ward’s linkage method produced the best clustering structure (five clusters), with a coefficient of 92% and a silhouette width of 0.57. The Apriori algorithm with support (0.5%), confidence (80%) and lift (1) thresholds was used to extract rules with the student’s status as the consequent. Out of the twenty-one courses taken by students in the first year, seven frequently occur together as failed courses, and their impact on the respective students’ performance status was assessed through the rules. It is conjectured that early intervention by instructors and the management of educational activities on these seven courses will increase students’ learning outcomes, leading to an increased graduation rate within the minimum course duration, which is the overarching objective of higher educational institutions. As further work, other machine learning and nature-inspired tools will be integrated for adaptive learning and for the optimization of rules, respectively.
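To make the rule-mining thresholds mentioned in this abstract concrete, the following is a minimal Python sketch of how support, confidence and lift are computed for a single course-status rule. The course codes and status labels are hypothetical illustrations, not taken from the paper’s dataset:

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence and lift for the rule antecedent -> consequent.

    transactions: list of sets; each set holds one student's failed courses
    plus a final-status item, mirroring the course-status associations mined
    with Apriori in the paper.
    """
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t)          # antecedent count
    c = sum(1 for t in transactions if consequent <= t)          # consequent count
    ac = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = ac / n
    confidence = ac / a if a else 0.0
    lift = confidence / (c / n) if c else 0.0                    # lift > 1: positive association
    return support, confidence, lift

# Hypothetical student records: failed courses plus a status label.
records = [
    {"MTH101", "PHY101", "status=probation"},
    {"MTH101", "PHY101", "status=probation"},
    {"MTH101", "status=good"},
    {"CHM101", "status=good"},
]
s, c, l = rule_metrics(records, {"MTH101", "PHY101"}, {"status=probation"})
```

A rule survives the paper’s filter only if its support, confidence and lift clear the 0.5%, 80% and 1 thresholds respectively; here the toy rule has support 0.5, confidence 1.0 and lift 2.0.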
Testing is one of the crucial activities of the software development life cycle, ensuring the delivery of a high-quality product. Since software testing consumes a significant amount of resources, thoroughly testing only the modules that are likely to be defective, instead of all modules, allows high-quality software to be delivered at lower cost. Software defect prediction, which has now become an essential part of software testing, can achieve this goal. This research presents a framework for software defect prediction using feature selection and ensemble learning techniques. The framework consists of four stages: 1) Dataset Selection, 2) Pre-Processing, 3) Classification, and 4) Reflection of Results. The framework is implemented on six publicly available cleaned NASA MDP datasets, and performance is reported using various measures, including F-measure, Accuracy, MCC and ROC. First, the performance of all search methods within the framework on each dataset is compared, and the method with the highest score on each performance measure is identified. Second, the results of the proposed framework with all search methods are compared with the results of 10 well-known supervised classification techniques. The results show that the proposed framework outperformed all of the other classification techniques.
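The evaluation measures named in this abstract (F-measure and MCC) and a simple ensemble combiner can be sketched in a few lines of Python. The toy labels and the three base classifiers’ outputs below are invented for illustration and do not come from the NASA MDP datasets:

```python
import math

def confusion(y_true, y_pred):
    """Binary confusion-matrix counts (1 = defective module)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def f_measure(tp, tn, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def majority_vote(predictions):
    """Ensemble a list of per-classifier prediction lists by simple majority."""
    return [1 if sum(col) * 2 > len(col) else 0 for col in zip(*predictions)]

# Hypothetical labels and outputs of three base classifiers.
y_true = [1, 1, 0, 0, 1]
ensemble = majority_vote([[1, 0, 0, 0, 1],
                          [1, 1, 0, 1, 1],
                          [1, 0, 0, 0, 0]])
tp, tn, fp, fn = confusion(y_true, ensemble)
```

Majority voting is only one of many ensemble schemes; the paper’s framework may combine learners differently, but the measures computed here match the ones it reports.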
This article presents Differential Evolution (DE) for determining optimum fractional proportional-integral-derivative (FPID) controller parameters for an automatic voltage regulator (AVR) system. The suggested strategy is a straightforward yet efficient algorithm with balanced exploration and exploitation capacities, searching the solution space efficiently for the best outcome. The algorithm's simplicity offers quick, high-quality tuning of the optimum FPID controller parameters. A time-domain performance index is used to validate the suggested DE-FPID controller. The proposed technique was found to be effective and robust in improving the transient response of the AVR system compared with PID controllers tuned by Ziegler-Nichols (ZN), and with FPID controllers tuned by Invasive Weed Optimization (IWO) and the Sine-Cosine Algorithm (SCA).
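The classic DE/rand/1/bin scheme the abstract refers to can be sketched compactly. In the sketch below the cost function is a stand-in for the paper’s time-domain performance index (evaluating that index requires simulating the AVR loop), and the five gains, their target values and their bounds are hypothetical:

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9, iters=100, seed=1):
    """Minimal DE/rand/1/bin: mutate, binomially crossover, greedily select."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation: base vector plus scaled difference, clamped to bounds.
            mutant = [min(max(pop[a][d] + F * (pop[b][d] - pop[c][d]),
                              bounds[d][0]), bounds[d][1]) for d in range(dim)]
            # Binomial crossover with at least one mutant component.
            jrand = rng.randrange(dim)
            trial = [mutant[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            f = cost(trial)
            if f <= fit[i]:           # greedy one-to-one selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Hypothetical stand-in objective: distance to assumed "good" FPID gains
# (Kp, Ki, Kd, lambda, mu); a real run would minimize a time-domain index.
target = [1.2, 0.9, 0.4, 1.1, 0.8]
cost = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
gains, best_cost = differential_evolution(cost, [(0.0, 2.0)] * 5)
```

The balance the abstract mentions comes from F (exploration via scaled differences) and CR (exploitation by retaining parent components).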
The paper considers various aspects of the interaction and movement of bodies. The stability of the Solar System is shown through the example of the evolution of the orbit of Mars over 100 million years. The optimal motion of a spacecraft to the vicinity of the Sun is considered. The results of an exact solution to the problem of the interaction of N bodies that form a rotating structure are presented. The evolution of two asteroids, Apophis and 1950DA, is shown, as well as ways to transform them into Earth satellites. The cause of the excess precession of Mercury's perihelion is explained. Examples of the simulation of globular star clusters are given. As a result of the interaction of the bodies of the Solar System, the parameters of the orbital and rotational motions of the Earth change. This change alters the distribution of solar heat over the surface of the Earth. In contrast to the works of previous authors, a new understanding of the evolution of the orbital and rotational motions of the Earth has been obtained. In particular, it has been established that the Earth's orbit and its axis of rotation precess about different directions in space. The paper compares the distribution of the amount of solar heat, i.e. insolation, on the surface of the Earth in different epochs. The insolation periods of climate change are shown; they coincide with the known changes of the paleoclimate. The changes in insolation for different time intervals, including 20 million years ago, are given. Changes in insolation in the contemporary epoch and over the next million years are also shown. Sunrises and sunsets are also evolving. The duration of the polar days and nights in different epochs is given as a distribution by latitude. The paper acquaints readers with the latest scientific results in a popular form and is useful for students and graduate students choosing topics for their term papers and dissertations.
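Long-term orbital integrations of the kind this abstract summarizes rely on integrators that do not let energy drift over millions of steps. As a minimal illustration (not the paper’s actual method or units), here is a kick-drift-kick leapfrog step for a test body on a circular orbit about a fixed central mass, in normalized units where GM = 1:

```python
import math

def leapfrog_orbit(steps=1000, dt=0.01, mu=1.0):
    """Kick-drift-kick leapfrog for a test body about a fixed mass (GM = mu).

    Starts on a circular orbit of radius 1 with circular speed sqrt(mu/r) = 1;
    returns the orbital radius after each step so drift can be inspected.
    """
    x, y = 1.0, 0.0
    vx, vy = 0.0, 1.0

    def acc(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -mu * x / r3, -mu * y / r3      # inverse-square attraction

    radii = []
    for _ in range(steps):
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx; y += dt * vy                 # drift
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        radii.append(math.hypot(x, y))
    return radii

radii = leapfrog_orbit()
```

Because leapfrog is symplectic, the radius oscillates within O(dt²) of 1 instead of spiralling, which is the property that makes 100-million-year integrations meaningful.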
Due to the limitations of physical memory, it is quite difficult to analyze and process big datasets. The Hadoop MapReduce algorithm has been widely used to process and mine such large sets of data using the Map and Reduce functions. The main contribution of this paper is to implement the MapReduce programming model to analyze a large set of fingerprint images, which cannot normally be processed at once due to limited physical memory, in order to find the features of these images. At first, the images are maintained in an image data store so that they can be preprocessed and the features of each user's biometric trait extracted and stored in a database. The algorithm preprocesses and extracts the features (ridge endings and bifurcations) from multiple fingerprint images at the same time. The feature points are detected using the Crossing Number (CN) concept within the proposed algorithm. It is validated using data taken from the National Institute of Standards and Technology's (NIST) Special Database 4, which consists of fingerprint images for many users. Our experiments on this large set of fingerprint images show a significant reduction in processing time, nearly halving it, when extracting the features of these images using the proposed MapReduce approach.
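The Crossing Number test and the map/reduce split described in this abstract can be sketched in plain Python. CN is half the number of 0/1 transitions around a ridge pixel in a thinned binary image: CN = 1 marks a ridge ending and CN = 3 a bifurcation. The tiny "Y"-shaped ridge pattern below is invented for illustration, not taken from NIST Special Database 4:

```python
def crossing_number(img, r, c):
    """CN = half the number of 0<->1 transitions in the 8-neighbourhood of (r, c)."""
    nb = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
          img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    return sum(abs(nb[i] - nb[(i + 1) % 8]) for i in range(8)) // 2

def map_minutiae(image):
    """Map step: emit (minutia_type, 1) pairs for one thinned binary image."""
    out = []
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            if image[r][c] == 1:
                cn = crossing_number(image, r, c)
                if cn == 1:
                    out.append(("ending", 1))
                elif cn == 3:
                    out.append(("bifurcation", 1))
    return out

def reduce_counts(pairs):
    """Reduce step: sum the emitted counts per minutia type."""
    totals = {}
    for key, value in pairs:
        totals[key] = totals.get(key, 0) + value
    return totals

# A tiny thinned "Y"-shaped ridge pattern (1 = ridge pixel), purely illustrative.
y_ridge = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
totals = reduce_counts(map_minutiae(y_ridge))
```

In the paper’s setting, Hadoop would run `map_minutiae` on many images in parallel and shuffle the emitted pairs to reducers, which is where the reported near-halving of processing time comes from; here the two phases are simply chained on one image.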