IJISA Vol. 8, No. 4, Apr. 2016
Nature-inspired algorithms have recently gained appreciation for solving complex optimization and engineering problems. The black hole algorithm is one of the recent nature-inspired algorithms, drawing its inspiration from the black hole theory of the universe. In this paper, four formulations of a multi-objective black hole algorithm are developed by combining weighted objectives, secondary storage for managing candidate solutions, and a Genetic Algorithm (GA). These formulations are then applied to scheduling jobs on parallel machines while optimizing two criteria, namely maximum tardiness and weighted flow time. It is empirically verified that the GA-based multi-objective black hole algorithms lead to better results than their counterparts. Moreover, combining secondary storage with GA further improves the resulting job sequence. The proposed algorithms are also compared with some existing algorithms and empirically found to be better. The results are validated by numerical illustrations and statistical tests.
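The weighted-objective formulation mentioned in this abstract can be illustrated with a minimal sketch. The job representation, the single-machine evaluation of a sequence, and the weights `w1`/`w2` below are illustrative assumptions, not the authors' exact model:

```python
# Minimal sketch: combining maximum tardiness and weighted flow time
# into one weighted objective for a given job sequence.
# Job tuples and weights are hypothetical, for illustration only.

def weighted_objective(jobs, w1=0.5, w2=0.5):
    """jobs: list of (processing_time, due_date, weight) tuples,
    evaluated in the given order on a single machine."""
    t = 0
    max_tardiness = 0
    weighted_flow_time = 0
    for p, due, w in jobs:
        t += p                       # completion time of this job
        max_tardiness = max(max_tardiness, max(0, t - due))
        weighted_flow_time += w * t  # flow time weighted by job priority
    return w1 * max_tardiness + w2 * weighted_flow_time

jobs = [(3, 5, 1.0), (2, 4, 2.0), (4, 12, 1.0)]
print(weighted_objective(jobs))  # -> 11.5
```

An optimizer (black hole algorithm, GA, or a hybrid) would then search over job permutations to minimize this scalar value.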
Recent research has shown that the hidden Markov model (HMM) is a persuasive option for malware detection. However, some advanced metamorphic malware is able to overcome traditional HMM-based methods. The proposed approach provides a two-layer technique to overcome these challenges. Malware contains various sequences of opcodes, some of which are important and help detect the malware, while the rest cause interference. The important opcode sequences are extracted by eliminating partial sequences, because partial sequences of opcodes have more similarity to benign files. In this method, a sliding-window technique is used to extract the sequences. In this paper, HMMs are trained using the important opcode sequences, which leads to better results. Compared to previous methods, the results demonstrate that the proposed method is more accurate at metamorphic malware detection and classifies faster.
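The sliding-window extraction step can be sketched as follows. The window size, the benign-frequency threshold, and the toy opcode lists are illustrative assumptions; the paper's actual filtering criterion may differ:

```python
# Sketch of sliding-window opcode extraction: enumerate fixed-length
# windows, then drop windows that also occur in benign files (such
# shared partial sequences cause interference during HMM training).

from collections import Counter

def windows(opcodes, size=3):
    """All contiguous opcode windows of the given size."""
    return [tuple(opcodes[i:i + size]) for i in range(len(opcodes) - size + 1)]

def important_sequences(sample, benign_samples, size=3, max_benign_freq=1):
    benign_counts = Counter()
    for b in benign_samples:
        benign_counts.update(windows(b, size))
    # Keep only windows that are rare (or absent) in benign code.
    return [w for w in windows(sample, size)
            if benign_counts[w] <= max_benign_freq]

sample = ["push", "mov", "xor", "call", "ret"]
benign = [["push", "mov", "pop", "ret"], ["push", "mov", "pop"]]
print(important_sequences(sample, benign))
```

The surviving windows would then be concatenated into the observation sequences used to train the HMMs.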
The orthogonal frequency division multiplexing with offset quadrature amplitude modulation (OFDM-OQAM) technique has drawn significant interest in recent years. However, most existing OFDM peak-to-average power ratio (PAPR) reduction schemes cannot be used directly in the OFDM-OQAM system. In this paper, a modified scheme called overlapped segmental Active Constellation Extension (OS-ACE) is proposed to deal with the high PAPR problem specifically in the OFDM-OQAM system. In the proposed OS-ACE scheme, the input signals are divided into a number of overlapped segments and the ACE operation is then performed on each segment. Simulation results show that the modified scheme provides better performance in the OFDM-OQAM system than the conventional ACE scheme applied directly to it, and even outperforms the conventional ACE scheme applied in the OFDM system.
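For readers unfamiliar with the metric being reduced, the PAPR of a discrete-time signal can be computed as below. The example samples are arbitrary; real OFDM-OQAM symbols would come from the modulator:

```python
# Sketch: peak-to-average power ratio (PAPR) of complex baseband
# samples, in dB -- the quantity OS-ACE aims to reduce.

import math

def papr_db(samples):
    """PAPR = peak instantaneous power / mean power, expressed in dB."""
    powers = [abs(x) ** 2 for x in samples]
    peak = max(powers)
    avg = sum(powers) / len(powers)
    return 10 * math.log10(peak / avg)

signal = [1 + 0j, 0.5 + 0.5j, 2 + 1j, 0.2 - 0.3j]
print(round(papr_db(signal), 2))
```

ACE-style schemes lower this ratio by extending outer constellation points so that time-domain peaks are clipped without degrading the decision regions.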
Today, Massive Open Online Courses (MOOCs) have the potential to enable free online education on an enormous scale. However, a concern often raised about MOOCs is the consistently high drop-out rate of MOOC learners. Although many thousands of learners enroll in these courses, only a very small proportion actually complete them.
This work is at the heart of this issue. It investigates how multi-agent systems and ontologies can describe learning preferences and adapt educational resources to the learner profile in MOOC platforms. The primary aim of this work is to exploit the potential of multi-agent systems and ontologies to improve learners' engagement and motivation in MOOC platforms and thereby reduce drop-out rates.
As part of its contribution, the paper proposes a Multi-Agent System (MAS) model, based on ontologies, for adapting the learning resources proposed to a learner in a MOOC platform according to the learner's preferences. To model an adequate online course, the learner's preferences are determined through an analysis of learner behavior relying on the Myers-Briggs Type Indicator (MBTI). The proposed model integrates the main functionalities of an intelligent tutoring system: profiling, profile updating, and the selection, adaptation, and presentation of adequate resources. The architecture of the proposed system is composed of two main agents, four ontologies, and a set of implemented modules.
It is well known that multicast routing is a combinatorial problem of finding the optimal path between source-destination pairs. Traditional approaches solve this problem by building a spanning tree for the network, which is modeled as an undirected weighted graph. This paper proposes a Modified Ant Colony Optimization (MACO) algorithm based on the Ant Colony System (ACS), with modifications to the configuration of the starting movement and to the local update rule, to overcome basic limitations of ACS such as poor initialization and a slow convergence rate. It is shown that the proposed MACO converges faster and consumes less time than the conventional ACS to reach the desired solution.
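The ACS local update rule that the paper modifies can be sketched as follows. The decay rate `rho`, the initial pheromone level `tau0`, and the edge values are illustrative, not the paper's tuned parameters:

```python
# Sketch of the standard ACS local pheromone update: after an ant
# traverses edge (i, j), the pheromone on that edge is decayed
# toward the initial level tau0, encouraging exploration.

def local_update(pheromone, i, j, rho=0.1, tau0=0.01):
    """ACS local update: tau <- (1 - rho) * tau + rho * tau0."""
    pheromone[(i, j)] = (1 - rho) * pheromone[(i, j)] + rho * tau0
    return pheromone[(i, j)]

pheromone = {(0, 1): 0.5}
print(local_update(pheromone, 0, 1))
```

MACO's contribution, per the abstract, is to alter this local update and the ants' starting configuration to speed up convergence.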
Modeling the uncertain aspects of the world in ontologies is attracting considerable interest from ontology builders, especially in the World Wide Web community. This paper defines a way of handling uncertainty in description logic ontologies without remodeling existing ontologies or altering the syntax of existing ontology modeling languages. We show that the source of vagueness in an ontology is vague attributes and vague roles. Therefore, to have a clear separation between crisp concepts and vague concepts, the set of roles R is split into two distinct sets R_c and R_v, representing the set of crisp roles and the set of vague roles respectively. Similarly, the set of attributes A is split into two distinct sets A_c and A_v, representing the set of crisp attributes and the set of vague attributes respectively. Concepts are therefore clearly classified as crisp or vague depending on whether vague attributes or vague roles are used in their conceptualization. The concept of rough sets, introduced by Pawlak, is used to measure the degree of satisfiability of vague concepts as well as vague roles. In this approach, the cost of re-engineering existing ontologies to support reasoning over the uncertain aspects of the world is minimal.
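Pawlak's rough-set approximations, which underpin the degree-of-satisfiability measure mentioned here, can be sketched briefly. The universe, partition, and target set below are illustrative, not taken from the paper:

```python
# Sketch of Pawlak's rough-set lower/upper approximations. The lower
# approximation holds individuals that certainly satisfy the vague
# concept; the upper approximation holds those that possibly do.
# Their size ratio (accuracy) acts as a degree of satisfiability.

def approximations(equivalence_classes, target):
    target = set(target)
    lower, upper = set(), set()
    for cls in equivalence_classes:
        cls = set(cls)
        if cls <= target:      # entirely inside -> certainly satisfies
            lower |= cls
        if cls & target:       # overlaps -> possibly satisfies
            upper |= cls
    accuracy = len(lower) / len(upper) if upper else 1.0
    return lower, upper, accuracy

partition = [{1, 2}, {3, 4}, {5}]
lower, upper, acc = approximations(partition, {1, 2, 3})
print(sorted(lower), sorted(upper), acc)
```

A concept is crisp when the two approximations coincide (accuracy 1.0) and rough, i.e. vague, otherwise.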
With the advancement of reconfigurable computing, the Field Programmable Gate Array (FPGA) has gained significance owing to its low cost and fast prototyping. Parallelism, specialization, and hardware-level adaptation are the key features of reconfigurable computing. An FPGA is a programmable chip that can be configured or reconfigured by the designer to implement any digital circuit. One major challenge in FPGA design is the placement problem. In the placement phase, logic functions are assigned to specific cells of the circuit. The quality of the placement of the logic blocks determines the overall performance of the logic implemented in the circuit. FPGA placement is a multi-objective optimization problem that primarily involves minimizing three or more objective functions. In this paper, we propose a novel strategy to solve the FPGA placement problem using the Non-dominated Sorting Genetic Algorithm (NSGA-II) and the Simulated Annealing technique. Experiments were conducted on multicore processors, and metrics such as CPU time were measured to test the efficiency of the proposed algorithm. The experimental results show that the proposed algorithm reduces CPU time by an average of 15% compared to the Genetic Algorithm, 12% compared to Simulated Annealing, and approximately 6% compared to the Genetic Annealing algorithm.
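The core of NSGA-II is non-dominated sorting, which can be illustrated with a minimal first-front extraction. The three objective values per candidate placement (e.g. wirelength, delay, congestion, all minimized) are made-up numbers for illustration:

```python
# Sketch of Pareto dominance and first-front extraction, the building
# block of NSGA-II used here for multi-objective FPGA placement.

def dominates(a, b):
    """a dominates b if it is no worse in every objective (minimized)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(population):
    """Solutions not dominated by any other candidate (Pareto front 1)."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

placements = [(3, 5, 2), (2, 6, 2), (4, 4, 3), (3, 5, 3)]
print(first_front(placements))
```

NSGA-II repeatedly partitions the population into such fronts and breeds from the best ones; the paper's hybrid adds Simulated Annealing on top of this selection scheme.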
The impact of social media platforms such as YouTube, Twitter, and Facebook on the modern world has led to enormous growth in the volume of video data on the cloud and the web. The evolution of smartphones and tablets could be one reason for this increasing rate of video data on the web. Owing to the rapid growth of web videos, it is becoming difficult to identify popular, non-popular, and averagely popular videos without watching their content. Clustering web videos based on their metadata into 'Popular', 'Non-Popular', and 'Average Popular' is a complex research question for social media and computer science researchers. In this work, we propose two effective methods to cluster web videos based on their meta-objects. Large-scale web video meta-objects such as length, view count, number of comments, and rating information are considered for the knowledge discovery process. Two clustering algorithms, Expectation Maximization (EM) and Distribution-Based (DB) clustering, are used to form three types of clusters. The resulting clusters are analyzed to identify the popular, averagely popular, and non-popular video clusters. The results of the EM and DB clusters are also compared as a step in the knowledge discovery process.
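The EM clustering step can be sketched for a single metadata feature. The one-dimensional two-component mixture, the shared fixed variance, the toy data (roughly log-scaled view counts), and the initial means are all simplifying assumptions for illustration; the paper clusters several features into three groups:

```python
# Sketch of EM for a 1-D Gaussian mixture, the kind of model behind
# EM clustering of a single video-metadata feature such as log view
# count. Variance is fixed and shared for brevity.

import math

def gaussian(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_1d(data, mus, var=1.0, iters=50):
    """Fit a k-component mixture; return final means and hard labels."""
    k = len(mus)
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [weights[j] * gaussian(x, mus[j], var) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate means and mixing weights
        for j in range(k):
            nj = sum(r[j] for r in resp)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            weights[j] = nj / len(data)
    labels = [max(range(k), key=lambda j: r[j]) for r in resp]
    return mus, labels

views = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]   # e.g. log10 of view counts
mus, labels = em_1d(views, mus=[0.0, 10.0])
print(labels)
```

After fitting, the cluster with the highest mean view count would be labeled 'Popular', the lowest 'Non-Popular', and the middle one 'Average Popular'.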