IJITCS Vol. 9, No. 4, Apr. 2017
A wireless sensor network's sensor nodes have scarce resources, are exposed to the open environment, and use wireless communication. These features make the network vulnerable to physical capture and security attacks, so adversaries attempt attacks such as false report injection. A false report injection attack generates a false alarm by forwarding a fabricated report to the base station; it confuses the user, lowers the reliability of the system, and depletes node energy in the process of delivering the false report. A dynamic en-route filtering scheme performs detection during data transfer, but it incurs unnecessary energy loss under continuous attack. To solve this problem, this paper proposes a scheme for deciding at run time whether or not to redistribute keys. The proposed scheme saves energy by detecting false reports at an earlier hop than the existing scheme, using fuzzy logic and the secret keys loaded onto each node in the key pre-distribution phase. Furthermore, it improves detection performance through appropriate key re-distribution. Experimental results show up to 52.33% energy savings and a detection-performance improvement of up to 18.57% over the existing scheme.
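As a rough illustration of the kind of fuzzy-logic decision the abstract describes, the sketch below combines two hypothetical inputs — the observed false-report ratio and the average hop at which reports are filtered — into a crisp score for deciding whether key re-distribution is worthwhile. The membership functions, weights, and input names are illustrative assumptions, not the paper's actual rules.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def redistribution_score(false_report_ratio, avg_detection_hop, max_hop=10):
    """Combine two fuzzy rule activations into a score in [0, 1].
    A high score suggests re-distributing keys is worthwhile.
    (Hypothetical inputs/shapes chosen for illustration only.)"""
    hop = avg_detection_hop / max_hop
    high_attack = tri(false_report_ratio, 0.3, 1.0, 1.7)  # "attack rate is high"
    late_detect = tri(hop, 0.3, 1.0, 1.7)                 # "reports filtered late"
    # Simple weighted aggregation of the two rule activations
    return 0.5 * high_attack + 0.5 * late_detect
```

A deployment could trigger re-distribution whenever the score exceeds a tuned cut-off (say 0.5), keeping keys static while attacks are rare and detection stays near the source.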
The Information Technology industry has used traditional relational databases for about 40 years. In recent years, however, commercial applications have undergone a substantial shift: stand-alone applications have been replaced with web applications, dedicated servers with shared general-purpose servers, and dedicated storage with networked storage. Lower cost, flexibility, and the pay-as-you-go model are the main reasons distributed computing became a reality; this is one of the most significant revolutions in Information Technology since the emergence of the Internet. Cloud databases such as BigTable, Sherpa, and SimpleDB are becoming familiar to the community. They have highlighted the limitations of current relational databases in terms of usability, flexibility, and provisioning. Cloud databases are essentially employed for data-intensive applications, such as the storage and mining of huge scientific or commercial data; these applications are flexible and multipurpose in nature. Numerous transactional data management applications, such as banking, online reservation, e-trade, and inventory management, have been produced. The databases supporting these applications must provide four important properties — Atomicity, Consistency, Isolation, and Durability (ACID) — yet providing them is not simple in the cloud. The goal of this paper is to identify the advantages and disadvantages of databases widely employed in cloud systems and to review the challenges in developing cloud databases.
The Apriori algorithm is one of the most popular data mining techniques, used for mining hidden relationships in large data. With parallelism, a large data set can be mined in less time. Apart from costly distributed systems, a computer supporting a multi-core environment can be used to apply parallelism. In this paper, an improved Apriori algorithm for the multi-core environment is proposed.
The main contributions of this paper are:
• An efficient Apriori algorithm that applies data parallelism in a multi-core environment by reducing the time taken to count the frequency of candidate itemsets.
• The performance of the proposed algorithm is evaluated for multiple cores on the basis of speedup.
• The performance of the proposed algorithm is compared with another such parallel algorithm, and in preliminary experiments it shows an improvement of more than 15%.
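The data-parallel candidate-counting step can be sketched as below, assuming transactions are tuples of items and candidates are frozensets; the round-robin partitioning and merge strategy are illustrative choices, not the paper's exact design.

```python
from collections import Counter
from multiprocessing import Pool

def count_chunk(args):
    """Count candidate-itemset occurrences in one partition of transactions."""
    transactions, candidates = args
    counts = Counter()
    for t in transactions:
        items = set(t)
        for cand in candidates:
            if cand <= items:          # candidate is a subset of the transaction
                counts[cand] += 1
    return counts

def parallel_count(transactions, candidates, workers=4):
    """Split the transactions across worker processes, count each partition
    independently, and merge the partial counts (data parallelism)."""
    chunks = [transactions[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(count_chunk, [(ch, candidates) for ch in chunks])
    total = Counter()
    for p in partials:
        total.update(p)
    return total
```

Because candidate counting dominates Apriori's runtime, parallelizing only this step already yields most of the achievable speedup on a multi-core machine.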
A power system load forecasting method using a wavelet neural network, with a decomposition-forecasting-reconstruction process and SPSS-based error analysis, is presented in this paper. First, the load sequence is decomposed by wavelet transform into wavelet coefficients at each scale. In this step, an appropriate wavelet function for decomposing the load must be chosen. By comparing the signal-to-noise ratio (SNR) and the mean square error (MSE) of the processed load under different wavelet functions, it is concluded that the most suitable wavelet function for the load sequence in this paper is the db4 wavelet. The wavelet coefficients at each scale are obtained by wavelet decomposition of the load: the db4 wavelet function is used to decompose the original sequence at 3 scales, and the high-frequency and low-frequency wavelet coefficients are separated by thresholding. Secondly, these wavelet coefficients are used as training samples input to a nonlinear regression neural network, and the forecasting result is obtained by wavelet reconstruction. Finally, the actual and forecast values are compared in SPSS, whose comprehensive statistical charting capability makes the results easy to plot and edit.
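The decompose-threshold-reconstruct pipeline above can be sketched as follows; for brevity this uses a one-level Haar transform in place of the paper's 3-scale db4 decomposition (which in practice would come from a wavelet library such as PyWavelets).

```python
def haar_decompose(x):
    """One-level Haar transform: pairwise averages (approximation, i.e.
    low-frequency) and pairwise half-differences (detail, high-frequency)."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert the one-level Haar transform exactly."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def denoise(x, threshold=0.5):
    """Decompose, zero out small detail coefficients, reconstruct —
    the thresholding step the abstract applies to the load sequence."""
    a, d = haar_decompose(x)
    d = [v if abs(v) > threshold else 0.0 for v in d]
    return haar_reconstruct(a, d)
```

In the paper's pipeline, the thresholded coefficients at each scale would be forecast by the neural network before reconstruction, rather than reconstructed directly.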
Over the last few years, the face of traditional learning has changed significantly due to the emergence of the web. Consequently, several learning systems have emerged, such as computer-based learning and web-based learning, meeting different kinds of educational needs of learners and educators alike. E-learning systems allow educators to distribute information, create content material, prepare assignments, engage in discussions, and manage distance classes, among other tasks. They accumulate a huge amount of data as a result of learners' interaction with the site. This data can be used to find students' learning patterns, based on which appropriate courses can be recommended to them. However, existing approaches to course recommendation offer the same course to all learners irrespective of their knowledge and skill level, which decreases their academic performance. This paper proposes an architecture for recommending courses to a learner based on his/her profile. The profile of a learner is created by applying the k-means algorithm to the learner's interaction data in Moodle. The results show that non-active learners should not be recommended advanced courses if they have obtained poor marks and are not active in the course concerned. In the initial stage, we discover learners' performance in a data mining course; this will later be extended to other courses as well.
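A minimal sketch of the profiling step, assuming each learner has been reduced to a numeric feature vector (e.g. login count, average quiz score) extracted from Moodle logs; the feature choice and this bare-bones k-means are assumptions for illustration, not the paper's implementation.

```python
import random

def dist2(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20, seed=0):
    """Basic k-means: returns centroids and each learner's cluster index."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each learner to the nearest centroid
        labels = [min(range(k), key=lambda j: dist2(p, centroids[j]))
                  for p in points]
        # Recompute each centroid as the mean of its cluster's members
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return centroids, labels
```

With k = 2, the two resulting clusters could be interpreted as "active" and "non-active" learner profiles, which is the distinction the recommendation rule above relies on.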
Interactions appearing regularly in a network may be disturbed by noise or by the random occurrence of events at some timestamps. Ignoring them may deprive us of a better understanding of the networks under consideration. To address this, researchers have attempted to find quasi-regular patterns in non-weighted dynamic networks. To the best of our knowledge, no work has been reported on mining such patterns in weighted dynamic networks. In this paper, we therefore present a novel method which mines maximal quasi-regular patterns on structure (MQRPS) and maximal quasi-regular patterns on weight (MQRPW) in weighted dynamic networks. We also provide a relationship between MQRPW and MQRPS which allows the proposed method to run only once even when both are required, thus reducing computation time. Further, the patterns so obtained are analysed to gain better insight into their nature using four parameters, viz. modularity, cliques, the most commonly used centrality measures, and intersection. Experiments on Enron-email and a synthetic dataset show that the proposed method, together with the relationship and analysis, is potentially useful for extracting previously unknown vital information.
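As a much-simplified illustration of quasi-regularity (not the paper's full MQRPS/MQRPW mining over subgraphs), the check below tests whether a single edge's occurrence timestamps recur with a near-constant period, allowing each gap to deviate by a bounded tolerance — the "disturbance by noise" the abstract refers to.

```python
def is_quasi_regular(timestamps, period, tolerance, min_sup):
    """Return True if the edge occurs at least `min_sup` times and every
    gap between consecutive occurrences is within `tolerance` of `period`.
    A tolerance of 0 recovers strictly regular (periodic) patterns."""
    if len(timestamps) < min_sup:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return all(abs(g - period) <= tolerance for g in gaps)
```

A pattern miner would apply a test of this flavour per edge (and, for MQRPW, per weight level) and then keep only the maximal patterns.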
Feature Selection (FS) is an important process for finding a minimal subset of features of the original data by removing redundant and irrelevant features; it aims to improve the efficiency of classification algorithms. Rough set theory (RST) is one of the effective approaches to feature selection, but it uses complete search over all subsets of features and dependency to evaluate them. The complete search is expensive and may not be feasible for large data due to its high cost. Therefore, meta-heuristic algorithms, especially Nature-Inspired Algorithms (NIAs), have been widely used to replace the reduction part of RST. This paper develops a new Feature Selection algorithm based on hybrid Binary Cuckoo Search and rough set theory for classification on nominal datasets. The developed algorithm is evaluated on five nominal datasets from the UCI repository against a number of similar NIAs. The results show that our algorithm achieves better FS than two known NIAs in fewer iterations, without significantly reducing classification accuracy.
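A minimal sketch of a binary cuckoo search for feature selection: solutions are 0/1 masks over the features, the Lévy flight is approximated here by random bit flips, and the fitness function is a caller-supplied stand-in for the rough-set dependency measure. All parameters and simplifications are illustrative assumptions, not the paper's algorithm.

```python
import random

def binary_cuckoo_search(fitness, n_features, n_nests=10, iters=50,
                         pa=0.25, seed=1):
    """Binary cuckoo search sketch. `fitness` scores a 0/1 feature mask
    (higher is better); `pa` is the fraction of worst nests abandoned
    and rebuilt each generation."""
    rnd = random.Random(seed)
    nests = [[rnd.randint(0, 1) for _ in range(n_features)]
             for _ in range(n_nests)]
    best = max(nests, key=fitness)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # Lévy-flight step approximated by independent bit flips,
            # which keeps the solution in {0,1}^n (binarization)
            new = [b ^ (rnd.random() < 0.1) for b in nest]
            if fitness(new) > fitness(nest):
                nests[i] = new
        # Abandon a fraction pa of the worst nests, rebuild them randomly
        nests.sort(key=fitness)
        for i in range(int(pa * n_nests)):
            nests[i] = [rnd.randint(0, 1) for _ in range(n_features)]
        best = max(nests + [best], key=fitness)
    return best
```

In the hybrid scheme described above, the fitness would reward masks whose rough-set dependency matches that of the full feature set while penalizing mask size, steering the search toward small reducts.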