IJITCS Vol. 7, No. 5, Apr. 2015
In this paper, feature selection and parameter determination for SVM are cast as an energy minimization procedure. The problem is very difficult when the number of features is very large and the features are highly correlated. We formulate feature selection and parameter determination in SVM as a combinatorial optimization problem and solve it with a stochastic method that is theoretically guaranteed to reach the global optimum. Several public datasets are employed to evaluate the performance of our approach, including DNA microarray datasets, which are characterized by very large numbers of features. To further validate the approach, we apply it to image classification, extracting the feature descriptors of the images with the Pyramid Histogram of Oriented Gradients. The proposed approach was compared with twenty feature selection methods; experimental results indicate that its classification accuracy rates exceed those of the other approaches.[...]
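The abstract does not name the stochastic method, but simulated annealing is a standard choice with the stated global-optimum guarantee (under a sufficiently slow cooling schedule). The sketch below is a minimal illustration under my own assumptions, not the paper's method: a nearest-centroid classifier stands in for the SVM, and the energy is leave-one-out error plus a small per-feature penalty. Extending the search state with SVM hyperparameters such as C would follow the same scheme.

```python
import math
import random

def loo_error(X, y, feats):
    """Leave-one-out error of a nearest-centroid classifier on the selected features."""
    n, err = len(X), 0
    for i in range(n):
        sums, counts = {}, {}
        for k in range(n):
            if k == i:
                continue
            c = y[k]
            if c not in sums:
                sums[c], counts[c] = [0.0] * len(feats), 0
            for a, j in enumerate(feats):
                sums[c][a] += X[k][j]
            counts[c] += 1
        pred = min(sums, key=lambda c: sum(
            (X[i][j] - sums[c][a] / counts[c]) ** 2 for a, j in enumerate(feats)))
        err += (pred != y[i])
    return err / n

def energy(mask, X, y):
    """Classification error plus a small penalty per selected feature."""
    feats = [j for j, m in enumerate(mask) if m]
    if not feats:
        return 2.0  # selecting no features at all is worst
    return loo_error(X, y, feats) + 0.01 * len(feats)

def anneal(X, y, n_features, steps=200, t0=1.0, seed=0):
    """Simulated annealing over feature-subset bit masks."""
    rng = random.Random(seed)
    mask = [1] * n_features
    e = energy(mask, X, y)
    best_mask, best_e = mask[:], e
    for s in range(steps):
        t = t0 * (1 - s / steps) + 1e-3        # linear cooling schedule
        cand = mask[:]
        cand[rng.randrange(n_features)] ^= 1   # flip one feature in or out
        ce = energy(cand, X, y)
        if ce < e or rng.random() < math.exp((e - ce) / t):
            mask, e = cand, ce                 # accept (possibly uphill) move
        if e < best_e:
            best_mask, best_e = mask[:], e
    return best_mask, best_e

# Toy data: features 0-1 are informative, features 2-3 are constant noise.
X = [[0, 0, 1, 1], [0.2, 0.1, 1, 1], [0.1, 0.3, 1, 1],
     [5, 5, 1, 1], [5.2, 4.9, 1, 1], [4.8, 5.1, 1, 1]]
y = [0, 0, 0, 1, 1, 1]
best_mask, best_e = anneal(X, y, 4)
```

The feature penalty plays the role of the combinatorial pressure toward small subsets; the uphill-acceptance probability `exp((e - ce) / t)` is what distinguishes annealing from greedy search.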
Recently, with the emergence of mobile technology and mobile banking, debit and credit transactions have become the most common transactions carried out with such technologies. In this research, we specify concurrent debit and credit transactions in temporal logics, namely CTL (Computation Tree Logic) and LTL (Linear-Time Temporal Logic). These specifications describe the infinite histories that may be produced when such concurrent transactions are iterated infinitely many times. We represent the infinite histories as a model of temporal logic formulae, so that model checkers such as NuSMV or SPIN can carry out exhaustive checks of the correctness of the concurrent debit and credit transactions. We further argue that the serializability condition is too strict, and suggest a relaxed condition that keeps the database consistent and is easier to encode into temporal logic formulae.[...]
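As a toy illustration of the consistency issue (not the paper's CTL/LTL encoding), the sketch below enumerates all interleavings of one debit and one credit transaction, each modelled as a read followed by a write, and checks which histories preserve the balance invariant. A model checker such as NuSMV or SPIN explores such interleavings exhaustively in the same spirit; all names and values here are hypothetical.

```python
def interleavings(t1, t2):
    """All order-preserving merges of two operation sequences."""
    if not t1:
        yield list(t2); return
    if not t2:
        yield list(t1); return
    for rest in interleavings(t1[1:], t2):
        yield [t1[0]] + rest
    for rest in interleavings(t1, t2[1:]):
        yield [t2[0]] + rest

def run(history, initial=100):
    """Execute a history of (txn id, op, delta) steps without any locking."""
    x, local = initial, {}
    for tid, op, delta in history:
        if op == 'read':
            local[tid] = x              # read balance into a txn-local copy
        else:
            x = local[tid] + delta      # write back the locally computed balance
    return x

debit  = [(1, 'read', 0), (1, 'write', -30)]   # debit 30
credit = [(2, 'read', 0), (2, 'write', +50)]   # credit 50

histories = list(interleavings(debit, credit))
consistent = [h for h in histories if run(h) == 100 - 30 + 50]
```

Only the two serial histories preserve the invariant; the four interleaved ones lose an update (ending at 70 or 150 instead of 120). This is exactly the kind of violation the temporal-logic specifications are meant to catch, and a relaxed correctness condition would accept more histories than strict serializability while still excluding the lost-update ones.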
The paper reviews the dynamic distribution of storage resources among users in data processing centers. The process of changing memory usage states is shown to be a Markov process. The paper develops a stochastic model of the distribution of memory and computing usage, together with probability density functions fitted to practical data; the parameters of the density functions are determined with the help of the stochastic model and the practical data. The model and the density function parameters are recalculated dynamically as the process runs: at the beginning of each time interval, the state to which the process is most likely to shift is forecast, and the adequacy of previous forecasts is monitored. Over time, the quality of the forecasts and their level of adequacy increase. The model is used in the virtualization of the storage resource usage process and ensures that storage resources are used without waste. The structure of the visualization base is given; the base makes it possible to monitor all stages of the process and to analyze its different aspects. Recommendations are given on the use of the obtained results.[...]
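The forecasting step can be sketched as follows, under my assumption (not stated in the abstract) that memory usage is discretized into a small number of states: estimate a transition matrix from the observed state sequence, then forecast the most probable next state at the start of each interval.

```python
def transition_matrix(seq, n_states):
    """Maximum-likelihood transition probabilities from an observed state sequence."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    P = []
    for row in counts:
        s = sum(row)
        # states never observed get a uniform row so forecasts stay well-defined
        P.append([c / s if s else 1.0 / n_states for c in row])
    return P

def forecast(P, state):
    """Most probable next state and its probability."""
    row = P[state]
    nxt = max(range(len(row)), key=row.__getitem__)
    return nxt, row[nxt]

# Observed usage states (e.g. 0 = low, 1 = medium, 2 = high load)
seq = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1]
P = transition_matrix(seq, 3)
nxt, p = forecast(P, 1)
```

Comparing each forecast against the state actually realized in the next interval, and re-estimating the matrix as new observations arrive, gives the adequacy monitoring and the improvement over time that the abstract describes.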
Personalization has become an essential feature of mobile services across domains, yet users have conflicting needs for personalized experience and privacy. This raises the question of how to maximize the user’s experience of personalized mobile services while preserving privacy. One possible solution is to give users control over their personal data by keeping the user model on their personal mobile devices. A user can then scrutinize the data and decide what to share with service providers according to her/his requirements. This client-side personalization approach shifts control over privacy to the users and involves them in the personalization process; transparency and user control can, in turn, increase the user’s trust. In this paper, we propose a solution aimed at scrutable client-side personalization that keeps the user in control of both privacy and the personalization process.[...]
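A minimal sketch of the client-side idea, with hypothetical field, provider and policy names: the full user model stays on the device, and only the fields the user has marked shareable for a given provider ever leave it.

```python
# The full user model lives on the device and is never sent out as-is.
user_model = {
    'name': 'Alice',
    'location_history': ['home', 'office', 'gym'],
    'music_genres': ['jazz', 'ambient'],
    'health_data': {'steps': 8200},
}

# Per-provider sharing policy, controlled (and scrutinizable) by the user.
policy = {
    'music_service': {'music_genres'},
    'fitness_service': {'health_data'},
}

def share_profile(model, policy, provider):
    """Return only the fields the user allows this provider to see."""
    allowed = policy.get(provider, set())
    return {k: v for k, v in model.items() if k in allowed}

shared = share_profile(user_model, policy, 'music_service')
```

Because both the model and the policy are plain, inspectable data on the device, the user can audit exactly what each provider receives, which is the scrutability property the abstract emphasizes.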
The selection of attributes becomes more important, but also more difficult, as the size and dimensionality of data sets grow, particularly in bioinformatics. Targeted Projection Pursuit (TPP) is a dimension reduction technique previously applied to visualising high-dimensional data; here it is applied to the problem of feature selection. The technique avoids searching the powerset of possible feature combinations by using perceptron learning and attraction-repulsion algorithms to find projections that separate the classes in the data. The technique is tested on a range of gene expression data sets. It is found that the classification generalisation performance of the features selected by TPP compares well with standard wrapper and filter approaches, the selection of features generalises more robustly than either, and its time efficiency scales to larger numbers of attributes better than standard searches.[...]
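A much-simplified stand-in for the TPP machinery (perceptron learning only, without the attraction-repulsion step): learn a one-dimensional projection w whose sign separates two classes, then rank features by |w_j| instead of searching the powerset of feature subsets. Data and parameter values below are illustrative.

```python
def perceptron_projection(X, y, epochs=50, lr=0.1):
    """Learn a projection w so that sign(w . x) matches the +1/-1 class label."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            s = sum(wj * xj for wj, xj in zip(w, xi))
            if yi * s <= 0:                        # misclassified: perceptron update
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
    return w

def select_features(w, k):
    """Keep the k features with the largest absolute projection weights."""
    return sorted(range(len(w)), key=lambda j: -abs(w[j]))[:k]

# Feature 0 separates the classes; features 1 and 2 are uninformative.
X = [[-1.0, 0.5, 0.5], [-0.9, 0.4, 0.6], [1.0, 0.5, 0.5], [1.1, 0.6, 0.4]]
y = [-1, -1, 1, 1]
w = perceptron_projection(X, y)
top = select_features(w, 1)
```

The cost of one perceptron pass is linear in the number of attributes, which is the intuition behind TPP's favourable scaling compared with subset-search wrappers.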
Classification of yeast data plays an important role in the formulation of medicines and of various chemical components. If the type of yeast can be recognized at an early stage from its initial characteristics, many technical procedures can be avoided in the preparation of chemical and medical products. In this paper, the performance of two classification methodologies, artificial neural networks and fuzzy rule bases, is compared for the classification of proteins. The objective of this work is to classify proteins into their respective cellular localization sites based on their amino acid sequences. The yeast dataset used for this purpose was taken from the UCI machine learning repository. The results show that, on the basis of average error, classification using an artificial neural network gives better predictions than the fuzzy rule base.[...]
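The comparison criterion, average error, is simple to state precisely. The sketch below compares two toy stand-ins on hypothetical one-attribute data, since the abstract does not give the trained models: a thresholded linear unit standing in for the neural network, and a single fuzzy-style rule (class 1 when membership in "high" exceeds membership in "low").

```python
def average_error(predict, X, y):
    """Fraction of misclassified samples."""
    return sum(predict(x) != t for x, t in zip(X, y)) / len(y)

# Hypothetical 1-D data: class 1 tends to have larger attribute values.
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9], [0.45]]
y = [0, 0, 0, 1, 1, 1, 1]

# "Neural" stand-in: a single thresholded linear unit with fixed weights.
neural = lambda x: int(2.5 * x[0] - 1.0 > 0)
# Fuzzy-style stand-in: class 1 iff membership in "high" (x) beats "low" (1 - x).
fuzzy = lambda x: int(x[0] > 1.0 - x[0])

err_neural = average_error(neural, X, y)
err_fuzzy = average_error(fuzzy, X, y)
```

On this contrived data the linear unit's learned threshold (0.4) happens to fit the boundary case better than the fixed fuzzy crossover at 0.5, mirroring the direction of the paper's finding without, of course, reproducing it.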
This paper presents an architecture for a multi-agent system in the healthcare domain. The architecture is generic and organized in multiple layers. One layer contains proactive, co-operative and intelligent agents such as a resource management agent, a query agent, a pattern detection agent and a patient management agent. Another layer is a collection of libraries that auto-generate code for the agents using soft computing techniques; at this stage, code for artificial neural networks and fuzzy logic has been developed and included in this layer, which the agents use to build neural network, fuzzy logic or hybrid (e.g. neuro-fuzzy) solutions. A third layer encompasses the knowledge base, metadata and other local databases. The multi-layer architecture is supported by personalized user interfaces for friendly interaction with its users. The framework is generic, flexible, and designed for a distributed environment like the Web; with minor modifications it can be deployed on a grid or cloud platform. The paper also discusses detailed design issues, suitable applications and future enhancements of the work.[...]
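A minimal structural sketch of the layering described above, with hypothetical class and method names; the real system's code-generation libraries and knowledge base are of course far richer than these placeholders.

```python
class SoftComputingLibrary:
    """Library layer: auto-generates model-building code for the agents."""
    def build(self, technique):
        # Placeholder: the real layer generates ANN / fuzzy / neuro-fuzzy code.
        templates = {'ann': 'neural network model',
                     'fuzzy': 'fuzzy rule base',
                     'neuro-fuzzy': 'hybrid neuro-fuzzy model'}
        return templates[technique]

class KnowledgeLayer:
    """Data layer: knowledge base, metadata and local databases."""
    def __init__(self):
        self.store = {}

class Agent:
    """Agent layer: a proactive agent that draws on the lower layers."""
    def __init__(self, name, library, knowledge):
        self.name, self.library, self.knowledge = name, library, knowledge
    def solve(self, technique):
        model = self.library.build(technique)
        self.knowledge.store[self.name] = model   # persist result for other agents
        return model

library, knowledge = SoftComputingLibrary(), KnowledgeLayer()
pattern_agent = Agent('pattern_detection', library, knowledge)
result = pattern_agent.solve('neuro-fuzzy')
```

The point of the separation is that agents never hard-code a soft computing technique: swapping the library layer (or deploying it remotely on a grid or cloud node) leaves the agent layer untouched.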
Vehicle saturation of transportation infrastructure causes traffic congestion, accidents, transportation delays and environmental pollution. The problem can be mitigated by proper management of traffic flow, but existing traffic management systems struggle to capture and process real-time road data from wide-area road networks. The main purpose of this study is to address that gap by implementing a mobile-phone-based Road Information Management System. The proposed system integrates three modules, for data collection, storage and information dissemination, which work together to enable real-time traffic control. Information disseminated by the system enables road users to adjust their travelling habits and allows traffic lights to control traffic according to the real-time situation on the road. In this paper the system was implemented and tested. The results indicate that traffic data can be tracked using Global Positioning System-enabled mobile phones and that, after processing the collected data, real-time traffic status can be displayed on a web interface. Road users can thus know the situation on the roads in advance and make proper travelling decisions. Further research should consider adapting the traffic light control system to act on the disseminated real-time traffic information.[...]
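The processing-and-dissemination step can be sketched as mapping average speeds computed from phone GPS samples to a displayed road status. The thresholds, units and function names below are illustrative assumptions, not the paper's.

```python
def average_speed(samples):
    """Average speed in km/h from (timestamp_s, position_km) GPS samples on a road."""
    (t0, p0), (t1, p1) = samples[0], samples[-1]
    return abs(p1 - p0) / ((t1 - t0) / 3600.0)

def traffic_status(speed_kmh):
    """Classify the road state for display on the web interface."""
    if speed_kmh < 10:
        return 'congested'
    if speed_kmh < 40:
        return 'slow'
    return 'free-flowing'

# A phone reports two samples one minute apart, having moved 0.1 km.
samples = [(0, 0.0), (60, 0.1)]
status = traffic_status(average_speed(samples))
```

In a full pipeline the collection module would aggregate such samples per road segment, the storage module would persist them, and the dissemination module would publish the resulting status (and, per the paper's future work, feed it to the traffic light controller).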
Cloud computing is the latest emerging trend in distributed computing, in which shared resources are provided to end-users on demand, bringing many advantages including data ubiquity, flexibility of access and high availability of resources. Such systems face many challenges, one of which is the task scheduling problem; in Cloud computing, task scheduling is NP-hard. Many heuristics have therefore been proposed, from low-level execution of tasks on multiple processors to high-level execution of tasks. In this paper, we propose a new algorithm based on Particle Swarm Optimization (PSO) to schedule tasks in the Cloud. The results demonstrate that the proposed algorithm performs better in terms of task execution time, waiting time and missed tasks than First Come First Served (FCFS), Shortest Process Next (SPN) and Highest Response Ratio Next (HRRN).[...]
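A minimal discrete-PSO sketch of the scheduling idea, under my simplifying assumptions rather than the paper's formulation: a particle's position encodes a task-to-VM assignment (continuous coordinates rounded to VM indices) and fitness is the makespan. Inertia and acceleration coefficients are conventional textbook values.

```python
import random

def makespan(assign, task_len, n_vms):
    """Completion time of the busiest VM under a task->VM assignment."""
    load = [0.0] * n_vms
    for t, vm in enumerate(assign):
        load[vm] += task_len[t]
    return max(load)

def pso_schedule(task_len, n_vms, n_particles=30, iters=60, seed=1):
    rng = random.Random(seed)
    n = len(task_len)
    clamp = lambda v: min(max(v, 0.0), n_vms - 1e-9)
    decode = lambda p: [int(v) for v in p]          # round position to VM index
    pos = [[rng.uniform(0, n_vms) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [makespan(decode(p), task_len, n_vms) for p in pos]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.4 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + 1.4 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] = clamp(pos[i][d] + vel[i][d])
            f = makespan(decode(pos[i]), task_len, n_vms)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return decode(gbest), gbest_f

tasks = [2.0, 2.0, 2.0, 2.0]                 # task lengths (arbitrary time units)
schedule, best_makespan = pso_schedule(tasks, n_vms=2)
```

For four equal tasks on two VMs the optimal makespan is 4.0 (two tasks per VM); the swarm finds a balanced assignment quickly on an instance this small. Criteria such as waiting time or missed deadlines would simply replace `makespan` as the fitness function.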
Event management of large international events is attracting interest from researchers, not least due to the potential use of technology to provide support throughout the different stages of an event. Some events, such as major sports or religious events, can involve millions of people from different countries, and require active management to control access (many popular events are oversubscribed) and to reduce risks for the participants, local communities and the environment. This paper explores the context of one large event, the Hajj pilgrimage in Saudi Arabia, which involves up to three million pilgrims, many of whom are international. The paper presents a novel identification system, the Identification Wristband Hajj Permission (IWHP), which uses encryption technologies and biometric attributes to identify pilgrims whilst remaining sensitive to the context of the Hajj. The suggested solution has many attributes that could support its use in other large-crowd events.[...]
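The abstract does not detail the IWHP protocol, so the example below is only a hedged sketch of the general idea of binding identity to biometrics cryptographically: a pilgrim ID and a hash of a biometric template are combined into a wristband token with an HMAC, so a checkpoint holding the issuing key can verify the wristband against a freshly captured biometric without storing raw biometric data. The scheme and all names are illustrative, not the IWHP design.

```python
import hashlib
import hmac

def issue_token(pilgrim_id, biometric_template, key):
    """Wristband token binding the pilgrim's ID to a hash of their biometric."""
    bio_hash = hashlib.sha256(biometric_template).digest()
    msg = pilgrim_id.encode() + b'|' + bio_hash
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_token(pilgrim_id, biometric_template, token, key):
    """Checkpoint-side verification against a freshly captured biometric."""
    expected = issue_token(pilgrim_id, biometric_template, key)
    return hmac.compare_digest(expected, token)

key = b'issuing-authority-secret'
token = issue_token('P-102934', b'fingerprint-template-bytes', key)
ok = verify_token('P-102934', b'fingerprint-template-bytes', token, key)
bad = verify_token('P-102934', b'different-template', token, key)
```

Using `hmac.compare_digest` for the comparison avoids timing side channels; a real deployment would also need template-matching tolerance, since repeated biometric captures are never byte-identical.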