IJITCS Vol. 7, No. 3, Feb. 2015
Cover page and Table of Contents: PDF (size: 243KB)
Distributed systems consist of several management sites with different resource-sharing levels: resources can be shared among inner-site processes at the first level and among outer-site processes at the second. A global coordinator must exist to coordinate access to resources shared across sites, and additional coordinators must manage access to each site's internal shared resources, so applying an appropriate coordinator election algorithm at each level is crucial for an efficient system. In this paper a hierarchical distributed election algorithm is proposed which eliminates the single point of failure of the election launcher. Traffic is also applied to the network at different times, and the number of election messages is greatly reduced, which improves efficiency especially in high-traffic networks. A standby arrangement between each coordinator and its first alternative is introduced to reduce the waiting time for processes that want to communicate with the coordinator.
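The highest-priority-wins idea behind most coordinator elections can be sketched as follows. This is a minimal Bully-style illustration with invented process IDs and liveness check, not the paper's hierarchical algorithm:

```python
# Minimal Bully-style election sketch: the live process with the highest ID
# becomes coordinator. The IDs and the alive() predicate are illustrative.

def elect_coordinator(process_ids, alive):
    """Return the highest-ID live process, or None if none are alive."""
    live = [pid for pid in process_ids if alive(pid)]
    return max(live) if live else None

# Example: the old coordinator (ID 7) has failed, so 5 takes over.
print(elect_coordinator([1, 3, 5, 7], lambda pid: pid != 7))  # -> 5
```

A hierarchical scheme would run one such election inside each site and another among the site-level winners.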
The number of websites on the Internet is growing rapidly. Consequently, a diversity of information is available on the Web; however, its content may be neither valuable nor trustworthy, which raises the problem of the credibility of the information on these websites. This paper investigates the aspects affecting website credibility and then uses them, along with the dominant meaning of the query, to improve information retrieval capabilities and to manage content effectively. It presents the design and development of a credibility mechanism that queries a Web search engine and then ranks sites according to their reliability. Our experiments show that credibility terms on websites can affect the ranking of the Web search engine and greatly improve retrieval effectiveness.
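One simple way to fold a credibility score into a search engine's ranking is to weight each result's relevance by a per-site credibility estimate. This is only an illustrative sketch; the scores and the 0.5 fallback for unknown sites are assumptions, not the paper's mechanism:

```python
def rerank_by_credibility(results, credibility):
    """results: list of (url, relevance). Weight relevance by a credibility
    score and return URLs ordered by the combined score; unknown sites
    get a neutral 0.5 (an assumed default)."""
    scored = [(url, rel * credibility.get(url, 0.5)) for url, rel in results]
    return [url for url, _ in sorted(scored, key=lambda t: t[1], reverse=True)]

# A highly relevant but low-credibility site drops below a trusted one.
print(rerank_by_credibility([("a.com", 0.9), ("b.com", 0.8)],
                            {"a.com": 0.2, "b.com": 0.9}))  # -> ['b.com', 'a.com']
```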
Testability is a property of software introduced to forecast the effort needed to test a program. Software quality is the most important factor in software development and depends on many quality attributes; the absence of testability leads to higher maintenance and testing effort. In this paper fuzzy logic is used to ascertain the relationship between the factors that affect software testability, presenting an application of fuzzy logic to the assessment of software testability. A new model is proposed using a fuzzy inference system for tuning the performance of software testability. Aspect-oriented metrics are taken, i.e. Separation of Concern (SoC), cohesion, size and coupling. These metrics are closely related to the factors Controllability, Observability, Built-in Test Capability, Understandability and Complexity, which are independent of each other and are used for assessing software testability. A Triangular Membership Function (TriMF), defined in the Mamdani Fuzzy Inference System in MATLAB, is applied to these factors. We define and evaluate the combination of factors used for the assessment of the testability of aspect-oriented software.
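The triangular membership function has a standard closed form; a minimal Python sketch of what MATLAB's `trimf` computes, with assumed example parameters:

```python
def trimf(x, a, b, c):
    """Triangular membership function: rises linearly from a to a peak
    of 1.0 at b, then falls back to zero at c (assumes a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Membership of a normalized Complexity of 0.25 in a set peaking at 0.5.
print(trimf(0.25, 0.0, 0.5, 1.0))  # -> 0.5
```

A Mamdani system would combine several such memberships through fuzzy rules before defuzzifying into a testability score.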
Facial expression is a key component in evaluating a person's feelings, intentions and characteristics, and has the potential to play an equally important role in human-computer interaction. The aim of this paper is to bring together two techniques, Artificial Neural Networks (ANN) and K-Nearest Neighbor (K-NN), for facial expression classification. We propose the ANN_KNN model, which combines an ANN with a K-NN classifier. ICA is used to extract facial features, and the resulting feature ratios are the input of the K-NN classifier. We apply the ANN_KNN model to the classification of seven basic facial expressions (anger, fear, surprise, sadness, happiness, disgust and neutral) on the JAFFE database. A classification precision of 92.38% shows the feasibility of the proposed model.
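The K-NN stage amounts to a majority vote over the nearest training feature vectors. The toy two-dimensional features and labels below are invented for illustration; the paper's actual inputs are ICA-derived feature ratios:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label). Return the majority label
    among the k training vectors nearest to query (Euclidean distance)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), "neutral"), ((0, 1), "neutral"),
         ((5, 5), "happy"), ((5, 6), "happy")]
print(knn_classify(train, (0, 0.5)))  # -> neutral
```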
This paper presents an optimized speech compression algorithm using the discrete wavelet transform, and its real-time implementation on a fixed-point digital signal processor (DSP). The optimized algorithm ensures low complexity, a low bit rate and high speech coding efficiency by adding a voice activity detection (VAD) module before the discrete wavelet transform is applied; the VAD module avoids computing the discrete wavelet coefficients during inactive portions of the voice signal. In addition, a real-time implementation of the optimized algorithm is performed on a fixed-point processor. The optimized and original algorithms are evaluated and compared in terms of CPU time (s), cycle count (MCPS), memory consumption (KB), compression ratio (CR), signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR) and normalized root mean square error (NRMSE).
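The core of an energy-based VAD is to pass on only frames whose short-term energy exceeds a threshold, so the wavelet transform is never computed for silence. A minimal sketch with an assumed frame length and threshold (the paper's VAD may use a different decision rule):

```python
def frame_energy(frame):
    """Mean squared amplitude of one frame."""
    return sum(s * s for s in frame) / len(frame)

def active_frames(signal, frame_len=160, threshold=0.01):
    """Yield only the frames worth transforming; the DWT and
    coefficient coding are skipped for inactive (silent) frames."""
    for i in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[i:i + frame_len]
        if frame_energy(frame) > threshold:
            yield frame

silence, tone = [0.0] * 4, [0.5, -0.5, 0.5, -0.5]
print(len(list(active_frames(silence + tone, frame_len=4))))  # -> 1
```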
The problem of extracting multiple speech signals from a single mixed recording, referred to as single-channel speech separation, has received considerable attention in recent years, and many model-based techniques have been proposed. A major shortcoming of most of these systems is their inability to handle signals combined at different energy levels, because they assume that the data used in the test and training phases have equal energy levels, an assumption that hardly ever holds in reality. Our proposed method, based on the MIXMAX approximation and sub-section vector quantization (VQ), is an attempt to overcome this limitation. The proposed technique is compared with one in which a gain-adapted minimum mean square error estimator is derived to estimate the separated signals. Experiments show that our method outperforms it in terms of SNR and also reduces computational complexity.
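At its base, vector quantization reduces to a nearest-codeword search. A toy sketch with an invented two-entry codebook; the paper's sub-section VQ and MIXMAX machinery are, of course, far more involved:

```python
import math

def quantize(vector, codebook):
    """Return the index of the codeword nearest to vector (Euclidean)."""
    return min(range(len(codebook)),
               key=lambda i: math.dist(codebook[i], vector))

codebook = [(0.0, 0.0), (1.0, 1.0)]
print(quantize((0.9, 1.1), codebook))  # -> 1
```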
As the sizes of databases grow exponentially, the optimal design and management of traditional database management systems, as well as data mining processing techniques, are of significant importance, and several approaches are being investigated in this direction. In this paper a novel approach to maintaining metadata based on rough sets is proposed, and it is observed that with marginal changes in buffer sizes faster query processing can be achieved.
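Rough-set machinery rests on lower and upper approximations of a target set under an equivalence partition; a minimal sketch over an invented toy universe (the paper's metadata application builds on these primitives):

```python
def approximations(partition, target):
    """partition: disjoint blocks (sets) covering the universe.
    Lower approximation = union of blocks fully inside target;
    upper approximation = union of blocks intersecting target."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:
            lower |= block
        if block & target:
            upper |= block
    return lower, upper

blocks = [{1, 2}, {3}, {4}]
print(approximations(blocks, {1, 3}))  # -> ({3}, {1, 2, 3})
```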
Software testing is an important activity in the software development life cycle. Testing consists of running a program on a set of test cases and comparing the observed results with the expected results. Automated testing encompasses all automation efforts across the software testing life cycle, with a focus on automating system testing and integration. Automated testing brings plenty of benefits; speeding up test runs, increasing the accuracy of the testing process and minimizing costs in different parts of the system are three of its superior features. Maintaining and developing test automation tools is not as easy as traditional testing, due to unexplored issues that need further examination. Automated test patterns have been presented to mitigate some of the problems of automated testing and to improve efficiency. This paper investigates automated testing and automated test patterns, and demonstrates the behaviour of applying an automated test pattern to a complex object. The results show that when choosing an automated pattern to run, the test structure, especially the complexity level of the test object, should be considered; otherwise inconsistency may occur.
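The "run test cases and compare observed with expected results" loop can be sketched in a few lines; the runner and its case format are illustrative, not a specific tool from the paper:

```python
def run_cases(func, cases):
    """cases: list of (args, expected). Run func on each case, compare
    with the expectation, and return (passed, failed) counts."""
    passed = sum(1 for args, expected in cases if func(*args) == expected)
    return passed, len(cases) - passed

cases = [((2, 3), 5), ((0, 0), 0), ((1, 1), 3)]  # last case fails on purpose
print(run_cases(lambda a, b: a + b, cases))  # -> (2, 1)
```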
Composition of services provides value-added services by combining existing ones and is essential to meet varying user requests. The need for on-demand, automated, on-the-fly and failure-resilient service composition has led to various dynamic and adaptive service composition approaches. An overview of several existing composition approaches is provided, and their limitations are identified and presented as research opportunities. It has been found that all these approaches respond rigidly to a changing services environment. This gap is bridged by proposing a Goal-Directed Orchestration approach, which employs an orchestration engine to provide flexibility in responding to changes in a dynamic services environment. To illustrate how our approach can work better than the existing ones, we discuss a usage scenario in the travel trip planning domain. The proposed model is compared with existing models on a set of defined features.
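At its simplest, a service composition chains services so each output feeds the next. A minimal pipeline sketch with invented toy services; a real orchestration engine adds goal checking, service selection and failure handling on top of this:

```python
def compose(services):
    """Chain a list of callables into one composite service."""
    def composite(request):
        for service in services:
            request = service(request)
        return request
    return composite

# Toy travel pipeline: book a flight, then attach a hotel.
trip = compose([lambda r: {**r, "flight": "booked"},
                lambda r: {**r, "hotel": "booked"}])
print(trip({"city": "Paris"}))
```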
Adverse drug reactions (ADRs) are a widespread public health concern and one of the most common reasons for withdrawing drugs from the market. Prescription event monitoring (PEM) is an important approach to detecting adverse drug reactions. The main problem with this method is how to automatically extract the medical events or side effects from the high-throughput medical event data collected in day-to-day clinical practice. In this study we propose the novel concept of a feature matrix to detect ADRs. The feature matrix, extracted from big medical data in The Health Improvement Network (THIN) database, characterizes the medical events of patients who take a drug and provides a foundation for handling this irregular, large-scale medical data. Feature selection methods are then applied to the feature matrix to detect the significant features, and finally the ADRs are located based on those features. Experiments are carried out on three drugs: Atorvastatin, Alendronate and Metoclopramide. Major side effects of each drug are detected, and better performance is achieved compared to other computerized methods. As the detected ADRs are based on computerized methods, further investigation is needed.
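A feature-matrix screen can be sketched as ranking event columns by how much their mean frequency differs between on-drug and baseline rows. The toy matrix below is invented, and this difference-of-means score is only a stand-in for the paper's actual feature selection methods:

```python
def top_features(matrix, labels, k=1):
    """matrix: rows = patient records, columns = medical-event counts.
    labels: 1 for on-drug rows, 0 for baseline. Rank columns by the
    absolute difference of group means; return the top-k column indices."""
    def mean(col, group):
        vals = [row[col] for row, lbl in zip(matrix, labels) if lbl == group]
        return sum(vals) / len(vals)
    score = {c: abs(mean(c, 1) - mean(c, 0)) for c in range(len(matrix[0]))}
    return sorted(score, key=score.get, reverse=True)[:k]

events = [[5, 0, 1], [4, 0, 2],   # on-drug rows: event 0 spikes
          [0, 0, 1], [1, 0, 2]]   # baseline rows
print(top_features(events, [1, 1, 0, 0]))  # -> [0]
```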