IJITCS Vol. 4, No. 10, Sep. 2012
Cover page and Table of Contents: PDF (size: 380KB)
Multiple Input Multiple Output (MIMO) systems have gained much importance in the recent past because of their high capacity gain over single-antenna systems. In this article, the capacity of MIMO spatial channel systems with a modified precoder and decoder is analyzed when the channel state information (CSI) is partial. Due to the delay in acquiring transmitted information at the receiver end, a time-selective fading wireless channel often induces incomplete or partial CSI. A dynamic CSI model consisting of the channel mean and covariance has also been implemented, which yields channel estimates and an error covariance; these are then applied to the modified precoder and decoder. Both parameters indicate the CSI quality, since they are functions of the temporal correlation factor, and on this basis the model covers the range from perfect to statistical CSI, whether partially or fully blind. It is found that, in the case of partial and imperfect CSI, the capacity depends on the statistical properties of the error in the CSI, which has been manipulated according to the precoder and decoder conditions.
Based on knowledge of the statistical distribution of the deviations in the CSI, a new approach that maximizes the capacity of the spatial channel model with a modified precoder and decoder has been attempted. The interference is then reduced iteratively by employing an iterative channel estimation and data detection approach, in which multiuser/MIMO channel estimation and symbol detection are improved by utilizing the symbols detected in the previous iteration.
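The baseline quantity studied in this abstract can be illustrated with the standard equal-power MIMO capacity formula (a minimal sketch assuming no CSI at the transmitter; the paper's modified precoder/decoder and partial-CSI error terms are not reproduced here, and the channel and SNR values below are illustrative):

```python
import numpy as np

def mimo_capacity(H, snr_linear):
    """Capacity (bits/s/Hz) of channel H with equal power split across
    the Nt transmit antennas: C = log2 det(I + (snr/Nt) * H H^H)."""
    nr, nt = H.shape
    gram = H @ H.conj().T
    return float(np.log2(np.linalg.det(np.eye(nr) + (snr_linear / nt) * gram).real))

# Two ideal parallel sub-channels at linear SNR = 10:
# each contributes log2(1 + 10/2) bits/s/Hz.
print(mimo_capacity(np.eye(2), 10.0))  # ≈ 5.17
```

With imperfect CSI, the effective channel in this formula is replaced by an estimate, which is why the capacity depends on the statistics of the estimation error.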
Location-based personalized recommendation has been introduced to provide a mobile user with interesting information by distinguishing his preference and location. In most cases, a mobile user does not provide all attributes of his preference or query. In extreme cases, especially when the user is moving, he may not provide any preference or query at all. Meanwhile, the recommendation system database also does not contain all the attributes that can express what the user needs. In this paper, we design an effective location-based recommendation system that provides the most likely interesting places to a user while he is moving, according to his implicit preference and physical location, without the user providing his preference or query explicitly. We propose two circle concepts: a physical position circle that represents the spatial area around the user, and a virtual preference circle that is a non-spatial area related to the user's interests. Skyline-query places in the physical position circle that also match the mobile user's implicit preference in the virtual preference circle are recommended. The user's implicit preference is estimated under a language modeling framework according to the user's historical visiting behavior. Experiments show that our method is effective in recommending interesting places to mobile users. The main contribution of the paper comes from combining skyline queries and information retrieval to perform implicit location-based personalized recommendation without the user providing an explicit preference or query.
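The skyline query at the heart of this approach can be sketched with a naive Pareto-dominance filter (a generic illustration; the attribute names and values below are hypothetical, and the paper's actual query processing over the two circles is not reproduced):

```python
def skyline(points):
    """Skyline (Pareto-optimal) points, smaller-is-better in every
    dimension (e.g. distance to the user, price)."""
    def dominates(a, b):
        # a dominates b: no worse everywhere, strictly better somewhere.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (distance_km, price) pairs for candidate places
# inside the physical position circle.
places = [(1.0, 30), (2.0, 10), (3.0, 40), (1.5, 25)]
print(skyline(places))  # [(1.0, 30), (2.0, 10), (1.5, 25)]
```

Here (3.0, 40) is dropped because (1.0, 30) is both closer and cheaper; the surviving skyline places would then be ranked against the user's implicit preference.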
Feature contribution refers to which features actually participate most in grouping data patterns, maximizing the system's ability to classify object instances. In this paper, a modified K-means fast learning artificial neural network (K-FLANN) is used to cluster multidimensional data. The operation of the neural network depends on two parameters, namely tolerance (δ) and vigilance (ρ). By setting the vigilance parameter, it is possible to extract significant attributes from an array of input attributes and thus determine the principal features that contribute to a particular output. Exhaustive search and heuristic search techniques are applied to determine the features that contribute to clustering the data. Experiments are conducted to test the network's ability to extract the important factors in the presented test data, and comparisons are made between the two search methods.
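The role of the two parameters can be illustrated with a simplified sketch of a vigilance-gated match test (an assumption about the general FLANN-style criterion, not the paper's exact K-FLANN update rules; all numeric values are illustrative):

```python
def matches(centroid, x, delta, rho):
    # Count attributes of x that lie within tolerance delta of the
    # cluster centroid; the pattern matches the cluster when that
    # fraction reaches the vigilance rho.
    within = sum(abs(c - v) <= delta for c, v in zip(centroid, x))
    return within / len(x) >= rho

# 2 of 3 attributes fall within delta, so the outcome flips with rho.
print(matches([1.0, 2.0, 3.0], [1.1, 2.5, 3.05], delta=0.2, rho=0.66))  # True
print(matches([1.0, 2.0, 3.0], [1.1, 2.5, 3.05], delta=0.2, rho=0.9))   # False
```

Raising ρ forces more attributes to agree before a pattern joins a cluster, which is what makes it usable for isolating the attributes that matter.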
Predicting human behavior from text usage on social networking sites can be a challenging area of interest for a particular community. Text mining, a major interest within data mining, has vast applications in various fields. Using the proposed framework, clients can assess an individual's behavior based on the person's textual interaction with other people. In this paper, a framework is proposed for predicting human behavior in three phases: text extraction, text cleaning, and text analysis. For text cleaning, all stop words are removed, and the text is then utilized for further processing. The terms in the text are then clustered based on semantic similarity and associated with the respective physiological parameters that identify a human behavior. This application is best suited to the fields of criminal science, medical science, human resources, and political science, and even for matrimonial purposes. The proposed framework is applied to some world-famous celebrities, and the results are quite encouraging.
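The text-cleaning phase can be sketched as simple stop-word filtering (a minimal illustration; the stop-word list below is a tiny hypothetical sample, not the one used in the paper):

```python
STOP_WORDS = {"the", "a", "an", "is", "to", "and", "of"}  # tiny illustrative list

def clean(text):
    # Text-cleaning phase: lowercase, tokenize on whitespace,
    # drop stop words; the remaining terms go on to clustering.
    return [w for w in text.lower().split() if w not in STOP_WORDS]

print(clean("The plan is to meet and talk"))  # ['plan', 'meet', 'talk']
```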
Imaging in the presence of subject motion has been an ongoing challenge for magnetic resonance imaging (MRI). In this paper, some of the important issues regarding the acquisition and reconstruction of anatomical and DTI imaging of moving subjects are addressed, including methods to achieve high-resolution, high signal-to-noise-ratio (SNR) volume data. Excellent high-resolution fetal brain 3D Apparent Diffusion Coefficient maps have been achieved for the first time, as well as promising Fractional Anisotropy maps. Growth curves for the normally developing fetal brain have been devised by quantifying cerebral and cerebellar volumes as well as some one-dimensional measurements. A Verhulst model is used to describe these growth curves, and this approach has achieved a correlation of over 0.99 between the fitted model and the actual data.
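The Verhulst (logistic) curve used for the growth models has the standard saturating form sketched below (the parameter values are illustrative placeholders, not the fitted fetal-brain values from the paper):

```python
import math

def verhulst(t, K, r, t0):
    # Logistic growth: the volume saturates at carrying capacity K;
    # r is the growth rate and t0 the inflection point.
    return K / (1.0 + math.exp(-r * (t - t0)))

# At the inflection point the curve sits at half its plateau.
print(verhulst(28.0, K=400.0, r=0.2, t0=28.0))  # 200.0
```

Fitting K, r, and t0 to measured volumes at known gestational ages yields the growth curves the abstract describes.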
Present-day applications require various kinds of images and pictures as sources of information for interpretation and analysis. Whenever an image is converted from one form to another, such as by digitizing, scanning, transmitting, or storing, some form of degradation occurs at the output. Hence, the output image has to undergo a process called image enhancement, which consists of a collection of techniques that seek to improve the visual appearance of an image. Image enhancement basically improves the perception of information in images for human viewers and provides 'better' input for other automated image processing techniques. This paper presents a new approach to image enhancement with a fuzzy inference system. Fuzzy techniques can manage vagueness and ambiguity efficiently (an image can be represented as a fuzzy set). Fuzzy logic is a powerful tool for representing and processing human knowledge in the form of fuzzy if-then rules. Compared to other filtering techniques, the fuzzy filter gives better performance and is able to represent knowledge in a comprehensible way.
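The fuzzy view of an image can be sketched with the textbook contrast-intensification (INT) operator (a generic illustration of fuzzification, rule application, and defuzzification, not the rule base proposed in the paper):

```python
def fuzzy_enhance(pixels, g_min=0, g_max=255):
    out = []
    for g in pixels:
        mu = (g - g_min) / (g_max - g_min)              # fuzzification
        if mu <= 0.5:                                   # INT operator:
            mu = 2 * mu * mu                            # darken dark pixels,
        else:                                           # brighten bright ones
            mu = 1 - 2 * (1 - mu) ** 2
        out.append(round(g_min + mu * (g_max - g_min)))  # defuzzification
    return out

print(fuzzy_enhance([0, 64, 128, 192, 255]))  # [0, 32, 128, 224, 255]
```

Mid-gray stays put while darker and brighter pixels are pushed apart, which is the contrast-enhancing behavior a fuzzy filter exploits.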
Recently, demand for "Green Computing", an environmentally responsible way of reducing power consumption that also involves environmental issues such as waste management and greenhouse gases, has been increasing explosively. We lay great emphasis on the need to minimize power consumption and heat dissipation by computer systems, as well as the need to change the current power scheme options in operating systems (OS). In this paper, we provide a comprehensive technical review of the existing, though challenging, work on minimizing the power consumption of computer systems through various approaches. We emphasize the software approach based on dynamic power management, as it is used by most OSs in their power scheme configurations, seeking a better understanding of power management schemes, current issues, and future directions in this field. We review various approaches and techniques, including hardware, software, central processing unit (CPU) usage, and algorithmic approaches to power economy. On the basis of our analysis and observations, we find that this area still requires a great deal of work and needs to focus on new intelligent approaches so that human inactivity periods of computer systems can be reduced intelligently.
Process scheduling is one of the most important tasks of an operating system. One of the most common scheduling algorithms used by most operating systems is the Round Robin method, in which the ready processes waiting in the ready queue seize the processor circularly, each for a short period of time known as the quantum (or time slice). In this paper, a non-linear programming mathematical model is developed to determine the optimum value of the time quantum in order to minimize the average waiting time of the processes. The model is implemented and solved with Lingo 8.0 software on four problems selected from the literature.
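The objective the model minimizes can be illustrated by simulating Round Robin for a given quantum (a simple sketch assuming all processes arrive at time 0 and no context-switch overhead; the burst times below are illustrative, not the paper's test problems):

```python
from collections import deque

def rr_avg_waiting_time(bursts, quantum):
    # Round Robin simulation; waiting time = finish time - burst time.
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    t = 0
    finish = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)      # not finished: back to the ready queue
        else:
            finish[i] = t
    return sum(finish[i] - bursts[i] for i in range(len(bursts))) / len(bursts)

# Average waiting time is a function of the quantum, which is the
# quantity the non-linear programming model optimizes.
print(rr_avg_waiting_time([5, 3, 8], quantum=2))  # 7.0
```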
Cloud computing is a booming area that has been emerging as a commercial reality in the information technology domain. It represents a supplement, consumption, and delivery model for IT services that are based on the internet on a pay-per-usage basis. The scheduling of cloud services to consumers by service providers influences the cost benefit of this computing paradigm. In such a scenario, tasks should be scheduled efficiently so that execution cost and time are reduced. In this paper, we propose a meta-heuristic based scheduling approach that minimizes both execution time and execution cost. An improved genetic algorithm is developed by merging two existing scheduling algorithms for scheduling tasks, taking into consideration their computational complexity and the computing capacity of the processing elements. Experimental results show that, under heavy loads, the proposed algorithm exhibits good performance.
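The general shape of such a scheduler can be sketched with a minimal genetic algorithm that assigns tasks to processing elements of different capacities (an assumption-laden illustration of the generic GA idea with makespan as the only objective, not the paper's improved/merged algorithm or its cost model; all task lengths and speeds are hypothetical):

```python
import random

def makespan(assign, task_len, speed):
    # Finish time of the busiest processing element under an
    # assignment of each task to one PE (illustrative cost model).
    loads = [0.0] * len(speed)
    for t, pe in enumerate(assign):
        loads[pe] += task_len[t] / speed[pe]
    return max(loads)

def ga_schedule(task_len, speed, pop=30, gens=50, seed=0):
    rng = random.Random(seed)
    n, m = len(task_len), len(speed)

    def fitness(s):
        return makespan(s, task_len, speed)

    people = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    best = min(people, key=fitness)
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            p1 = min(rng.sample(people, 2), key=fitness)  # tournament selection
            p2 = min(rng.sample(people, 2), key=fitness)
            cut = rng.randrange(1, n)                      # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                         # point mutation
                child[rng.randrange(n)] = rng.randrange(m)
            nxt.append(child)
        people = nxt
        best = min(people + [best], key=fitness)           # keep best ever seen
    return best

task_len, speed = [4, 2, 6, 3, 5], [1.0, 2.0]
best = ga_schedule(task_len, speed)
print(makespan(best, task_len, speed))
```

A real cloud scheduler would score chromosomes on both execution time and cost, but the chromosome encoding and genetic operators are the same.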
Many language-sensitive tools for detecting plagiarism in natural language documents have been developed, particularly for English. Language-independent tools exist as well, but are considered restrictive, as they usually do not take specific language features into account. Detecting plagiarism in Arabic documents is a particularly challenging task because of the complex linguistic structure of Arabic. In this paper, we present a plagiarism detection tool for comparing Arabic documents to identify potential similarities. The tool is based on a new comparison algorithm that uses heuristics to compare suspect documents at different hierarchical levels in order to avoid unnecessary comparisons. We evaluate its performance in terms of precision and recall on a large data set of Arabic documents, and show its capability in identifying direct and sophisticated copying, such as sentence reordering and synonym substitution. We also demonstrate its advantages over other plagiarism detection tools, including Turnitin, the well-known language-independent tool.
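One common building block of such document comparison is n-gram overlap, sketched below as Jaccard similarity over word trigrams (a generic illustration using English text; it is not the hierarchical heuristic algorithm described in the abstract, and it would miss the synonym substitutions the paper's tool detects):

```python
def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(doc_a, doc_b, n=3):
    # Jaccard similarity over word n-grams: |intersection| / |union|.
    a, b = ngrams(doc_a, n), ngrams(doc_b, n)
    return len(a & b) / len(a | b) if a | b else 0.0

s = similarity("the quick brown fox jumps over the dog",
               "the quick brown fox leaps over the dog")
print(round(s, 3))  # 0.333
```

A single substituted word breaks every trigram that spans it, which is why sophisticated copying requires the deeper, language-aware comparison the paper proposes.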