IJITCS Vol. 10, No. 3, Mar. 2018
Cover page and Table of Contents: PDF (size: 318KB)
This study presents the results of a content analysis of 2476 tweets posted by Hillary Clinton and Donald Trump during the 2016 presidential election following their official nomination by their respective political parties. The study sought to determine whether the candidates used a focused campaign strategy in their tweets, and whether the tweets revealed priorities based on their focus and the time of day they were posted. The results show that Clinton posted more tweets and ran a more focused campaign than Trump during the same time frame.[...] Read more.
In this paper, the accuracy of entropy-based thresholding approaches in a brain tumor detection framework is investigated. Entropies are information-gain measures that have been used for image thresholding across various applications and image modalities. The accuracy of the existing entropies for image thresholding has been studied in general domains (e.g., natural images) but has not been compared thoroughly. Thus, a framework for brain tumor segmentation is proposed with image thresholding as its core process, in order to evaluate the accuracy of the entropies. Five entropies, namely Renyi, Maximum, Minimum, Tsallis, and Kapur, are evaluated. Moreover, the aggregation of entropies was implemented and evaluated. The results show that the maximum entropy is the best for brain tumor detection. It was also shown that aggregating the entropies' outputs does not enhance the result; however, it acts as an automatic selection of the best result and produces the results with the highest accuracy.[...] Read more.
Expert systems bring facts and valuable experience together and make deductions possible. Relevant knowledge and experience in these structures may be expressed as a set of rules. Learning remains a problem for expert systems: they cannot add new rules and information automatically by themselves, so rules are created by human experts and added to the system. Classification datasets are collections of data commonly used in machine learning that contain and classify previously obtained experiences. In this study, rules were obtained using the PART, NNge, and Prism rule classifier algorithms, and a knowledge base of an expert system was systematically created and enriched. The enrichment and rule deduction process needs careful and sensitive attention, and a combined methodology was developed for this sensitive process. In this context, studies were conducted on five widely used datasets. The aim was to reduce redundant, conflicting, subsumed, and circular rules in order to create a consistent and complete knowledge base. In this way, a methodology was developed to build a more powerful, richer, and higher-quality knowledge base.[...] Read more.
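The subsumed-rule reduction mentioned in the abstract can be sketched as follows, assuming rules are represented as (condition-dictionary, class) pairs; the representation, helper names, and example attribute values are hypothetical illustrations, not taken from the paper:

```python
def subsumes(a, b):
    """Rule a subsumes rule b if a is more general (a's conditions are a
    subset of b's) and both rules predict the same class."""
    cond_a, cls_a = a
    cond_b, cls_b = b
    return cls_a == cls_b and all(cond_b.get(k) == v for k, v in cond_a.items())

def prune_subsumed(rules):
    """Drop every rule that is subsumed by some other rule in the set,
    keeping only the most general, non-redundant rules."""
    kept = []
    for i, r in enumerate(rules):
        if not any(i != j and subsumes(other, r) for j, other in enumerate(rules)):
            kept.append(r)
    return kept
```

For example, the specific rule "outlook=sunny AND humidity=high → no" is redundant next to the more general "outlook=sunny → no" and would be pruned, shrinking the knowledge base without changing its conclusions.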
Muscles in the human body occur in pairs. Musculoskeletal imbalances, caused by repetitive use of one muscle in such a pair and by incorrect posture taken on a regular basis, lead to severe injuries such as neuromusculoskeletal problems, hamstring strains, lower back tightness, repetitive stress injuries, altered movement patterns, postural dysfunctions, and trapped nerves, and both neurological and physical performance are severely affected as time progresses. In the clinical domain, muscle imbalances are determined by gait and posture analysis, movement analysis, joint range-of-motion analysis, and muscle length analysis, all of which require expert knowledge and experience. X-rays and CT scans in the medical domain also require domain experts to interpret the results of a checkup. Kinect is a motion capture device that can track the human skeleton, its joints, and body movements within its sensory range. The purpose of this research is to provide a mechanism for identifying muscle imbalances based on gait analysis tracked via the Kinect motion capture device, by measuring deviation from a healthy person's gait patterns. Primarily, the outcome of this study will be a self-identification method for human skeletal imbalance.[...] Read more.
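One standard way to quantify the left/right gait deviation that such a system would flag is a symmetry index over a gait parameter (for instance, mean knee flexion angle from Kinect joint tracking). This is a generic gait-analysis measure offered as an illustrative sketch, not the paper's own formula:

```python
import numpy as np

def symmetry_index(left, right):
    """Symmetry index between left- and right-side values of a gait
    parameter (e.g., knee flexion angle per stride): 0 means perfect
    symmetry; larger percentages indicate greater imbalance."""
    l, r = np.mean(left), np.mean(right)
    return abs(l - r) / (0.5 * (l + r)) * 100.0
```

A healthy gait yields an index near zero, while a persistently large value for one joint pair points at the side and muscle group to examine.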
Companies expend billions of dollars on software development and maintenance, yet many projects still end in failure, causing heavy financial loss. A major reason is inefficient effort estimation techniques that are not well suited to current development methods. The continuous change in software development technology makes effort estimation even more challenging. To date, no estimation method has been found foolproof in accurately pre-computing the time, money, effort (man-hours), and other resources required to successfully complete a project, resulting in either over-estimated or under-estimated budgets. Here a machine-learning COCOMO is proposed: a novel non-algorithmic approach to effort estimation. This estimation technique performs well both within and beyond its pre-specified domains. Development methods have undergone revolutions, but estimation techniques have not been modified to keep pace with modern development practices; the need to train models on updated development methods is therefore met by discovering patterns and associations among domain-specific data sets via neural networks, while carrying over the desired COCOMO features. This paper estimates effort by training the proposed neural network on an already published data set, after which testing is performed. The validation clearly shows that the performance of the algorithmic method is improved by the proposed machine learning method.[...] Read more.
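For reference, the algorithmic baseline is the basic COCOMO model, and its constants can be learned from data rather than fixed. Below, a least-squares fit in log space serves as a minimal stand-in for the paper's neural network; the "organic-mode" constants a=2.4, b=1.05 are the published COCOMO values, while the function names and fitting choice are assumptions:

```python
import numpy as np

def cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO: effort (person-months) = a * KLOC**b, shown here
    with the published constants for an 'organic' project."""
    return a * kloc ** b

def fit_cocomo(kloc, effort):
    """Learn a and b from historical (size, effort) pairs. In log space
    the model is linear, log(E) = log(a) + b*log(KLOC), so ordinary
    least squares recovers both constants."""
    X = np.vstack([np.ones_like(kloc), np.log(kloc)]).T
    log_a, b = np.linalg.lstsq(X, np.log(effort), rcond=None)[0]
    return np.exp(log_a), b
```

Replacing the closed-form fit with a trained neural network keeps the same idea, mapping project attributes to effort, while allowing nonlinear patterns beyond the fixed power law.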
Ensuring security in web-based applications is one of the key issues nowadays. The processes of designing and building a web site have changed. As online transactions increase, an increase in the type and number of attacks on the security of online payment systems has been observed. Commonly used web development methodologies do not ensure security as an umbrella activity. Moreover, appropriate threat modeling is not being conducted against web security objectives. What is needed is a comprehensive, simple-to-use web development methodology that addresses security throughout the WDLC for web-based solutions.[...] Read more.
In this article, representativity between two multidimensional acoustical spaces of vowels has been formulated based on the geometric mean of the correlation of average directional vectors, the variance-covariance matrices, and the Mahalanobis distance. Generally, the multidimensional spaces formed by different combinations of acoustical features of vowels are considered the vowel perceptual spaces. Therefore, ten Bangla vowel sounds (/অ/ [/a/], /আ/ [/ã/], /ই/ [/i/], /ঈ/ [/ĩ/], /উ/ [/u/], /ঊ/ [/ũ/], /এ/ [/e/], /ঐ/ [/ai/], /ও/ [/o/] and /ঔ/ [/au/]) are collected from each native Bengali speaker to build that speaker's perceptual space using the acoustical features of the vowels. In this way, a total of nine perceptual spaces are constructed from nine speakers and utilized to evaluate representativity. Using the proposed method, the representativities of the differently constructed perceptual spaces have been evaluated and compared numerically. Furthermore, the dominating and representative acoustical features are also identified from the principal components of the perceptual spaces.[...] Read more.
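The Mahalanobis-distance component of the representativity measure can be sketched generically as follows; the feature dimensionality, the choice of covariance matrix, and the function name are assumptions for illustration, not details from the article:

```python
import numpy as np

def mahalanobis(mu_a, mu_b, cov):
    """Mahalanobis distance between the mean feature vectors of two
    perceptual spaces, under a covariance matrix cov describing the
    spread of the acoustical features."""
    d = mu_a - mu_b
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```

Unlike Euclidean distance, this measure rescales each acoustical feature by its variance, so two vowel spaces that differ only along a highly variable feature are judged closer than ones differing along a tightly clustered feature.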
This paper mainly focuses on the development of a quantitative algorithm for comparing social networks. Firstly, comparison of social networks can be done on different parameters at all three levels: network-, group-, and node-level characteristics. Secondly, to obtain more accurate results, the paper assigns weights to these parameters according to their importance. To address both points, the proposed algorithm takes advantage of the Ordered Weighted Averaging (OWA) operator. The algorithm outputs one quantitative value for each social network on which the comparison is to be made. The paper also employs the Gephi tool to accomplish the quantitative and graphical comparison between the social networks. The analysis has been done on multiple, varied social network data sets, with the aim of determining which among them is better in terms of connectivity and coherency. To keep complexity low while maintaining high accuracy, the paper takes into account six vital metrics of social networks: average degree, network diameter, graph density, modularity, clustering coefficient, and average path length. The proposed SNA approach is advantageous for finding the potential group suited to a particular task in areas such as the identification of criminal activities, as well as fields like economics, cyber security, and medicine.[...] Read more.
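The OWA aggregation step can be sketched as follows: values are sorted in descending order and combined with position-based weights, so the weights express how much the largest, second-largest, etc. metric should count. This is the textbook OWA operator; the example metric values and weights below are illustrative, not taken from the paper:

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: sort the (normalized) metric values in
    descending order, then take the weighted sum with position weights."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0), "OWA weights must sum to 1"
    return float(v @ w)
```

Feeding each network's six normalized metrics through `owa` yields the single quantitative value per network on which the comparison is made.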