IJITCS Vol. 9, No. 8, Aug. 2017
Cover page and Table of Contents: PDF (size: 256KB)
Information systems need to be more flexible and to allow users to find content related to their context and interests. Metadata harvesting and metadata enrichment could represent a way to help users find content and events according to their interests. However, metadata are underused and represent an interoperability challenge. This paper presents a new framework, called SMESE, and the implementation of its prototypes, comprising a semantic metadata model, a mapping ontology model and a user interest affinity model. The proposed framework makes these models interoperable with existing metadata models.
SMESE also proposes a decision support process for activating and deactivating software features related to metadata. To take context variability into account when modeling context-aware properties, SMESE uses an autonomous process that exploits context information to adapt software behavior through an enhanced metadata framework. When the user chooses preferences in terms of system behavior, the semantic weight of each feature is computed; this weight quantifies the importance of the feature to the user according to their interests.
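The semantic-weight idea described above could be sketched as follows. The topic names, affinity values and the averaging formula are purely illustrative assumptions, not SMESE's published definition:

```python
# Hypothetical illustration of semantic feature weighting: each feature is
# tagged with metadata topics, each user has an interest affinity per topic,
# and a feature's semantic weight is the average affinity over its tags.
# All names and numbers below are assumptions for illustration only.
user_affinity = {"music": 0.9, "cinema": 0.4, "sports": 0.1}

features = {
    "concert_alerts": ["music"],
    "movie_reviews":  ["cinema", "music"],
    "score_tracker":  ["sports"],
}

def semantic_weight(tags, affinity):
    # Average affinity over the feature's metadata tags (0 if untagged).
    return sum(affinity.get(t, 0.0) for t in tags) / len(tags) if tags else 0.0

weights = {f: semantic_weight(tags, user_affinity) for f, tags in features.items()}

# Features below a threshold could be deactivated for this user.
active = {f for f, w in weights.items() if w >= 0.5}
```

Under these assumed affinities, only the music- and cinema-related features would remain active for this user.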
This paper also proposes a semantic metadata analysis ecosystem to support data harvesting according to a metadata model and a mapping ontology model. Data harvesting is coupled with internal and external enrichments. The initial SMESE prototype represents more than 400 million relationships (triples). Finally, the paper presents the design and implementation of several SMESE prototypes applied to digital ecosystems.
Long Term Evolution (LTE) is the latest 3GPP (3rd Generation Partnership Project) standard for mobile communication systems. LTE was proposed to achieve higher data throughput, lower latency and better quality of service (QoS). In an LTE network, resource sharing (where a resource refers to frequency and time on the air interface) is one of the major challenges and a key function in achieving the desired QoS for each configured data stream. In this QoS context, Multiplexing and Logical Channel Prioritization (LCP) are the relevant mechanisms. This paper presents a brief survey of LCP techniques for the uplink (UL) direction, i.e., mobile-to-network communication. A strategy is proposed for LCP to achieve QoS under a bursty resource allocation environment. The proposed approach considers the priority of each logical channel configured by the evolved NodeB (eNB) during radio bearer (RB) setup/re-configuration.[...]
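The standard uplink LCP idea surveyed here (per 3GPP TS 36.321) can be sketched in two rounds: first serve every logical channel up to its prioritised bit rate (PBR) in decreasing priority order, then hand any remaining grant to channels strictly in priority order. The channel numbers, PBRs and buffer sizes below are illustrative, not from the paper:

```python
# Hedged sketch of two-round LCP grant allocation. A lower "priority" value
# means a higher-priority logical channel; sizes are in bytes.
channels = [
    {"id": 1, "priority": 1, "pbr": 100, "buffer": 300},
    {"id": 2, "priority": 2, "pbr": 200, "buffer": 150},
    {"id": 3, "priority": 3, "pbr": 100, "buffer": 400},
]

def lcp_allocate(channels, grant):
    alloc = {c["id"]: 0 for c in channels}
    ordered = sorted(channels, key=lambda c: c["priority"])
    # Round 1: satisfy each channel's PBR in priority order.
    for c in ordered:
        take = min(c["pbr"], c["buffer"], grant)
        alloc[c["id"]] += take
        grant -= take
    # Round 2: leftover grant goes to channels strictly in priority order.
    for c in ordered:
        take = min(c["buffer"] - alloc[c["id"]], grant)
        alloc[c["id"]] += take
        grant -= take
    return alloc

allocation = lcp_allocate(channels, grant=500)
```

With a 500-byte grant, channel 1 receives its PBR plus all the leftover bytes, while the lower-priority channels receive only what round 1 guarantees them.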
In this paper, the author provides a framework for a Multilevel Expert System to advise scholars on their future careers. The proposed framework aims at providing information to help academics decide on career paths. The emerging fields of expert systems, education, and data mining are rapidly providing new possibilities for collecting and analyzing data and guiding scholars in their careers. Many scholars struggle to make the right career decision, and only a few choose correctly; a poor career decision can derail a scholar's entire life. Nowadays, selecting the right career has become very difficult for scholars. Among the works reported in this field, we concentrate only on expert systems that address the scholar's career-selection problem through data mining techniques.[...]
Nowadays, web service privacy receives high attention, especially in the finance and medical fields. Privacy preserves access rights to personally identifiable information. Different models have been proposed for enforcing privacy in web service environments, but establishing a privacy level for protecting data transferred between consumer and provider remains a problem. Negotiation helps participants reach a privacy level. This paper extends a web service security negotiation framework in a multilateral web service environment to negotiate privacy. A repaired genetic negotiation framework is used to conduct the privacy negotiation. The negotiation communication structure uses a broker, where each participant sends its attributes to the broker; this structure decreases the number of messages transferred and hence the execution time. The genetic-based negotiation is compared to traditional time-based negotiation, and experimental results show that it outperforms the time-based approach.[...]
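A genetic negotiation of this kind could be sketched as follows. This is not the paper's "repaired" framework; the five privacy attributes, the utility functions and the GA parameters are all assumptions. The broker evolves candidate privacy-level offers and scores each by the combined utility of consumer and provider:

```python
import random

# Illustrative genetic-algorithm negotiation sketch. Each offer is a vector
# of 5 privacy-attribute levels in [0, 1]; fitness is the joint utility of
# both participants (hypothetical preference vectors below).
random.seed(0)
N_ATTRS = 5
consumer_pref = [0.9, 0.2, 0.8, 0.5, 0.7]
provider_pref = [0.3, 0.8, 0.4, 0.6, 0.2]

def fitness(offer):
    # Joint utility: how closely the offer matches both preference vectors.
    return (sum(1 - abs(o - c) for o, c in zip(offer, consumer_pref))
            + sum(1 - abs(o - p) for o, p in zip(offer, provider_pref)))

def evolve(pop_size=30, generations=50):
    pop = [[random.random() for _ in range(N_ATTRS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_ATTRS)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_ATTRS)           # small mutation, clamped
            child[i] = min(1.0, max(0.0, child[i] + random.uniform(-0.1, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_offer = evolve()
```

The best offer converges toward attribute values lying between each pair of consumer and provider preferences, i.e. a compromise privacy level.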
Recent worldwide medical research is focusing on new intelligent approaches for diagnosing various infections. The sporadic occurrence of malaria in humans has driven the need to develop computational approaches for its diagnosis. Most existing conventional malaria classification models examine the dynamics of asymptomatic and morphological characteristics of the malaria parasite in thick blood smears; this study instead examines the symptomatic characteristics of the malaria parasite combined with the effects of climatic factors, which are a great determinant of malaria severity. Predicting the occurrence of malaria and its outbreaks helps individuals, the World Health Organization and government agencies take appropriate action, reducing the disease's devastating impact. This paper proposes a Feed-Forward Back-Propagation (FF_BP) neural network model to determine the rate of malaria transmission. Monthly averages of climatic factors (rainfall, temperature and relative humidity) together with monthly malaria incidence were used as input variables. An optimum threshold value of 0.7100 was achieved, with a classification accuracy of 87.56%, sensitivity of 96.67%, specificity of 76.67% and a mean square error of 0.100. The model's malaria threat detection rate was 87.56%, its positive predictive value 89.23%, its negative predictive value 92.00% and its standard deviation 2.533. A statistical analysis of the FF_BP neural network model was conducted and its results were compared with other existing models to check its robustness and viability.[...]
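A feed-forward back-propagation classifier over climatic inputs can be sketched as below. The training data are synthetic stand-ins for the study's monthly rainfall, temperature and humidity averages, and the network size and learning rate are assumptions:

```python
import numpy as np

# Hedged FF-BP sketch: 3 climatic inputs -> 4 hidden sigmoid units -> 1 output
# probability of high malaria incidence. Data and labels are synthetic.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.random((200, 3))  # columns: rainfall, temperature, humidity (scaled)
# Synthetic rule: wetter/warmer/more humid months labelled high incidence.
y = (X @ np.array([0.5, 0.3, 0.2]) > 0.5).astype(float).reshape(-1, 1)

W1 = rng.normal(0, 1, (3, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)            # forward pass
    p = sigmoid(h @ W2 + b2)
    d2 = p - y                          # backward pass (cross-entropy grad)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(0)
    W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(0)

# The study tuned an optimum decision threshold of 0.71; a plain 0.5 cut-off
# is used here since the data are synthetic.
pred = (p > 0.5).astype(float)
accuracy = (pred == y).mean()
```

On this separable toy data the network fits the labelling rule closely; the study's reported 87.56% accuracy applies to its real climatic and incidence data, not to this sketch.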
A recommender system suggests options that may be of use or interest to its users. These days recommender systems are used widely, especially on systems connected to the World Wide Web, whether a mobile app, a desktop application, or a website. Most advertisements on these systems target a specific group, and recommender systems provide a solution where recommendations must be targeted based on a user profile. Almost all commercial, collaborative and even social networking websites rely on recommender systems. In this paper, we focus specifically on GitHub, a source code hosting site and one of the most popular platforms for online collaborative coding and sharing. GitHub offers researchers an opportunity to perform analysis by providing REST-based APIs for downloading its data. GitHub hosts a vast number of user repositories, so it is quite difficult for a GitHub user to decide which repository to contribute to. This paper therefore reviews different approaches for building a recommender system for GitHub that provides personalized suggestions to users about which repositories they should contribute to. We discuss the collaborative filtering, content-based filtering, hybrid filtering, knowledge-based and utility-based approaches to recommender systems.[...]
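Of the approaches listed, user-based collaborative filtering can be sketched as follows. The user-repository matrix is a made-up toy example, not real GitHub data fetched via its APIs:

```python
import numpy as np

# Hypothetical user-based collaborative filtering for repository
# recommendation. Rows are users, columns are repositories; a 1 means the
# user has contributed to that repository.
interactions = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 1, 0, 0],   # user 1
    [0, 0, 1, 1, 0],   # user 2
    [1, 0, 0, 0, 1],   # user 3
])

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

def recommend(user, k=2):
    """Score unseen repositories by similarity-weighted votes of other users."""
    sims = np.array([cosine_sim(interactions[user], interactions[v])
                     if v != user else 0.0
                     for v in range(len(interactions))])
    scores = sims @ interactions                # similarity-weighted popularity
    scores = scores.astype(float)
    scores[interactions[user] == 1] = -np.inf   # drop already-seen repositories
    return [int(i) for i in np.argsort(scores)[::-1][:k]]

top = recommend(0, k=1)
```

Here user 0 is recommended repository 2, because the most similar users (1 and 3) collectively favour it over repository 3.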
Data mining is the procedure of extracting a pertinent volume of data or information and making it available for understanding and processing. Data analysis is a common method across areas such as computer science, biology, the telecommunication industry and the retail industry. Data mining encompasses various algorithms, e.g. association rule mining, classification algorithms and clustering algorithms. This survey concentrates on clustering algorithms and their comparison using the WEKA tool. Clustering is the splitting of a large dataset into clusters or groups following two criteria: high intra-class similarity and low inter-class similarity. Every cluster must contain at least one data item, and every data item must belong to one cluster. Clustering is an unsupervised technique that is well suited to large datasets with many attributes; it is a data modelling technique that gives a concise view of the data. This survey explains these clustering algorithms and analyzes their variants using the WEKA tool on various datasets.[...]
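The two clustering criteria above (high intra-class similarity, low inter-class similarity) can be made concrete with a minimal k-means sketch. WEKA's SimpleKMeans follows the same assign-then-update loop; the toy 2-D data here are illustrative:

```python
import numpy as np

# Minimal k-means: alternate between assigning points to their nearest
# centroid and moving each centroid to the mean of its assigned points.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (20, 2)),    # tight group near (0, 0)
                  rng.normal(3.0, 0.1, (20, 2))])   # tight group near (3, 3)

def kmeans(X, k, iters=20):
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        # Move each centroid to the mean of its points (keep it if empty).
        centroids = np.array([X[labels == j].mean(0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(data, k=2)
# Intra-class: mean distance of points to their own centroid (should be small).
intra = np.mean([np.linalg.norm(data[i] - centroids[labels[i]])
                 for i in range(len(data))])
# Inter-class: distance between the two centroids (should be large).
inter = np.linalg.norm(centroids[0] - centroids[1])
```

A good clustering yields a small `intra` value and a large `inter` value, which is exactly the pair of criteria the survey uses to compare algorithms.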
The alignment between information systems (IS) strategy and business strategy, an important driver of competitiveness, has predominantly been assessed through quantitative methods without exploring in detail the factors that influence strategic alignment and their implications for perceived business performance. Most IS-business strategic alignment studies focus on entire business and IS strategies, yielding findings that are too general and inconclusive. This study assesses strategic alignment factors, their interrelationships, and how they influence IS-business alignment and its consequent effect on the performance of six universal banks in Ghana. The study followed the systematic procedure of grounded theory design and adopted a qualitative-dominant crossover mixed analysis. The findings indicate that strategic alignment between information systems strategy and technological innovation impacts positively on firm performance where IT utilization complements resources and capabilities.[...]