IJEME Vol. 8, No. 4, Jul. 2018
Customer service is an important factor in the success of a system or a service. For services with a relatively large customer base, the efficiency with which complaints are attended to becomes an issue. The Computer Centre of the Obafemi Awolowo University attends to students with various complaints, mainly in relation to their e-portal accounts. Although efforts are in place to manage the crowd, there is still a pressing need for a complaint management service that saves time and energy. The need for a system that can handle the enormous volume of requests and complaints from the institution's undergraduate students is the thesis of this work. Design and implementation were carried out using the range of tools provided by the Microsoft Bot Framework. The C# programming language was used to implement the decision algorithm, online web services were used to handle natural language understanding, and the Bot Connector was used to implement the Web Canvas. The Microsoft Azure service was used to host the web application, after which evaluations were drawn through surveys. This study thus projects an easier flow of operations for the logging of complaints by students.
Due to recent advancements in technology and the sharp decline in storage costs, huge volumes of documents are being stored in repositories for future reference. At the same time, retrieving the documents a user is interested in from these enormous collections is both time-consuming and costly. Document search can be made more efficient and effective if documents are clustered on the basis of their content. This article presents a comprehensive discussion of the various clustering algorithms used in text mining, along with their merits, demerits and comparisons. The author also examines the key challenges faced by clustering algorithms when used for effective document clustering.
The article presents issues concerning the establishment of a National Terminological Information System as one of the development perspectives of national terminology. The current state of, and problems in, the terminology sphere in Azerbaijan are explored, and the necessity of establishing the system is justified. Ongoing research in the field of computational terminology within the system is described, and several conceptual approaches are presented. International practice and standards in this direction are investigated and analyzed. The conceptual foundations and architectural-technological principles of the National Terminological Information System are developed, and the primary functions of the web portal developed within the system framework are presented. As a result of the study on the system's establishment, the future prospects of the National Terminological Information System have been identified. The article also outlines the outcomes expected from the implementation of the system.
Domain-specific retrieval systems developed for a homogeneous group of users can potentially optimise the recommendation of relevant web documents in minimal time as compared to generic systems built for a heterogeneous group of users. Domain-specific retrieval systems are normally developed by learning from users' past interactions, as a group or as individuals, with an information system. This paper focuses on the recommendation of relevant web documents to a cohort of users based on their search behaviour. Simulated task situations were used to group users of the same domain. The motivation behind this work is to help a cohort of users find relevant documents that will satisfy their information needs effectively. An aggregated implicit predictive model, derived by correlating implicit and explicit feedback parameters, was integrated with the traditional term frequency/inverse document frequency (tf-idf) algorithm to improve the relevancy of retrieval results. The aggregated model system was evaluated in terms of recall and precision (Mean Average Precision) by comparing it with a self-designed retrieval system and a generic system. The performance of the three systems was measured based on the relevant documents returned. The results showed that the aggregated domain-specific system performed better at returning relevant documents than the other two systems.
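As an illustration of the retrieval scheme described in this abstract, the following is a minimal single-machine sketch of classic tf-idf scoring blended with an implicit-feedback weight. The function names, the blending parameter `alpha`, and the feedback weight are hypothetical, assumed for illustration only; the paper's actual aggregated model is not specified here.

```python
import math
from collections import Counter

def tf_idf_scores(query_terms, documents):
    """Score each document against a query using classic tf-idf.

    tf  = raw count of the term in the document,
    idf = log(N / df), where df is the number of documents
    containing the term.
    """
    n = len(documents)
    tokenized = [doc.lower().split() for doc in documents]
    scores = []
    for tokens in tokenized:
        counts = Counter(tokens)
        score = 0.0
        for term in query_terms:
            df = sum(1 for t in tokenized if term in t)
            if df == 0:
                continue  # term appears nowhere; contributes nothing
            score += counts[term] * math.log(n / df)
        scores.append(score)
    return scores

def aggregate_with_feedback(tfidf_score, implicit_weight, alpha=0.5):
    """Hypothetical aggregation: blend the content-based tf-idf score
    with an implicit-feedback weight (e.g. derived from dwell time
    or clicks). `alpha` controls the balance between the two."""
    return alpha * tfidf_score + (1 - alpha) * implicit_weight
```

A document matching the query term would then outrank one that does not, and its final rank could be adjusted by the cohort's implicit feedback.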
Security is an important factor in life, ranging from the home network to all other aspects of our lives. Data security is one of the challenging research areas, as no data or network can be made 100% secure. Initially, data were secured with passwords; this was later extended to biometrics, yet attackers have also grown stronger at stealing data. This paper presents a critical review of data-security techniques, from passwords through the various biometric methods, and of how modern tools secure data using biometrics.
The 21st century has brought a tsunami of data generated by human civilization, delivering new words such as Big Data to our vocabulary. Digitization has overtaken almost all major sectors and plays a pivotal, dominant role as far as the virtual digital world is concerned. This in turn has landed us with the much-debated term "Big Data" in the present decade. Big Data has left traditional relational databases (RDBMS) handicapped by its sheer size and the speed of its creation. The hunger to manage and process this gigantic, complex, heterogeneous data has once again followed the age-old rule that "necessity is the mother of invention" and given rise to Hadoop MapReduce. The given work applies the K-Means clustering algorithm to a benchmark MRI dataset from the OASIS database, in order to cluster the data based upon visual similarity, using WEKA. WEKA coped up to a threshold data size, beyond which it reported an "out of memory" error. To address this problem, a Map/Reduce version of K-Means was implemented on top of Hadoop using R. The algorithm was evaluated using the speedup, scale-up and size-up parameters, and it performed consistently better as the size of the input data increased.
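The Map/Reduce formulation of K-Means mentioned in this abstract can be sketched as follows. This is an illustrative single-machine sketch of one map and one reduce step (the paper's implementation runs on Hadoop using R); all function names here are hypothetical.

```python
import math
from collections import defaultdict

def kmeans_map(point, centroids):
    """Map step: emit (index of nearest centroid, point)."""
    best = min(range(len(centroids)),
               key=lambda i: math.dist(point, centroids[i]))
    return best, point

def kmeans_reduce(assignments):
    """Reduce step: average the points assigned to each centroid
    to produce the new centroid positions."""
    groups = defaultdict(list)
    for idx, point in assignments:
        groups[idx].append(point)
    new_centroids = {}
    for idx, pts in groups.items():
        dim = len(pts[0])
        new_centroids[idx] = tuple(sum(p[d] for p in pts) / len(pts)
                                   for d in range(dim))
    return new_centroids

def kmeans_iteration(points, centroids):
    """One full iteration: map every point, then reduce."""
    assignments = [kmeans_map(p, centroids) for p in points]
    return kmeans_reduce(assignments)
```

On a cluster, the map calls run in parallel over data partitions and the framework shuffles the emitted pairs by centroid index before the reduce, which is what lets the algorithm scale past a single machine's memory.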