Work place: Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
Research Interests: Natural Language Processing, Natural Language Generation, Evolutionary Computation
Imtiaz Hussain Khan received his Master's degree in Computer Science from the University of Essex, UK, in 2005, and his PhD in Artificial Intelligence from the University of Aberdeen, UK, in 2010. In September 2010, he joined the Department of Computer Science at King Abdulaziz University, Jeddah, Kingdom of Saudi Arabia, as an Assistant Professor.
His research areas are Natural Language Processing (NLP), particularly Natural Language Generation (NLG), and Evolutionary Computation. He has published a number of articles in reputable journals and conferences, including Topics in Cognitive Science, the Association for Computational Linguistics (ACL), and COLING. He is currently also working as a co-investigator on a project funded by King Abdulaziz City for Science and Technology (KACST): Building a Plagiarism Detection System for Arabic.
DOI: https://doi.org/10.5815/ijmecs.2019.11.01, Pub. Date: 8 Nov. 2019
The assessment and evaluation of Program Educational Objectives (PEOs) and Student Outcomes (SOs) is a challenging task. In this paper, we present a unified framework, developed over a period of more than eight years, for the systematic assessment and evaluation of PEOs and SOs. The proposed framework is based on a balanced sampling approach that thoroughly covers PEO/SO assessment and evaluation while minimizing human effort. The framework is general; to demonstrate its effectiveness, we present a case study in which it was successfully adopted by the undergraduate computer science program in the Department of Computer Science at King Abdulaziz University, Jeddah. The robustness of the proposed framework was confirmed by an independent evaluation by ABET, which awarded the program a full six-year accreditation without any comments or concerns. The most significant value of the proposed framework is its balanced sampling mechanism for the assessment and evaluation of PEOs/SOs, which can be adopted by any program seeking ABET accreditation.
DOI: https://doi.org/10.5815/ijieeb.2019.06.03, Pub. Date: 8 Nov. 2019
Sentiment analysis is an application of artificial intelligence that determines the sentiment associated with a piece of text. It provides an easy way for a brand or company to gather customers' opinions about its products through user-generated content such as social media posts. Training a machine learning model for sentiment analysis requires resources such as labeled corpora and sentiment lexicons. While such resources are readily available for English, they are hard to find for other languages such as Arabic. The aim of this research is to build an Arabic sentiment lexicon using a corpus-based approach. Sentiment scores were propagated from a small, manually labeled seed list to other terms in a term co-occurrence graph. To achieve this, we proposed a graph propagation algorithm and compared different similarity measures. The lexicon was evaluated using a manually annotated list of terms. The use of similarity measures relies on the assumption that words appearing in the same context tend to have similar polarity. The main contribution of this work is the empirical evaluation of different similarity measures for assigning the best sentiment scores to terms in the co-occurrence graph.
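The propagation idea described above can be sketched as follows. This is a minimal illustration, not the authors' actual algorithm: the graph structure, seed scores, damping factor, and iteration count are all assumptions for the example; edge weights stand in for the similarity measures the paper compares.

```python
# Hypothetical sketch: propagate sentiment scores from a small seed list
# over a term co-occurrence graph via iterative weighted averaging.
def propagate_sentiment(graph, seeds, iterations=10, damping=0.85):
    """graph: {term: {neighbor: similarity_weight}}
    seeds: {term: sentiment score in [-1, 1]}, held fixed."""
    scores = {term: seeds.get(term, 0.0) for term in graph}
    for _ in range(iterations):
        new_scores = {}
        for term, neighbors in graph.items():
            if term in seeds:                 # seed scores never change
                new_scores[term] = seeds[term]
                continue
            total = sum(neighbors.values())
            if total == 0:
                new_scores[term] = scores[term]
                continue
            # damped, similarity-weighted average of neighbor scores
            new_scores[term] = damping * sum(
                w * scores[n] for n, w in neighbors.items()
            ) / total
        scores = new_scores
    return scores

# Toy co-occurrence graph with illustrative similarity weights.
graph = {
    "good":  {"great": 0.9, "film": 0.3},
    "great": {"good": 0.9, "film": 0.2},
    "bad":   {"film": 0.3},
    "film":  {"good": 0.3, "great": 0.2, "bad": 0.3},
}
seeds = {"good": 1.0, "bad": -1.0}
scores = propagate_sentiment(graph, seeds)
```

After propagation, unlabeled terms that co-occur mostly with positive seeds (here, "great") end up with positive scores, mirroring the assumption that words sharing contexts share polarity.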
DOI: https://doi.org/10.5815/ijieeb.2017.04.03, Pub. Date: 8 Jul. 2017
This article describes the IReadWeb system, which is based on the existing WebAnywhere technology. The existing WebAnywhere system uses depth-first search (DFS) to traverse the Document Object Model (DOM) during Web surfing. DFS performs an exhaustive search, crawling through an entire page until it identifies the target node, which greatly increases the response time for users. We developed a user-experience-based algorithm which, unlike DFS, exploits pre-fetched information stored in a local cache to speed up browsing. The performance of IReadWeb was thoroughly evaluated and compared against WebAnywhere using a sizeable sample of blind native Arabic speakers. The experimental results show that IReadWeb outperformed WebAnywhere in response time.
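The contrast between exhaustive DFS and a cache-backed lookup can be illustrated with a small sketch. The node structure, cache policy, and visit counting below are assumptions for illustration only, not the IReadWeb implementation.

```python
# Illustrative contrast: exhaustive DFS over a DOM-like tree versus a
# lookup that consults a pre-fetched cache first.
class Node:
    def __init__(self, node_id, children=None):
        self.id = node_id
        self.children = children or []

def dfs_find(root, target_id):
    """Depth-first search: may visit every node before finding the target."""
    stack = [root]
    visited = 0
    while stack:
        node = stack.pop()
        visited += 1
        if node.id == target_id:
            return node, visited
        stack.extend(reversed(node.children))   # preserve left-to-right order
    return None, visited

def cached_find(root, target_id, cache):
    """Check the cache first; fall back to DFS on a miss and memoize."""
    if target_id in cache:
        return cache[target_id], 0              # no traversal needed
    node, visited = dfs_find(root, target_id)
    if node is not None:
        cache[target_id] = node
    return node, visited

leaf = Node("target")
root = Node("html", [Node("head"), Node("body", [Node("div"), leaf])])
cache = {}
_, cost_first = cached_find(root, "target", cache)   # cold: full DFS
_, cost_second = cached_find(root, "target", cache)  # warm: cache hit
```

The cold lookup visits every node on the way to the target, while the warm lookup answers from the cache without touching the tree, which is the kind of saving the abstract describes.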
DOI: https://doi.org/10.5815/ijisa.2015.11.03, Pub. Date: 8 Oct. 2015
Many crossover operators have been proposed in the literature on evolutionary algorithms; however, it is still unclear which crossover operator works best for a given optimization problem. In this study, eight different crossover operators specially designed for the travelling salesman problem, namely Two-Point Crossover, Partially Mapped Crossover, Cycle Crossover, Shuffle Crossover, Edge Recombination Crossover, Uniform Order-based Crossover, Sub-tour Exchange Crossover, and Sequential Constructive Crossover, are evaluated empirically. The selected crossover operators were implemented in an experimental setup upon which simulations were run. Four benchmark instances of the travelling salesman problem, two symmetric (ST70 and TSP225) and two asymmetric (FTV100 and FTV170), were used to thoroughly assess the selected crossover operators. The performance of these operators was analyzed in terms of solution quality and computational cost. It was found that Sequential Constructive Crossover outperformed the other operators in attaining good-quality solutions, whereas Two-Point Crossover outperformed the other operators in terms of computational cost. It was also observed that the performance of the different crossover operators is much better for a relatively small number of cities, both in terms of solution quality and computational cost; however, for a relatively large number of cities their performance degrades greatly.
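As a concrete example of one of the listed operators, here is a minimal sketch of Partially Mapped Crossover (PMX). The tours and cut points are invented for illustration; this is the textbook form of the operator, not the study's implementation.

```python
# Partially Mapped Crossover (PMX): produces a valid tour (permutation)
# by copying a segment from one parent and repairing duplicates via the
# mapping that the segment defines.
def pmx(parent1, parent2, cut1, cut2):
    size = len(parent1)
    child = [None] * size
    child[cut1:cut2] = parent1[cut1:cut2]       # copy segment from parent1
    segment = set(parent1[cut1:cut2])
    # Mapping between genes at the same positions inside the segment.
    mapping = {parent1[i]: parent2[i] for i in range(cut1, cut2)}
    for i in list(range(cut1)) + list(range(cut2, size)):
        gene = parent2[i]
        while gene in segment:                  # resolve conflicts
            gene = mapping[gene]
        child[i] = gene
    return child

# Illustrative parent tours over 8 cities, cut points 3 and 6.
child = pmx([1, 2, 3, 4, 5, 6, 7, 8],
            [3, 7, 5, 1, 6, 8, 2, 4], 3, 6)
```

Unlike a naive two-point swap, PMX guarantees the offspring is a permutation, so every child is a feasible tour, which is why permutation-preserving operators are needed for the travelling salesman problem.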
DOI: https://doi.org/10.5815/ijisa.2015.01.07, Pub. Date: 8 Dec. 2014
This article describes ongoing research that aims to develop a plagiarism detection system for Arabic documents. We developed different heuristics to generate effective queries for document retrieval from the Web. The performance of these heuristics was empirically evaluated against a sizeable corpus in terms of precision, recall, and F-measure. We found that a systematic combination of different heuristics greatly improves the performance of the document retrieval system.
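For readers unfamiliar with the retrieval metrics named above, a minimal computation looks like this; the document sets are invented for the example.

```python
# Precision, recall, and F-measure for a document retrieval result.
def precision_recall_f1(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)            # correctly retrieved docs
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)       # harmonic mean
    return precision, recall, f1

# 4 documents retrieved, 3 actually relevant, 2 of them found.
p, r, f = precision_recall_f1({"d1", "d2", "d3", "d4"}, {"d1", "d2", "d5"})
```

Here precision is 2/4 and recall is 2/3; the F-measure balances the two, penalizing a heuristic that retrieves many documents but few relevant ones.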
DOI: https://doi.org/10.5815/ijisa.2013.08.04, Pub. Date: 8 Jul. 2013
Most existing algorithms for the Generation of Referring Expressions (GRE) tend to produce distinguishing descriptions at the semantic level, disregarding the ways in which surface issues (e.g. linguistic ambiguity) can affect their quality. In this article, we highlight limitations in an existing GRE algorithm that takes lexical ambiguity into account, and put forward ideas to address those limitations. The proposed ideas are implemented in a GRE algorithm. We show that the revised algorithm successfully generates optimal referring expressions without greatly increasing the computational complexity of the original algorithm.