Tauseef Khan

Work place: School of Computer Science & Engineering, VIT-AP University, Amaravati, 522237, Andhra Pradesh, India

E-mail: tauseef.khan@vitap.ac.in

Research Interests: Pattern Recognition, Machine Learning, Computer Vision

Biography

Tauseef Khan received his Ph.D. in Computer Science and Engineering from Aliah University, Kolkata, in 2022. He completed his Master of Technology (Gold) in Information Technology from Maulana Abul Kalam Azad University of Technology (MAKAUT, formerly WBUT) in 2013. He is currently an Assistant Professor in the School of Computer Science and Engineering, VIT-AP University, Amaravati, Andhra Pradesh. His research interests include computer vision, digital image processing, machine learning, and pattern recognition. He has published several articles in reputed journals and conferences on scene text detection, image classification, and script identification. He is a member of IEEE and a life member of ISTE.

Author Articles
E-Chars74k: An Extended Scene Character Dataset with Augmentation Insights and Benchmarks

By Payel Sengupta, Tauseef Khan, Ayatullah Faruk Mollah

DOI: https://doi.org/10.5815/ijigsp.2025.06.08, Pub. Date: 8 Dec. 2025

Semantic understanding of camera-captured scene text images is an important problem in computer vision. Scene character recognition is the pivotal task in this problem, and deep learning is nowadays the most promising approach. However, the limited sample size of scene character datasets is a major hindrance to training deep networks. In this paper, we present (i) various augmentation techniques for increasing the sample size of such datasets, along with associated insights, (ii) an extended version of the popular Chars74k dataset (herein referred to as E-Chars74k), and (iii) benchmark performance on the developed E-Chars74k dataset. Experiments on various sets of data such as digits, alphabets, and their combination, belonging to both usual and wild scenarios, clearly reflect a significant performance gain (a 20%-30% increase in scene character recognition accuracy). It is noteworthy that in all these experiments, a deep convolutional neural network with two conv-pool pairs is trained with a uniform training-test partition to foster comparison on an equal footing.
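To illustrate the kind of augmentation the abstract describes, the following is a minimal sketch, assuming grayscale character images normalized to [0, 1]; the perturbations (small translations plus brightness/contrast jitter) and the function names are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def augment_character(img, rng):
    """Apply simple geometric and photometric perturbations to a
    character image (H x W, float in [0, 1]). Hypothetical sketch:
    a small random translation plus brightness/contrast jitter."""
    out = img.copy()

    # Random translation by up to 2 pixels in each direction.
    dy, dx = rng.integers(-2, 3, size=2)
    out = np.roll(out, (dy, dx), axis=(0, 1))

    # Brightness and contrast jitter, clipped back to [0, 1].
    contrast = rng.uniform(0.8, 1.2)
    brightness = rng.uniform(-0.1, 0.1)
    return np.clip(out * contrast + brightness, 0.0, 1.0)

def expand_dataset(images, factor, seed=0):
    """Grow a small dataset by generating `factor` perturbed copies
    of every image while keeping the originals."""
    rng = np.random.default_rng(seed)
    augmented = list(images)
    for img in images:
        for _ in range(factor):
            augmented.append(augment_character(img, rng))
    return augmented
```

With `factor=3`, a dataset of N images grows to 4N samples, which is the general idea behind extending Chars74k into a larger training corpus.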

When Handcrafted Features Meet Deep Features: An Empirical Study on Component-Level Image Classification

By Tauseef Khan, Ayatullah Faruk Mollah

DOI: https://doi.org/10.5815/ijigsp.2024.01.05, Pub. Date: 8 Feb. 2024

Scene text detection from natural images has been a prime focus for the last few decades. Classification of foreground object components is an essential task in many scene text detection approaches operating in uncontrolled environments. As it heavily relies upon robust and discriminating features, several features have been engineered for component-level text/non-text classification. The competency of such feature descriptors, particularly relative to deep features, needs to be examined. In this paper, we present prospective feature descriptors applicable to component-level text/non-text classification and examine their performance alongside convolutional neural network based deep features. A series of experiments has been carried out on publicly available benchmark datasets of multi-script document-type, scene-type, and combined text vs. non-text components. Interestingly, feature combination is found to put well-demonstrated deep features into tough competition on most datasets under consideration. For instance, on the combined text/non-text classification problem, CNN-based deep features yield 97.6% accuracy, whereas aggregated features produce 98.4%. Similar findings are obtained in the other experiments as well. Along with the quantitative figures, the results are analyzed and an insightful discussion is made to ascertain the conjectures drawn herein. This study may cater to the need of leveraging potentially strong handcrafted feature descriptors.
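The aggregation idea above, combining handcrafted descriptors with deep features before classification, can be sketched as follows. This is a toy illustration, not the paper's actual feature set: the histogram, aspect-ratio, and fill-ratio cues and the L2-normalized concatenation are assumptions chosen for brevity.

```python
import numpy as np

def handcrafted_descriptor(component, bins=16):
    """A toy handcrafted descriptor for a grayscale component image
    in [0, 1]: a normalized intensity histogram plus two shape cues.
    Illustrative only; the paper evaluates richer engineered features."""
    hist, _ = np.histogram(component, bins=bins, range=(0.0, 1.0))
    hist = hist / max(hist.sum(), 1)              # normalize to a distribution
    h, w = component.shape
    aspect_ratio = w / h                          # simple shape cue
    fill_ratio = float((component > 0.5).mean())  # foreground density
    return np.concatenate([hist, [aspect_ratio, fill_ratio]])

def aggregate_features(handcrafted, deep):
    """Concatenate handcrafted and deep feature vectors after
    per-vector L2 normalization, so neither dominates by scale."""
    def l2(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2(handcrafted), l2(deep)])
```

The aggregated vector can then be fed to any off-the-shelf classifier (e.g., an SVM), which is the usual setup when comparing engineered and learned features on an equal footing.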

Other Articles