Emmanuel Ntaye

Work place: School of Computer Science and Communication Engineering, Jiangsu University, 301 Xuefu Road, Zhenjiang, 212013, China

E-mail: entaye@ujs.edu.cn

ORCID: https://orcid.org/0000-0001-6035-3834

Research Interests:

Biography

Emmanuel Ntaye is currently pursuing the Ph.D. degree in Computer Science and Technology at Jiangsu University, China. He holds an M.Phil. in Computer Science from Kwame Nkrumah University of Science and Technology, Ghana, and a B.Sc. in Computer Science from the University for Development Studies, Ghana. He also holds a B.Ed. in Mathematics from the University of Education, Winneba, and a Diploma in Teacher Education from the University of Cape Coast. His research focuses on machine learning, multi-label classification, and low-rank subspace learning. He has authored several publications in reputable journals, including multiple first-author papers in Applied Intelligence. Ntaye is a member of the Association for Computing Machinery (ACM) and the Internet Society of Ghana.

Author Articles
Robust Low-Rank Subspace Learning for Multi-Label Feature Selection with Global-Local Correlation Modeling

By Emmanuel Ntaye, Xiang-Jun Shen, Andrew Azaabanye Bayor, Fadilul-lah Yassaanah Issahaku

DOI: https://doi.org/10.5815/ijem.2026.01.01, Pub. Date: 8 Feb. 2026

Multi-label classification faces significant challenges from high-dimensional features and complex label dependencies. Traditional feature selection methods often fail to capture these dependencies effectively or suffer from high computational costs. This paper proposes a novel Robust Low-Rank Subspace Learning (RLRSL) framework for multi-label feature selection. Our method integrates global label correlations and local feature structures within a unified objective function, utilizing the Schatten-p norm for low-rank subspace learning, the l_{2,1}-norm for joint feature sparsity, and manifold regularization for local geometry preservation. We develop an efficient optimization algorithm to solve the resulting non-convex problem. Comprehensive experiments on seven benchmark datasets demonstrate that RLRSL consistently outperforms state-of-the-art methods across multiple evaluation metrics, including ranking loss, multi-label accuracy, and F1-score, with both ML-*k*NN and SVM classifiers. The results confirm the robustness, efficiency, and superior generalization capability of the proposed approach.
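The two regularizers named in the abstract can be illustrated numerically. Below is a minimal NumPy sketch, not the authors' implementation, of the l_{2,1}-norm commonly used to induce row sparsity (joint feature selection) and the Schatten-p quasi-norm used as a low-rank surrogate; the function names and the (sum s_i^p)^(1/p) convention are assumptions for illustration only.

```python
import numpy as np

def l21_norm(W):
    # Sum of the l2 norms of the rows of W.
    # Penalizing this drives entire rows of W toward zero,
    # which corresponds to discarding whole features jointly.
    return float(np.sum(np.linalg.norm(W, axis=1)))

def schatten_p_norm(X, p=0.5):
    # (sum_i sigma_i^p)^(1/p) over the singular values of X.
    # For p = 1 this is the nuclear norm; for 0 < p < 1 it is a
    # quasi-norm that approximates rank more tightly. Some papers
    # instead use the p-th power, sum_i sigma_i^p, as the penalty.
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((6, 3))  # e.g. a 6-feature, 3-label projection
    print("l2,1 norm:", l21_norm(W))
    print("Schatten-0.5:", schatten_p_norm(W, p=0.5))
```

In a framework of this kind, both terms would appear in the objective alongside a data-fitting loss and a manifold (graph Laplacian) regularizer, with the l_{2,1} term on the feature-projection matrix and the Schatten-p term on the learned subspace.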

Other Articles