Remote Sensing Textual Image Classification based on Ensemble Learning




Zhiwei Ye 1,*, Juan Yang 1, Xu Zhang 1, Zhengbing Hu 2

1. School of Computer Science, Hubei University of Technology, Wuhan, China

2. School of Educational Information Technology, Central China Normal University, Wuhan, China

* Corresponding author.


Received: 9 Aug. 2016 / Revised: 12 Sep. 2016 / Accepted: 27 Oct. 2016 / Published: 8 Dec. 2016

Index Terms

Remote Sensing, Textual Image Classification, Ensemble Learning, Bagging


Abstract

Remote sensing textual image classification has long been a hot topic in the field of remote sensing, and texture is one of the most useful cues for image classification. In practice, terrain types are complex and multiple texture features must be extracted for classification; in addition, remote sensing images contain noise, so a single classifier can hardly obtain optimal classification results. Combining multiple classifiers exploits the complementary characteristics of the individual classifiers and improves classification accuracy to the greatest extent. In this paper, based on a diversity measurement of the base classifiers, the J48 classifier, IBk classifier, sequential minimal optimization (SMO) classifier, Naive Bayes classifier and multilayer perceptron (MLP) classifier are selected for ensemble learning. To evaluate the influence of the proposed method, it is compared with the five base classifiers in terms of average classification accuracy. Experiments on five UCI data sets and on remote sensing image data sets confirm the effectiveness of the proposed method.
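The ensemble described above can be sketched with scikit-learn analogues of the five base classifiers. This is only an illustrative sketch, not the authors' implementation: the paper's classifiers (J48, IBk, SMO, Naive Bayes, MLP) are WEKA algorithms, and the scikit-learn substitutes below (decision tree, k-NN, SVM, Gaussian Naive Bayes, MLP) are assumed rough equivalents; the iris data set merely stands in for the texture-feature vectors used in the paper.

```python
# Hypothetical sketch: a majority-vote ensemble of five heterogeneous
# classifiers, mirroring the base-classifier selection in the abstract.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder feature vectors; in the paper these would be texture
# features extracted from remote sensing images.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Hard (majority) voting over the five base classifiers.
ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),       # ~ J48
        ("knn", KNeighborsClassifier()),                        # ~ IBk
        ("svm", SVC(random_state=0)),                           # ~ SMO
        ("nb", GaussianNB()),                                   # ~ Naive Bayes
        ("mlp", MLPClassifier(max_iter=1000, random_state=0)),  # ~ MLP
    ],
    voting="hard",
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```

In the same spirit as the paper's comparison, each base classifier could also be fitted and scored individually to check whether the vote improves on the best single model.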

Cite This Paper

Ye Zhiwei, Yang Juan, Zhang Xu, Hu Zhengbing, "Remote Sensing Textual Image Classification based on Ensemble Learning", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol.8, No.12, pp.21-29, 2016. DOI: 10.5815/ijigsp.2016.12.03

