2015
DOI: 10.1016/j.image.2015.09.004
Multi-channel descriptors and ensemble of Extreme Learning Machines for classification of remote sensing images

Cited by 11 publications (7 citation statements) · References 32 publications
“…The reasons are that images of beach and forest present rich texture and structure information; within-class similarity is high for the beach and forest categories but relatively low for the storage tank category; and some images of storage tanks resemble those of other classes, such as buildings.

Method                                               Accuracy (mean ± std)
BOVW [28]                                            76.8
SPM [28]                                             75.3
BOVW + spatial co-occurrence kernel [28]             77.7
Color Gabor [28]                                     80.5
Color histogram (HLS) [28]                           81.2
Structural texture similarity [7]                    86.0
Unsupervised feature learning [33]                   81.7 ± 1.2
Saliency-guided unsupervised feature learning [34]   82.7 ± 1.2
Concentric circle-structured multiscale BOVW [5]     86.6 ± 0.8
Multifeature concatenation [35]                      89.5 ± 0.8
Pyramid-of-Spatial-Relatons (PSR) [36]               89.1
MCBGP + E-ELM [37]                                   86.52 ± 1.3
ConvNet with specific spatial features [38]          89.39 ± 1.10
Gradient boosting random convolutional network [39]  94.53
GoogLeNet [40]                                       92.80 ± 0.61
OverFeat ConvNets [40]                               90.91 ± 1.19
MS-CLBP [17]                                         90.6 ± 1.

When compared with CNNs, it can be seen that the classification accuracy of CNNs is close to that of our method.…”
Section: Methodsmentioning
confidence: 99%
“…Ensembles of ELMs are introduced to overcome known drawbacks of the ELM caused by the randomness of its input weights and biases [1,9]. To improve the accuracy of an ensemble, we proposed directly accumulating the output scores of the ELMs into a final decision [4], instead of accumulating their binary votes. The proposed algorithm, denoted Average Score Aggregation (ASA), is described below.…”
Section: Ensemble Of Elms (E-elm) Using Average Score Aggregationmentioning
confidence: 99%
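The score-averaging idea in the quoted passage can be sketched as follows. This is a minimal illustration, not the authors' implementation: each ELM draws random input weights and biases, solves its output weights by least squares, and the ensemble sums the real-valued output scores before taking the argmax (rather than voting). The toy dataset, the `tanh` activation, and all function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, Y, n_hidden, rng):
    """Train one Extreme Learning Machine: random input weights and
    biases, output weights solved in closed form via pseudoinverse."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                     # least-squares output weights
    return W, b, beta

def elm_scores(X, model):
    """Real-valued class scores of a single ELM."""
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem (synthetic, for illustration only)
X = rng.standard_normal((200, 4))
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[labels]                                # one-hot targets

# Ensemble of k ELMs; ASA averages the raw output scores,
# instead of accumulating each ELM's binary vote.
k = 15
models = [train_elm(X, Y, n_hidden=30, rng=rng) for _ in range(k)]
avg_scores = sum(elm_scores(X, m) for m in models) / k
pred = avg_scores.argmax(axis=1)
accuracy = (pred == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Averaging scores retains each base learner's confidence, so a few badly-initialized ELMs are down-weighted naturally instead of casting full votes.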
“…Classification accuracy is measured by varying the number of hidden neurons per ELM for a fixed number of ELMs in the ensemble (k = 15). We used the ASA strategy [4] as the aggregation method. Results are presented in Figure 3.…”
Section: Algorithm 2 E-elm Testing Phasementioning
confidence: 99%
“…By extracting features of the input data from the bottom to the top of the network, deep-learning models can form high-level abstract features suitable for pattern classification [26]. Among the many deep-learning models, the convolutional neural network (CNN) has a relatively small number of weights owing to local connections and weight sharing [27]. Moreover, multidimensional tensor data, for instance hyperspectral images (HSIs), can be fed directly into 3D convolutional neural networks (3D-CNNs), which helps preserve the relevant structure of the original data and avoids complex data reconstruction [28][29][30].…”
Section: Introductionmentioning
confidence: 99%
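The point about feeding a tensor directly into a 3D-CNN can be illustrated with a naive "valid" 3D cross-correlation in NumPy: the spectral-spatial cube is convolved as one volume, with no flattening or reconstruction of the data. The cube dimensions and the averaging kernel below are made up for illustration; a real 3D-CNN would learn its kernels.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D cross-correlation: slide the kernel over all
    three axes of the input cube and take a weighted sum at each step."""
    d, h, w = volume.shape
    kd, kh, kw = kernel.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+kd, j:j+kh, k:k+kw] * kernel)
    return out

# Hypothetical hyperspectral cube: 8 spectral bands of 16x16 pixels
cube = np.random.default_rng(1).standard_normal((8, 16, 16))
kernel = np.ones((3, 3, 3)) / 27          # simple 3D averaging filter
feat = conv3d_valid(cube, kernel)
print(feat.shape)  # (6, 14, 14)
```

Because the kernel spans the spectral axis as well as the two spatial axes, each output value mixes neighboring bands and pixels jointly, which is exactly what lets 3D-CNNs exploit spectral-spatial structure without reshaping the input.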