2012
DOI: 10.1109/jstars.2012.2190266
Classification in High-Dimensional Feature Spaces—Assessment Using SVM, IVM and RVM With Focus on Simulated EnMAP Data

Cited by 54 publications (33 citation statements)
References 14 publications
“…Therefore the implemented version of IVM utilised here is a hybrid forward/backward strategy that adds import vectors whilst concurrently testing if they can be removed in each step, thus leading to a sparse and more accurate solution [41]. Furthermore, the IVM selects data points from the entire distribution resulting in a smoother decision boundary which is based on the optimal separating hyperplane in multidimensional space compared to that of SVM algorithms [42]. The benefits of the IVM algorithm have resulted in this approach being successfully applied in a number of studies (e.g., [42][43][44][45]) due to its accuracy and performance advantages over alternative methodologies including SVM and the traditional Maximum Likelihood (ML) classifiers [44,45].…”
Section: Data Classification (mentioning; confidence: 99%)
“…IVM generates two outputs: a soft (subpixel) data set that defines the probability of a pixel containing a given classification value (e.g. land-cover type) and a traditional 'hardened' classified data set (Braun, Weidner, and Hinz 2012). Training samples were collected from the 12 and 19 July 2005 Landsat 5 TM image composite, coinciding with peak vegetation greenness which provides the greatest spectral separability between vegetated and non-vegetated surfaces (Feyisa et al 2016;Chen et al 2014).…”
Section: Landsat Classification (mentioning; confidence: 99%)
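The excerpt above describes IVM producing both a soft (per-pixel class probability) output and a "hardened" classified map. A minimal sketch of that hardening step, with illustrative probabilities that are not taken from the study:

```python
import numpy as np

# Hypothetical soft output: one row per pixel, one column per land-cover class,
# each row holding the probability of the pixel belonging to that class.
soft = np.array([
    [0.7, 0.2, 0.1],   # pixel 0: most likely class 0
    [0.1, 0.3, 0.6],   # pixel 1: most likely class 2
])

# "Hardening": assign each pixel the class with the highest probability.
hard = soft.argmax(axis=1)
print(hard)  # -> [0 2]
```

The soft layer retains subpixel information (useful for mixed pixels), while the hardened layer is the conventional single-label map most accuracy assessments expect.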
“…IVM has been found to consistently outperform decision trees, artificial neural networks, and maximum likelihood algorithms (Watanachaturaporn, Arora, and Varshney 2008;Kotsiantis, Zaharakis, and Pintelas 2006;Huang, Davis, and Townshend 2002), with preferential (Braun, Weidner, and Hinz 2012) and comparable results to SVM (Roscher, Waske, and Forstner 2010). However, due to the heterogeneity of urban areas, it is important to calibrate these subpixel approaches against high spatial resolution data that capture the diverse characteristics found within urban environments (Lu, Moran, and Hetrick 2011).…”
Section: Introduction (mentioning; confidence: 99%)
“…In addition, the significance of differences among these classification algorithms was estimated using the McNemar's significance testing. The McNemar's test based on a chi-square statistics was done using the following definition [52,53]:…”
Section: Accuracy Assessment and Error and Uncertainty Analysis (mentioning; confidence: 99%)
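The excerpt cites McNemar's test without reproducing its definition. In its standard chi-square form (which may differ in detail from the cited references [52,53]), the statistic depends only on the two discordant counts: pixels correct under one classifier but wrong under the other. A minimal sketch:

```python
def mcnemar_chi2(f12: int, f21: int) -> float:
    """McNemar's chi-square statistic from the discordant counts.

    f12: samples classifier 1 got right and classifier 2 got wrong.
    f21: samples classifier 2 got right and classifier 1 got wrong.
    """
    if f12 + f21 == 0:
        return 0.0  # no disagreement, no evidence of a difference
    return (f12 - f21) ** 2 / (f12 + f21)

# With 1 degree of freedom, a statistic above 3.84 indicates a
# significant difference between the classifiers at the 5% level.
print(round(mcnemar_chi2(25, 10), 3))  # -> 6.429
```

Because both classifiers are evaluated on the same test samples, McNemar's test accounts for the pairing, which an independent-samples comparison of overall accuracies would ignore.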