2004
DOI: 10.1016/j.future.2003.11.011

Assessment of the effectiveness of support vector machines for hyperspectral data

Cited by 191 publications (108 citation statements)
References 26 publications
“…Non-parametric classifiers are used to estimate the probability density function when it is unknown (Kumar and Sahoo 2012) such as support vector machines (SVM) and artificial neural networks (ANN). Statistical classifiers depend on some predefined data model and the performance of these classifiers depends on how well the data match the predefined model (Pal and Mather 2004). Adam et al (2014) confirmed the performance of machine-learning random forest (RF) and SVM classifiers to map heterogeneous land in South Africa using RapidEye high resolution imagery.…”
Section: Introduction (mentioning)
confidence: 81%
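A minimal sketch of the distinction drawn in this citation statement, assuming scikit-learn and synthetic data (neither is used in the cited studies): a parametric classifier fits a predefined Gaussian model to each class, while a non-parametric SVM learns the decision boundary directly from the training samples.

```python
# Illustrative sketch only: data, class count, and model settings are assumptions,
# not drawn from Pal and Mather (2004) or the citing papers.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis  # Gaussian model per class
from sklearn.svm import SVC

# Synthetic multi-class data standing in for a remote-sensing feature space.
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Parametric: accuracy depends on how well the data match the Gaussian assumption.
parametric = QuadraticDiscriminantAnalysis().fit(X_train, y_train)

# Non-parametric: no assumed class distribution; the boundary is learned from samples.
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

print("Gaussian (parametric) accuracy:", parametric.score(X_test, y_test))
print("SVM (non-parametric) accuracy:", svm.score(X_test, y_test))
```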
“…Figure 2 illustrates a simple scenario of a two-class separable classification problem in a two-dimensional input space. SVMs aim to determine the optimal separating hyperplane (OSH) among all the possible hyperplanes (Srivastava et al 2012), and this is done through an optimization problem using Lagrange multipliers and quadratic programming methods (Pal and Mather 2004).…”
Section: Support Vector Machines (mentioning)
confidence: 99%
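A minimal sketch of the two-class separable scenario described in this statement, assuming scikit-learn and synthetic blob data (both illustrative assumptions, not part of the cited work): fitting a linear SVM solves the underlying quadratic programme and exposes the optimal separating hyperplane w·x + b = 0.

```python
# Illustrative sketch only; the data and parameters are assumptions.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated clusters in a two-dimensional input space.
X, y = make_blobs(n_samples=100, centers=2, cluster_std=0.6, random_state=0)

clf = SVC(kernel="linear", C=1e3)   # large C approximates the hard-margin case
clf.fit(X, y)

# The fitted hyperplane w.x + b = 0, found by the quadratic-programming solver.
w, b = clf.coef_[0], clf.intercept_[0]
print("Hyperplane normal w:", w)
print("Offset b:", b)

# The margin between the two supporting hyperplanes is 2 / ||w||.
print("Margin width:", 2.0 / np.linalg.norm(w))
```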
“…Parametric methods are based on the assumption that the data of each class are normally distributed [7,49,67]; however, non-parametric techniques do not assume specific data class distributions [11,41]. Supervised classification is one of the most commonly used techniques for the classification of remotely sensed data.…”
Section: Classification Methods (mentioning)
confidence: 99%
“…In locating the support vectors, SVMs tend to use only a subset of the training data and so they are particularly advocated for use with high-dimensional data sets, primarily because it is believed that the decision making is not constrained by the Hughes effect [16,56,57]. Although others dispute this somewhat [58], use of the SVM as a classifier should benefit complex classification problems (e.g., where fine spatial resolution imagery is used to map detailed classification schema in heterogeneous environments) and can perform better than ML classification for urban environments using VHR imagery [59,60]. However, as a pixel-based approach, it may still suffer from within-feature variation leading to some degree of misclassification [1,23].…”
Section: Pixel-based Classification (mentioning)
confidence: 99%
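A minimal sketch of the point that an SVM's decision function depends only on the support vectors, assuming scikit-learn and synthetic high-dimensional data (both illustrative assumptions, loosely mimicking the dimensionality of hyperspectral imagery): after training, only a subset of the training samples is retained.

```python
# Illustrative sketch only; sample counts and dimensionality are assumptions.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Many features relative to the training-set size, as in hyperspectral data.
X, y = make_classification(n_samples=300, n_features=200, n_informative=20,
                           random_state=0)

clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, y)

# Only the support vectors enter the decision function; the remaining training
# samples could be discarded without changing the fitted classifier.
print("Training samples:", X.shape[0])
print("Support vectors per class:", clf.n_support_)
print("Total support vectors:", clf.support_vectors_.shape[0])
```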