2009
DOI: 10.1016/j.chemolab.2008.07.010
Comparison of performance of five common classifiers represented as boundary methods: Euclidean Distance to Centroids, Linear Discriminant Analysis, Quadratic Discriminant Analysis, Learning Vector Quantization and Support Vector Machines, as dependent on data structure

Cited by 185 publications (114 citation statements)
References 24 publications
“…The main differences between the LDA and QDA are in the calculation of the classification score, in which QDA presents a more sophisticated approach accounting for different variance structures in the classes being analyzed. The LDA technique assumes that the classes have similar variance matrices, whereas QDA forms a separate variance model for each class (51). LDA and QDA classification scores are calculated as follows:…”
Section: Methods (mentioning); confidence: 99%
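The distinction the quote draws can be illustrated with a minimal scikit-learn sketch (synthetic data chosen for illustration, not taken from the cited paper): LDA pools one covariance matrix across classes, while QDA estimates a separate covariance matrix per class.

```python
# Illustrative sketch (not the cited paper's code): LDA uses a pooled
# covariance matrix, QDA fits one covariance matrix per class.
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

rng = np.random.default_rng(0)
# Class 0: roughly spherical covariance; class 1: elongated, correlated covariance.
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.0], [0.0, 1.0]], size=200)
X1 = rng.multivariate_normal([3, 3], [[4.0, 1.0], [1.0, 0.5]], size=200)
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

lda = LinearDiscriminantAnalysis().fit(X, y)     # shared (pooled) covariance
qda = QuadraticDiscriminantAnalysis().fit(X, y)  # separate covariance per class

print("LDA accuracy:", lda.score(X, y))
print("QDA accuracy:", qda.score(X, y))
```

When the two classes really do have different covariance structures, as here, QDA's per-class model can track the data more closely, at the cost of estimating more parameters.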
“…classes can be separated by lines in 2 dimensions, by planes in 3 dimensions and by hyperplanes in higher dimensional spaces). Potential drawbacks to LDA include weak performance when groups are strongly nested and a tendency towards overfitting (Dixon & Brereton 2009).…”
Section: Data, Fish Collection (mentioning); confidence: 99%
“…In this study, the bootstrap [18–22] using 200 repetitions was used for determining the optimum number of PCs. Note that when performing the bootstrap, the bootstrap training set (which contains repetitions) is standardised, rather than the entire training or autopredictive dataset.…”
Section: Bootstrap (mentioning); confidence: 99%
“…Whereas most methods for MSPC should be implementable online, procedures such as the bootstrap [18–22] and repeated division into training and test sets [11,20–22] can usually be performed within seconds or minutes, making computationally intensive approaches for online monitoring feasible for real-time implementations.…”
(mentioning); confidence: 99%