2014
DOI: 10.1016/j.neucom.2013.09.048

Classification in high-dimensional spectral data: Accuracy vs. interpretability vs. model size

Cited by 34 publications (29 citation statements, all of type "mentioning"); references 10 publications. Citing publications range from 2014 to 2023.
“…However, metric learning can be integrated efficiently into the classification scheme, and results from learning theory can be derived by referring to the resulting function class. This has been demonstrated in the context of learning vector quantization (LVQ), where metric learning opened the way towards efficient state-of-the-art results in various areas, including biomedical data analysis, robotic vision, and spectral analysis [4,19,1]. Because of the intuitive definition of models in terms of prototypical representatives, prototype-based methods like LVQ enjoy a wide popularity in application domains, particularly if human inspection and interaction are necessary, or life-long model adaptation is considered [29,20,18].…”
Section: Motivation and Related Work (mentioning)
confidence: 99%
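The statement above refers to learning vector quantization with metric learning integrated into the classification scheme. As a rough illustration only, and not the specific algorithm or code of the cited works, the following minimal sketch implements a GRLVQ-style classifier in Python/NumPy: prototypes are adapted under the GLVQ cost while a diagonal relevance profile over the input dimensions is learned alongside them. The class name, hyperparameters, and toy data are illustrative assumptions.

```python
import numpy as np

class SimpleGRLVQ:
    """Minimal GRLVQ-style sketch: prototype classifier with a learned
    diagonal relevance metric d(x, w) = sum_i rel_i * (x_i - w_i)**2."""

    def __init__(self, n_per_class=1, lr_w=0.05, lr_rel=0.01, epochs=30, seed=0):
        self.n_per_class, self.lr_w, self.lr_rel, self.epochs = n_per_class, lr_w, lr_rel, epochs
        self.rng = np.random.default_rng(seed)

    def _dist(self, x, W):
        # relevance-weighted squared Euclidean distance to every prototype
        return ((x - W) ** 2 * self.rel).sum(axis=1)

    def fit(self, X, y):
        classes = np.unique(y)
        # initialise each class's prototypes as randomly chosen class members
        self.W = np.vstack([X[y == c][self.rng.choice((y == c).sum(), self.n_per_class)]
                            for c in classes]).astype(float)
        self.c = np.repeat(classes, self.n_per_class)
        self.rel = np.full(X.shape[1], 1.0 / X.shape[1])   # uniform relevances
        for _ in range(self.epochs):
            for i in self.rng.permutation(len(X)):
                x, label = X[i], y[i]
                d = self._dist(x, self.W)
                same, other = self.c == label, self.c != label
                j_p = np.flatnonzero(same)[np.argmin(d[same])]    # closest correct prototype
                j_m = np.flatnonzero(other)[np.argmin(d[other])]  # closest wrong prototype
                denom = (d[j_p] + d[j_m]) ** 2 + 1e-12
                g_p, g_m = 2 * d[j_m] / denom, -2 * d[j_p] / denom
                # GLVQ update: attract the correct prototype, repel the wrong one
                self.W[j_p] += self.lr_w * g_p * 2 * self.rel * (x - self.W[j_p])
                self.W[j_m] += self.lr_w * g_m * 2 * self.rel * (x - self.W[j_m])
                # metric learning: adapt the relevance profile, keep it non-negative and normalised
                self.rel -= self.lr_rel * (g_p * (x - self.W[j_p]) ** 2
                                           + g_m * (x - self.W[j_m]) ** 2)
                self.rel = np.clip(self.rel, 0.0, None)
                self.rel /= self.rel.sum() + 1e-12
        return self

    def predict(self, X):
        return np.array([self.c[np.argmin(self._dist(x, self.W))] for x in X])

# toy usage: two Gaussian classes in 20 "spectral" channels
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (40, 20)), rng.normal(0.8, 1.0, (40, 20))])
y = np.array([0] * 40 + [1] * 40)
model = SimpleGRLVQ().fit(X, y)
print("training accuracy:", (model.predict(X) == y).mean())
print("most relevant channels:", np.argsort(model.rel)[-3:])
```

The learned relevance profile is what makes such models attractive for spectral data: large entries mark the wavelength channels that drive the classification, which supports the human inspection mentioned in the quoted passage.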
“…Again, the introduction of any gaps is crucial for class-discrimination, so a minimum of the error surface is expected for settings where both costs λ_{A−} and λ_{B−} become high. Figure 3 shows E_{RGLVQ} for configurations (λ_{A−}, λ_{B−}) in increasing steps of 0.1 over the interval [0,1]. The remaining third parameter in λ is fixed to the final value after training; in this case it is close to the small constant λ_{AB} ≈ ε.…”
Section: RGLVQ Error Function Surface (mentioning)
confidence: 99%
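The grid evaluation described in this statement, cost weights varied in steps of 0.1 over [0, 1] with the third weight held fixed near a small constant, can be reproduced generically. The sketch below only illustrates that scan; it does not reproduce the RGLVQ cost of the citing paper, and `train_and_evaluate` is a hypothetical stand-in for retraining the model and returning its error for one weight configuration.

```python
import numpy as np

def train_and_evaluate(lam_a_minus, lam_b_minus, lam_ab):
    """Hypothetical placeholder: retrain the cost-weighted classifier with the
    given weights and return its error E; replace with the real routine."""
    # dummy quadratic surface so the sketch runs stand-alone
    return (1 - lam_a_minus) ** 2 + (1 - lam_b_minus) ** 2 + lam_ab

lam_ab = 1e-3                                          # third weight fixed, close to a small constant
grid = np.round(np.arange(0.0, 1.0 + 1e-9, 0.1), 1)   # 0.0, 0.1, ..., 1.0
E = np.empty((len(grid), len(grid)))

for i, lam_a in enumerate(grid):
    for j, lam_b in enumerate(grid):
        E[i, j] = train_and_evaluate(lam_a, lam_b, lam_ab)

# location of the minimum of the scanned error surface
i_min, j_min = np.unravel_index(np.argmin(E), E.shape)
print(f"minimum at lambda_A- = {grid[i_min]:.1f}, lambda_B- = {grid[j_min]:.1f}")
```

With the real evaluation routine plugged in, the resulting matrix E is exactly the surface that Figure 3 of the citing paper visualises over the (λ_{A−}, λ_{B−}) grid.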
“…Equation 5 consists of univariate or multivariate analyses; Raman band-based, component analysis score-based, or cluster-based component mapping; clustering and classification procedures; mixture deconvolution; or other ways in which to selectively emphasize or quantify the information of interest [35–38]. We next define preprocessing and data analysis as stages while, within each stage, steps denote those particular processes required to achieve specific objectives (more details below). Eqs.…”
Section: Theory (mentioning)
confidence: 99%
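The distinction drawn in this statement between stages (preprocessing, data analysis) and the ordered steps inside each stage maps naturally onto a small pipeline abstraction. The sketch below is a generic illustration of that organisation, not the workflow of the citing paper; the stage and step names are invented for the example.

```python
import numpy as np

# A stage is a named, ordered list of steps; each step is a function that
# takes a spectra matrix (samples x wavelength channels) and returns a new one.
pipeline = {
    "preprocessing": [
        ("baseline removal", lambda S: S - S.min(axis=1, keepdims=True)),
        ("vector normalisation", lambda S: S / np.linalg.norm(S, axis=1, keepdims=True)),
    ],
    "data analysis": [
        ("mean spectrum per run", lambda S: S.mean(axis=0, keepdims=True)),
    ],
}

def run_pipeline(spectra, pipeline):
    """Apply every step of every stage in order, reporting progress."""
    data = spectra
    for stage, steps in pipeline.items():
        for name, step in steps:
            data = step(data)
            print(f"[{stage}] {name}: output shape {data.shape}")
    return data

# toy input: 5 spectra with 100 wavelength channels each
result = run_pipeline(np.random.rand(5, 100), pipeline)
```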
“…These issues are not only pure classification accuracy and false positive/negative rates but also comprise facets like interpretability and class representation, model size, classification guarantees, and others [3,45,44]. In the following we will touch on some of these aspects without any claim of completeness.…”
Section: Attempts to Improve the Classifier Performance (mentioning)
confidence: 99%
“…Thus, classifier models have to be designed to handle different classification criteria. Besides these objectives, other criteria might also be of interest, such as classifier model complexity, the interpretability of the results, or the suitability for real-time applications [3].…”
Section: Introduction (mentioning)
confidence: 99%