2023
DOI: 10.1016/j.neucom.2023.02.040
The coming of age of interpretable and explainable machine learning models

Cited by 57 publications (19 citation statements) · References 95 publications
“…This is key to enable the practical implementation of the models. 34 We can partially compare our results with those reported by Vilamala et al and Ortega-Martorell et al, 19,21 as the same INTERPRET database was analyzed. In Vilamala et al, 19 the authors evaluated D-C-NMF on the same INTERPRET dataset for the following binary classification tasks: ME versus A2, GL versus A2, GL versus ME, A2 versus NO, ME versus NO, and GL versus NO, at both short and long echo times.…”
Section: Problem 6 (A2 vs AGG vs MM): Can We Distinguish Between Grad... (supporting)
confidence: 63%
“…Second, because LDA results are straightforwardly interpretable, which should help the radiologist provide explanations for the diagnostic decisions made by the models. This is key to enable the practical implementation of the models. 34 …”
Section: Discussion (mentioning)
confidence: 99%
“…, C} such that a data sample x is classified according to c(x) = c_{w_{s(x)}} using the WTA rule [46]. Unsupervised learning of the prototypes follows several schemes: the most prominent is standard k-means or its improved variants like k-means++ or neural gas [47,48,49], which use stochastic gradient descent learning (SGDL) or expectation-maximization (EM) optimization. Prototype-based classification learning is based on the famous learning vector quantization (LVQ) approaches originally suggested by Kohonen [50].…”
Section: Standard Vector Quantization (mentioning)
confidence: 99%
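The quoted passage describes winner-takes-all (WTA) nearest-prototype classification: a sample receives the class label of its closest prototype. A minimal NumPy sketch under illustrative assumptions — the prototype values and the name `wta_classify` are hypothetical, not from the cited work, and in practice the prototypes would be learned per class (e.g. by k-means):

```python
import numpy as np

def wta_classify(x, prototypes, labels):
    """Winner-takes-all rule: assign x the class label of its
    nearest prototype under the squared Euclidean distance."""
    dists = np.sum((prototypes - x) ** 2, axis=1)  # distance to each prototype
    return labels[np.argmin(dists)]               # label of the winner

# Hypothetical toy prototypes (two per class), e.g. as produced by
# running k-means separately on the samples of each class.
prototypes = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [6.0, 5.0]])
labels = np.array([0, 0, 1, 1])

print(wta_classify(np.array([0.4, 0.1]), prototypes, labels))  # prints 0
```

The decision is interpretable by construction: every classification can be explained by pointing at the winning prototype, a point living in the same space as the data.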
“…Today it is based on strong mathematical foundations known as generalized LVQ (GLVQ), usually trained by SGDL [51]. NPC guarantees interpretability of vector quantizers for both unsupervised and supervised learning [46]. Further, if the squared Euclidean distance is used for training together with SGDL, prototype adaptations are simple vector shifts in the data space.…”
Section: Standard Vector Quantization (mentioning)
confidence: 99%
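The passage notes that, with the squared Euclidean distance and SGDL, prototype adaptations reduce to simple vector shifts in the data space. A minimal sketch of that shift using the classical LVQ1 update (the Kohonen precursor of GLVQ; GLVQ replaces this heuristic with a differentiable cost function, which this sketch does not implement — function and variable names are illustrative):

```python
import numpy as np

def lvq1_step(x, y, prototypes, labels, lr=0.1):
    """One SGD step of classical LVQ1: shift the winning prototype
    toward x if its label matches y, and away from x otherwise.
    The update is a plain vector shift in the data space."""
    k = np.argmin(np.sum((prototypes - x) ** 2, axis=1))  # WTA winner
    sign = 1.0 if labels[k] == y else -1.0
    prototypes[k] += sign * lr * (x - prototypes[k])      # vector shift
    return prototypes

prototypes = np.array([[0.0, 0.0], [4.0, 4.0]])
labels = np.array([0, 1])
lvq1_step(np.array([1.0, 0.0]), 0, prototypes, labels, lr=0.5)
print(prototypes[0])  # prototype 0 moved halfway toward the matching sample
```

Because each update only translates a prototype within the data space, the trained model remains directly inspectable: the prototypes can be plotted or examined as if they were (idealized) data points.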
“…Further, the popularity of vector quantization methods arises from their intuitive problem understanding and the resulting interpretable model behavior [8,10,18,19], which is frequently demanded for the acceptance of machine learning methods in technical or biomedical applications [20,21,22]. Although these methods are lightweight in complexity compared to deep networks, they frequently achieve sufficient performance.…”
Section: Introduction (mentioning)
confidence: 99%