2020
DOI: 10.48550/arxiv.2005.14501
Preprint

A Performance-Explainability Framework to Benchmark Machine Learning Methods: Application to Multivariate Time Series Classifiers

Abstract: Our research aims to propose a new performance-explainability analytical framework to assess and benchmark machine learning methods. The framework details a set of characteristics that operationalize the performance-explainability assessment of existing machine learning methods. In order to illustrate the use of the framework, we apply it to benchmark the current state-of-the-art multivariate time series classifiers.¹

¹ https://ec.europa.eu/info/law/law-topic/data-protection_en

Cited by 4 publications (4 citation statements)
References 20 publications

Citation statements:
“…This mental model can be evaluated on criteria such as correctness, comprehensiveness, coherence, and usefulness. Fauvel et al [69] present a framework that assesses and benchmarks machine learning methods on both performance and explainability. Performance is measured compared to the state-of-the-art, best, similar, or below.…”
Section: How To Measure Explainability? (mentioning, confidence: 99%)
“…Fauvel et al. [45] present a framework that assesses and benchmarks machine learning methods on both performance and explainability. For measuring the explainability, they look at model comprehensibility, explanation granularity, information type, faithfulness and user category.…”
Section: How To Measure Explainability? (mentioning, confidence: 99%)
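
Taken together, the two statements above name the framework's two assessment axes: performance rated relative to the state of the art (best, similar, or below) and explainability described by five characteristics (model comprehensibility, explanation granularity, information type, faithfulness, and user category). The minimal Python sketch below encodes those axes as a data structure; the attribute names and value scales are paraphrased from the citation statements, not the paper's exact schema, so treat them as assumptions.

from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the framework's two assessment axes,
# paraphrased from the citation statements above; the paper's exact
# schema may differ.

class Performance(Enum):
    # Performance relative to the state of the art.
    BEST = "best"
    SIMILAR = "similar"
    BELOW = "below"

@dataclass
class ExplainabilityProfile:
    # The five explainability characteristics named by the citing paper.
    model_comprehensibility: str  # e.g. "black-box" vs. "white-box"
    explanation_granularity: str  # e.g. "global" vs. "local" explanations
    information_type: str         # e.g. "features", "rules"
    faithfulness: str             # fidelity of explanations to the model
    user_category: str            # intended audience, e.g. "domain expert"

@dataclass
class MethodAssessment:
    # One benchmark row: a method scored on both axes.
    method_name: str
    performance: Performance
    explainability: ExplainabilityProfile

# Usage sketch with a hypothetical multivariate time series classifier.
assessment = MethodAssessment(
    method_name="ExampleMTSClassifier",
    performance=Performance.SIMILAR,
    explainability=ExplainabilityProfile(
        model_comprehensibility="black-box",
        explanation_granularity="local",
        information_type="features",
        faithfulness="imperfect",
        user_category="domain expert",
    ),
)
print(assessment.performance.value)  # prints "similar"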
“…Fauvel et al [121] position their framework as a significant advancement in the fourth step of the method described by Hall et al [122] regarding explainability. Chakrobartty et al [123] further extended this method by introducing two additional evaluation characteristics: fairness context and fairness.…”
Section: XAI for Bias Evaluation (mentioning, confidence: 99%)