2022
DOI: 10.48550/arxiv.2202.11748
Preprint

The Need for Interpretable Features: Motivation and Taxonomy

Abstract: Through extensive experience developing and explaining machine learning (ML) applications for real-world domains, we have learned that ML models are only as interpretable as their features. Even simple, highly interpretable model types such as regression models can be difficult or impossible to understand if they use uninterpretable features. Different users, especially those using ML models for decision-making in their domains, may require different levels and types of feature interpretability. Furthermore, b…

Cited by 2 publications (1 citation statement)
References: 12 publications
“…By identifying avenues for model performance improvement, xAI can support research conclusions and guide research advancement. For example, if a network model predicts a heart disease patient's health risk, a clinician would want to understand how strongly the patient's heart rate data influences that prediction [142]. To solve this problem, xAI has been developed to make models transparent.…”
Section: Sequences
Citation type: mentioning
Confidence: 99%