2022
DOI: 10.21203/rs.3.rs-2129845/v1
Preprint

TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations

Abstract: Practitioners increasingly use machine learning (ML) models, yet these models have become more complex and harder to understand. To address this issue, researchers have proposed techniques to explain model predictions. However, practitioners struggle to use explainability methods because they do not know which to choose or how to interpret the results. We address these challenges by introducing TalkToModel: an interactive dialogue system that enables users to explain ML models through natural language conversations. …
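The abstract outlines the core interaction loop: a user poses a question in plain language, and the system maps it to a model operation such as a feature-importance explanation or a performance query, then replies in natural language. As an illustration only, the minimal Python sketch below shows how such a mapping could look; the keyword routing, the breast-cancer dataset, and the permutation-importance explainer are assumptions made here for brevity and stand in for TalkToModel's language-model parser and its full suite of explanation operations.

# Toy sketch only (not the authors' implementation): route a natural-language
# question to a model operation and answer in plain language.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def answer(question: str) -> str:
    """Map a question to an explanation or performance operation (hypothetical keyword rules)."""
    q = question.lower()
    if "important" in q or "why" in q:
        # Global feature-importance explanation via permutation importance.
        result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
        top = np.argsort(result.importances_mean)[::-1][:3]
        names = [data.feature_names[i] for i in top]
        return "The most influential features are: " + ", ".join(names) + "."
    if "accurate" in q or "performance" in q:
        return f"Held-out accuracy is {model.score(X_test, y_test):.2f}."
    return "I can answer questions about feature importance or model performance."

print(answer("Why does the model make its predictions?"))
print(answer("How accurate is the model?"))

In this sketch, a "why" question returns the three highest-scoring features and a performance question returns held-out accuracy; the actual system supports a far richer grammar of operations.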


Cited by 7 publications (7 citation statements)
References 37 publications
“…These techniques establish connections between the LM's attention patterns and concepts sourced from understandable knowledge repositories. Within this approach, two methods have emerged: (a) Attribution scores and LM tuning (Slack et al. 2023) and (b) Factual Knowledge-based Scoring and LM tuning (Yang, Chen, et al. 2023; Sun et al. 2023). The latter method holds particular significance in health and wellbeing because it provides explainability for clinicians as users.…”
Section: Explainability and User-Level Explainable LLMs (UExMs)
Mentioning, confidence: 99%
“…Within this approach, two methods have emerged: (a) Attribution scores and LM tuning (Slack et al. 2023) and (b) Factual Knowledge-based Scoring and LM tuning (Yang, Chen, et al. 2023; Sun et al.…”
Section: Defining Consistency, Reliability, User-Level Explainability A...
Mentioning, confidence: 99%
“…Current evaluation metrics might not fully capture the subtleties of model robustness, calling for the development of more comprehensive measures [34]-[36]. The impact of model interpretability on robustness has been another focus, with evidence suggesting that more interpretable models can be easier to defend against adversarial attacks [16], [37], [38]. Lastly, the exploration of model robustness across different languages and domains revealed that robustness levels are not uniform, indicating a need for domain-specific strategies to enhance resilience [39].…”
Section: B. Robustness of Large Language Models
Mentioning, confidence: 99%
“…The development and implementation of frameworks for model explainability have been a focal point of research efforts. Such frameworks aim to provide users with understandable explanations for the decisions made by LLMs, often through the generation of natural language explanations or visualizations of the decision path [38, 36, 28, 41, 63, 64, 65]. It was observed that those frameworks significantly improve user trust and satisfaction, particularly in domains where understanding AI decisions is critical [65, 27, 66, 56, 57].…”
Section: Framework for Model Explainability
Mentioning, confidence: 99%