Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-demo.6
Contrastive Explanations of Text Classifiers as a Service

Abstract: The recent growth of black-box machine-learning methods in data analysis has increased the demand for explanation methods and tools to understand their behaviour and assist human-ML model cooperation. In this paper, we demonstrate ContrXT, a novel approach that uses natural language explanations to help users comprehend how a black-box model works. ContrXT provides time-contrastive (t-contrast) explanations by computing the differences in the classification logic of two different trained models and then reaso…
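The abstract's core idea (comparing the classification logic of two trained models over time) can be illustrated with a minimal sketch. This is not the actual ContrXT API; it assumes each model's logic has already been approximated as a mapping from class labels to the indicator features that model uses, and the hypothetical `t_contrast` helper simply diffs the two mappings:

```python
# Illustrative sketch only (not the real ContrXT implementation): a
# time-contrastive comparison of two models' classification logic, where
# each model's logic is approximated by a {class: set(features)} mapping.

def t_contrast(logic_t1, logic_t2):
    """Diff two {class: set(features)} mappings into added/removed logic."""
    diff = {}
    for label in set(logic_t1) | set(logic_t2):
        before = logic_t1.get(label, set())
        after = logic_t2.get(label, set())
        diff[label] = {
            "added": sorted(after - before),    # logic the newer model gained
            "removed": sorted(before - after),  # logic the newer model lost
        }
    return diff

# Hypothetical surrogate logic extracted from models trained at t1 and t2.
model_t1 = {"spam": {"free", "winner"}, "ham": {"meeting"}}
model_t2 = {"spam": {"free", "crypto"}, "ham": {"meeting", "agenda"}}

print(t_contrast(model_t1, model_t2))
```

A natural-language layer (as the paper describes) would then verbalise each diff entry, e.g. "the new model now classifies as spam based on 'crypto' and no longer based on 'winner'".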

Cited by 2 publications (2 citation statements) · References 13 publications
“…Other related approaches enable global (rather than local) explainability (Malandri et al, 2022), or explanation interfaces for non-transformers models on non-NLP tasks (Agarwal et al, 2022). Other approaches study model behavior at the subgroup level (Wang et al, 2021;Goel et al, 2021;Pastor et al, 2021a,b), focusing more on model evaluation and robustness rather than its interpretation.…”
Section: Related Work
confidence: 99%
“…Other related approaches enable global (rather than local) explainability (Malandri et al, 2022), or explanation interfaces for non-transformers models on non-NLP tasks (Agarwal et al, 2022). Other approaches study model behavior at the subgroup level (Wang et al, 2021;Goel et al, 2021;Pastor et al, 2021a,b), focusing more on model evaluation and robustness rather than its interpretation.…”
Section: Introduction
confidence: 99%