Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.541
Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification

Abstract: Corporate mergers and acquisitions (M&A) account for billions of dollars of investment globally every year, and offer an interesting and challenging domain for artificial intelligence. However, in these highly sensitive domains, it is crucial to not only have a highly robust and accurate model, but be able to generate useful explanations to garner a user's trust in the automated system. Regrettably, the recent research regarding eXplainable AI (XAI) in financial text classification has received little to no at…

Cited by 37 publications (24 citation statements)
References 20 publications
“…Russell et al. [47] propose a mixed-integer-programming-based approach to address the difficulty of generating sensible explanations for categorical features. In addition to generic models, counterfactual explanations have also been designed for applications such as Computer Vision [9,20,27], Natural Language Processing [61,64], and Graph Neural Networks [65,66].…”
Section: Counterfactual Machine Learning (mentioning; confidence: 99%)
“…Jacovi and Goldberg (2020) define contrastive highlights, which are determined by the inclusion of contrastive features; in contrast, our contrastive edits specify how to edit (vs. whether to include) features and can insert new text. Li et al. (2020a) generate counterfactuals using linguistically-informed transformations (LIT), and Yang et al. (2020) generate counterfactuals for binary financial text classification using grammatically plausible single-word edits (REP-SCD). Because both methods rely on manually curated, task-specific rules, they cannot be easily extended to tasks without predefined label spaces, such as RACE.…”
Section: Related Work (mentioning; confidence: 99%)
“…This has been explored more in the field of CV (Kenny and Keane, 2021), but investigated less in NLP. Recent work (Jacovi and Goldberg, 2020) highlights explanations of a given causal format, and Yang et al. (2020a) generate counterfactuals to explain the predictions of financial text classification. We propose a similar but distinct research question: whether automatically generated counterfactuals can be used for data augmentation to build more robust models, which has not been considered by previous methods in XAI (Pedreschi et al., 2019; Slack et al., 2020b; Yang et al., 2020b; Ding et al., 2020).…”
Section: Related Work (mentioning; confidence: 99%)