Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.477
An Empirical Study on Explanations in Out-of-Domain Settings

Cited by 11 publications (10 citation statements)
References 0 publications
“…Our approach shows consistent cross-corpora performance improvements both independently and in combination with pre-defined tokens. Future work includes applying our method on other cross-domain text classification tasks and exploring how explanation faithfulness can be improved in out-of-domain settings (Chrysostomou and Aletras, 2022).…”
Section: Discussion (mentioning; confidence: 99%)
“…Yao et al (2021) do not use any list but they require human-provided refinement advice as inputs. Chrysostomou and Aletras (2022a) further show that post-hoc explanation methods might not provide faithful explanations in out-of-domain settings. The contemporaneous work by Attanasio et al (2022) and Bose et al (2022) reduce lexical overfitting automatically with entropy-based attentions and feature attributions, respectively.…”
Section: Introduction (mentioning; confidence: 87%)
“…The ante-hoc explanation approach has also been referred to as a pipeline , select-then-predict (Chrysostomou and Aletras, 2022), and explain-then-predict (Camburu et al, 2018) setup. The annotator model can be trained separately from the ante-hoc explainer model (Yessenalina et al, 2010;Jain et al, 2020) or the models can be trained jointly (Lei et al, 2016;Bastings et al, 2019).…”
Section: Figure (mentioning; confidence: 99%)