Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics
DOI: 10.18653/v1/2021.starsem-1.25
Spurious Correlations in Cross-Topic Argument Mining

Abstract: Recent work in cross-topic argument mining attempts to learn models that generalise across topics rather than merely relying on within-topic spurious correlations. We examine the effectiveness of this approach by analysing the output of single-task and multi-task models for cross-topic argument mining through a combination of linear approximations of their decision boundaries, manual feature grouping, challenge examples, and ablations across the input vocabulary. Surprisingly, we show that cross-topic models st…

Cited by 3 publications (3 citation statements) · References 26 publications
“…As evidenced by several studies that involved manual annotation of texts (Stab and Gurevych, 2014; Kirschner et al., 2015; Habernal and Gurevych, 2017), there is very often disagreement between annotators on the arguments, components of arguments, or argument relations conveyed by a text, which in most cases is due to the ambiguity of human language. As shown in Thorn Jakobsen et al. (2022), it may also be due to the different backgrounds and demographic characteristics of the annotators. Manual annotation may therefore introduce social bias into the data used to train data-driven argument mining methods and, as a result, into the methods themselves.…”
Section: Extraction and Annotation (mentioning, confidence: 99%)
“…Cross-domain classification has been investigated in NLP tasks such as sentiment analysis (Al-Moslmi et al., 2017; Qu et al., 2019; Du et al., 2020), fake news detection (Fung et al., 2021; Silva et al., 2021; Yuan et al., 2021), and argument mining (Al-Khatib et al., 2016; Daxenberger et al., 2017; Thorn Jakobsen et al., 2021). These tasks are similar to value classification in that they aim to classify high-level constructs (such as sentiments and arguments).…”
Section: Cross-Domain NLP Classification (mentioning, confidence: 99%)
“…However, applying AM methods for feedback analysis poses three main challenges. First, AM methods generalize poorly across domains [5, 6]. Thus, they require large amounts of domain-specific training data, which is often not available.…”
Section: Introduction (mentioning, confidence: 99%)