Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/570
Why Can't You Convince Me? Modeling Weaknesses in Unpersuasive Arguments

Abstract: Recent work on argument persuasiveness has focused on determining how persuasive an argument is. Oftentimes, however, it is equally important to understand why an argument is unpersuasive, as it is difficult for an author to make her argument more persuasive unless she first knows what errors made it unpersuasive. Motivated by this practical concern, we (1) annotate a corpus of debate comments with not only their persuasiveness scores but also the errors they contain, (2) propose an approach to persuasiveness …

Cited by 33 publications (34 citation statements); references 7 publications.
“…For n-grams, the relationship to convincingness may be topic-specific, hence they are not identified as important when the model is trained on 31 different topics. The fact that MLII did not substantially shorten the length-scales for n-grams and POS n-grams corresponds to previous results (Persing and Ng, 2017), which found these feature sets less informative than other argument-related feature sets. Table 4 also presents a breakdown of the "other" features into sentiment, ratio, count and NER features.…”
Section: Relevant Feature Determination (supporting)
confidence: 83%
“…ness. Persing and Ng (2017) identify factors that negatively impact persuasiveness, so their factors, unlike ours, cannot explain what makes an argument persuasive. Other argument mining tasks.…”
Section: Related Work (mentioning)
confidence: 87%
“…As a fast growing sub-field of computational argumentation mining [35,41], previous work in this area mostly work on the identification of convincing arguments [13,44] and viewpoints [14,19] from varying argumentation genres, such as social media discussions [37], political debates [4], and student essays [6]. In this line, many existing studies focus on crafting hand-made features [37,44], such as wordings and topic strengths [43,53], echoed words [2], semantic and syntactic rules [15,30], participants' personality [42], argument interactions and structure [29], and so forth. These methods, however, require labor-intensive feature engineering process, and hence have limited generalization abilities to new domains.…”
Section: Related Work 2.1 Argument Persuasiveness (mentioning)
confidence: 99%