Proceedings of the Third Workshop on Argument Mining (ArgMining2016), 2016
DOI: 10.18653/v1/w16-2820

A Preliminary Study of Disputation Behavior in Online Debating Forum

Abstract: In this paper, we propose a task for the quality evaluation of disputing arguments. To understand disputation behavior, we propose three sub-tasks: detecting the disagreement hierarchy, the refutation method, and the argumentation strategy, respectively. We first manually labeled a real dataset collected from an online debating forum. The dataset includes 45 disputing argument pairs. The annotation scheme was developed by three NLP researchers through annotating all the argument pairs in the dataset. Two undergraduate s…
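To make the annotation scheme described in the abstract concrete, the following is a minimal Python sketch of how one labeled disputing argument pair (one of the 45 in the dataset) might be represented, with one label per sub-task. The field names and the specific label values are illustrative assumptions, not taken from the paper.

    from dataclasses import dataclass

    # Hypothetical label inventories: the paper defines three sub-tasks
    # (disagreement hierarchy, refutation method, argumentation strategy),
    # but the concrete label names below are illustrative assumptions.
    DISAGREEMENT_LEVELS = ["contradiction", "counterargument", "refutation"]
    REFUTATION_METHODS = ["direct_rebuttal", "undercutting", "questioning"]
    ARGUMENT_STRATEGIES = ["evidence", "reasoning", "emotional_appeal"]

    @dataclass
    class DisputingArgumentPair:
        """One annotated argument pair: an original argument, the
        disputing reply, and one label for each of the three sub-tasks."""
        original_argument: str
        disputing_argument: str
        disagreement_level: str       # sub-task 1: disagreement hierarchy
        refutation_method: str        # sub-task 2: refutation method
        argumentation_strategy: str   # sub-task 3: argumentation strategy

    # Example with placeholder text:
    pair = DisputingArgumentPair(
        original_argument="Online debates improve critical thinking.",
        disputing_argument="Most threads devolve into personal attacks, not reasoning.",
        disagreement_level="counterargument",
        refutation_method="direct_rebuttal",
        argumentation_strategy="evidence",
    )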

Cited by 6 publications (3 citation statements) · References 6 publications
“…Wang and Cardie (2014) create a dispute corpus from Wikipedia and use sentiment analysis to predict the dispute label of arguments. Wei et al. (2016) collect a dataset from CMV and analyze the correlation between disputing quality and disputation behaviors. … analyze the disputation action in the online debate.…”
Section: Dialogical Argumentation (mentioning, confidence: 99%)
“…Tan et al. (2016), Wei et al. (2016a), and Wang et al. (2017) make use of linguistic features to evaluate the persuasiveness of arguments in online forums. Wei et al. (2016b) and Ji et al. (2018) consider features from the perspective of argumentation interaction between participants. Persing and Ng (2017) construct their model based on error types for argumentation.…”
Section: Related Work (mentioning, confidence: 99%)
“…Empirical machine learning approaches instead train models using example judgments of arguments, such as those shown in Figure 1. Previous approaches to obtaining such judgments include training annotators to assign scores from 1 to 6 (Persing and Ng, 2017), asking annotators for simple binary or three-class categories (Wei et al., 2016b), and aggregating binary votes from multiple people (Wei et al., 2016a; Tan et al., 2016). However, these approaches are limited by the cost of training annotators, a highly restricted set of categories, or the need for multiple annotators per document.…”
Section: Introduction (mentioning, confidence: 99%)
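As a minimal sketch of the vote-aggregation idea mentioned in the excerpt above (collapsing multiple annotators' binary votes into one label), simple majority voting could look like the following; the function name and the tie-breaking rule are assumptions for illustration, not details from the cited papers.

    from collections import Counter
    from typing import List

    def aggregate_binary_votes(votes: List[int]) -> int:
        """Collapse per-annotator binary votes (0 = not persuasive,
        1 = persuasive) into a single label by simple majority.
        Ties fall back to the negative class here; this tie-breaking
        rule is an illustrative assumption."""
        counts = Counter(votes)
        return 1 if counts[1] > counts[0] else 0

    # Example: three annotators, two of whom vote "persuasive".
    print(aggregate_binary_votes([1, 0, 1]))  # -> 1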