Proceedings of the 5th Workshop on Argument Mining 2018
DOI: 10.18653/v1/w18-5202

End-to-End Argument Mining for Discussion Threads Based on Parallel Constrained Pointer Architecture

Abstract: Argument Mining (AM) is a relatively recent discipline, which concentrates on extracting claims or premises from discourses, and inferring their structures. However, many existing works do not sufficiently consider micro-level AM studies on discussion threads. In this paper, we tackle AM for discussion threads. Our main contributions are as follows: (1) A novel combination scheme focusing on micro-level inner- and inter-post schemes for a discussion thread. (2) Annotation of large-scale civic discussion threads wi…

Cited by 20 publications (20 citation statements). References 29 publications (50 reference statements).
“…Meanwhile, User D disagrees with the reasoning that the past is always a good predictor of current events. We obtain moderate agreement for relation annotations, similar to other argumentative tasks (Morio and Fujita, 2018). The Inter-Annotator Agreement (IAA) with Krippendorff's α is 0.61 for relation presence and 0.63 for relation types.…”
Section: Labeled Persuasive Forum Data (supporting)
confidence: 71%
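For readers unfamiliar with the metric, agreement figures like those quoted above can be computed with the open-source `krippendorff` Python package. The toy annotation matrix below is purely illustrative and is not the dataset from the cited work.

```python
# Illustrative only: Krippendorff's alpha for two annotators' nominal labels,
# using the `krippendorff` PyPI package. The data below is made up.
import krippendorff
import numpy as np

# Rows = annotators, columns = candidate relation slots.
# 1 = "relation present", 0 = "relation absent", np.nan = not annotated.
reliability_data = np.array([
    [1, 0, 1, 1, 0, 1, np.nan],
    [1, 0, 0, 1, 0, 1, 1],
])

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha (relation presence): {alpha:.2f}")
```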
“…Intra-turn Argumentative Relations As in previous work (Morio and Fujita, 2018), we restrict intra-turn relations to be between a premise and another claim or premise, where the premise either supports or attacks the claim or other premise. Evidence in the form of a premise is either support or attack.…”
Section: Labeled Persuasive Forum Data (mentioning)
confidence: 99%
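A minimal sketch of how such an annotation constraint might be encoded programmatically follows; the type names and helper function are hypothetical and are not taken from either paper.

```python
# Hypothetical sketch: intra-turn relations must have a premise as the source,
# a claim or premise as the target, and a "support" or "attack" label.
from dataclasses import dataclass

VALID_SOURCE = {"premise"}
VALID_TARGET = {"claim", "premise"}
VALID_LABEL = {"support", "attack"}

@dataclass
class Relation:
    source_type: str   # component type of the source proposition
    target_type: str   # component type of the target proposition
    label: str         # "support" or "attack"

def is_valid_intra_turn(rel: Relation) -> bool:
    """Return True iff the relation satisfies the annotation constraint."""
    return (rel.source_type in VALID_SOURCE
            and rel.target_type in VALID_TARGET
            and rel.label in VALID_LABEL)

assert is_valid_intra_turn(Relation("premise", "claim", "support"))
assert not is_valid_intra_turn(Relation("claim", "premise", "support"))
```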
See 1 more Smart Citation
“…To do that, they employed additive attention. A similar approach has been applied by Morio and Fujita (2018) for a three-label classification task (claim, premise or non-argumentative).…”
Section: Related Work (mentioning)
confidence: 99%
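For context, additive (Bahdanau-style) attention scores each encoder state with a small feed-forward network before pooling. The sketch below shows one plausible way to wire it to a three-way claim / premise / non-argumentative classifier; all layer sizes and module names are assumptions, not the cited architecture.

```python
# Sketch of additive attention feeding a three-way classifier
# (claim / premise / non-argumentative). Dimensions are illustrative.
import torch
import torch.nn as nn

class AdditiveAttentionClassifier(nn.Module):
    def __init__(self, hidden_dim: int = 256, num_labels: int = 3):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)       # W_a
        self.score = nn.Linear(hidden_dim, 1, bias=False)   # v_a
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (batch, seq_len, hidden_dim) token encodings
        e = self.score(torch.tanh(self.proj(states)))   # (batch, seq_len, 1)
        a = torch.softmax(e, dim=1)                     # attention weights
        context = (a * states).sum(dim=1)               # weighted sum of states
        return self.classifier(context)                 # logits over 3 labels

logits = AdditiveAttentionClassifier()(torch.randn(2, 10, 256))
print(logits.shape)  # torch.Size([2, 3])
```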
“…A potential explanation is the fact that we use the attention mechanism as an additional layer to encode the input. Other approaches, like Morio and Fujita (2018) or Stab et al. (2018), incorporate it into the Bi-LSTM architecture and calculate the weight of the hidden states at every time step. While the performance does not decrease meaningfully for the baseline +input and bilstm +input models (using the GloVe embeddings as features), it does for the error encoding baseline +error model.…”
Section: Attention Layers (mentioning)
confidence: 99%
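The alternative design described in the statement above, i.e. computing attention weights over the Bi-LSTM hidden states at every time step rather than attending over the raw input, could look roughly like the following sketch; dimensions and names are assumptions for illustration, not the implementation of the cited papers.

```python
# Sketch: Bi-LSTM encoder whose hidden states are weighted by attention
# at every time step before classification. Sizes are illustrative.
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, emb_dim: int = 100, hidden: int = 128, num_labels: int = 3):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # one score per time step
        self.out = nn.Linear(2 * hidden, num_labels)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, emb_dim), e.g. GloVe vectors
        h, _ = self.bilstm(embeddings)                # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # weight for each hidden state
        pooled = (weights * h).sum(dim=1)             # (batch, 2*hidden)
        return self.out(pooled)

print(BiLSTMAttention()(torch.randn(4, 20, 100)).shape)  # torch.Size([4, 3])
```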