Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1143

Here's My Point: Joint Pointer Architecture for Argument Mining

Abstract: In order to determine argument structure in text, one must understand how individual components of the overall argument are linked. This work presents the first neural network-based approach to link extraction in argument mining. Specifically, we propose a novel architecture that applies Pointer Network sequence-to-sequence attention modeling to structural prediction in discourse parsing tasks. We then develop a joint model that extends this architecture to simultaneously address the link extraction task and the classification of argument components.
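
The core mechanism the abstract refers to is Pointer Network attention: at each decoding step the model scores every encoder position, and the resulting distribution over input components is read as a prediction of the link target. Below is a minimal PyTorch sketch of that scoring step, assuming the additive attention of Vinyals et al.'s Pointer Networks; all names (PointerAttention, hidden_dim) and dimensions are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of Pointer Network attention for link prediction (PyTorch).
# Illustrative only; module names and sizes are assumptions, not the paper's code.
import torch
import torch.nn as nn

class PointerAttention(nn.Module):
    """Scores each encoder position as a possible link target of the
    component currently being decoded (Vinyals et al.-style additive attention)."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.W1 = nn.Linear(hidden_dim, hidden_dim, bias=False)  # encoder projection
        self.W2 = nn.Linear(hidden_dim, hidden_dim, bias=False)  # decoder projection
        self.v = nn.Linear(hidden_dim, 1, bias=False)            # scoring vector

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, n_components, hidden); dec_state: (batch, hidden)
        scores = self.v(torch.tanh(self.W1(enc_states)
                                   + self.W2(dec_state).unsqueeze(1))).squeeze(-1)
        # Distribution over the input components = predicted link target.
        return torch.log_softmax(scores, dim=-1)

# Toy usage: batch of 2 texts, 4 argument components each, hidden size 8.
attn = PointerAttention(8)
log_p = attn(torch.randn(2, 4, 8), torch.randn(2, 8))
print(log_p.shape)  # torch.Size([2, 4])
```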

Cited by 48 publications (61 citation statements); references 28 publications.
“…Finally, to substantiate (ii), we reproduced an experiment on automatic prediction of the argumentation structure, which showed that predicting on the crowdsourced texts is generally not harder than on the old ones, and that overall, the task can benefit from the increased corpus size, though not dramatically. But we expect the increased corpus size to be useful for other machine learning experiments, especially for neural network approaches, such as those recently run by Potash et al. (2017) on the old corpus (albeit using only a small part of the annotations for a simplified setting).…”
Section: Discussion
“…The argumentation structure of every text was annotated according to a scheme proposed by Peldszus and Stede (2013), which in turn had been based on Freeman's theory of argumentation structures (Freeman, 2011). This annotation scheme has already been proven to yield reliable structures in annotation and classification experiments, for instance by Potash et al. (2017). Stab and Gurevych (2017) use a similar scheme for their corpus of persuasive essays, and they also provide classification results for the microtext corpus (http://angcl.ling.uni-potsdam.de/resources/argmicro.html).…”
Section: Annotation Scheme
“…In Potash et al. (2017), component classification and relation identification between ACs are performed simultaneously using a PointerNet neural network. This improves the classification performance.…”
Section: Extracting the Argument Components (ACs)
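
The joint setup this statement describes (a shared encoder feeding both component classification and link identification, trained on a summed loss) can be pictured as two heads over one encoder. The following is a hedged PyTorch toy, assuming an LSTM encoder and a bilinear link scorer; every name, layer choice, and hyperparameter here is an illustrative assumption, not the published model.

```python
# Toy joint model: one shared encoder, two heads (AC type + link target),
# losses summed. Names and architecture details are assumptions for illustration.
import torch
import torch.nn as nn

class JointArgModel(nn.Module):
    def __init__(self, in_dim=16, hidden=8, n_types=3):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hidden, batch_first=True)
        self.type_head = nn.Linear(hidden, n_types)       # AC classification head
        self.link_score = nn.Bilinear(hidden, hidden, 1)  # pairwise link scores

    def forward(self, x):
        h, _ = self.encoder(x)                          # (B, n, hidden)
        type_logits = self.type_head(h)                 # (B, n, n_types)
        n = h.size(1)
        src = h.unsqueeze(2).expand(-1, -1, n, -1)      # source component i
        tgt = h.unsqueeze(1).expand(-1, n, -1, -1)      # candidate target j
        link_logits = self.link_score(src.contiguous(),
                                      tgt.contiguous()).squeeze(-1)
        return type_logits, link_logits                 # link_logits: (B, n, n)

# Joint training step on random toy data: 2 texts, 4 components each.
model = JointArgModel()
type_logits, link_logits = model(torch.randn(2, 4, 16))
types = torch.randint(0, 3, (2, 4))   # gold AC types
links = torch.randint(0, 4, (2, 4))   # gold link target index per component
loss = (nn.functional.cross_entropy(type_logits.reshape(-1, 3), types.reshape(-1))
        + nn.functional.cross_entropy(link_logits.reshape(-1, 4), links.reshape(-1)))
loss.backward()
```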
“…Eger et al. (2017) propose an end-to-end AM system by framing the task as a token-level dependency parsing and sequence tagging problem. Potash et al. (2017) use an encoder-decoder formulation, employing a pointer-network-based deep neural network architecture. The results reported by Potash et al. (0.767 macro F1-score) constitute the current state of the art on the Persuasive Essays corpus (Stab and Gurevych, 2017) for the subtask of argumentative relation identification.…”
Section: Related Work
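
For illustration, under the encoder-decoder framing described here, link prediction at inference time reduces to picking, for each component, the highest-scoring position in its pointer distribution. A toy greedy decode, assuming the (n × n) matrix of link log-probabilities (row i = decoder step for component i) has already been computed; the decoding details of the cited systems may differ.

```python
# Toy greedy decoding over precomputed pointer scores; purely illustrative.
import torch

torch.manual_seed(0)
log_p = torch.log_softmax(torch.randn(4, 4), dim=-1)  # toy (n, n) pointer scores
targets = log_p.argmax(dim=-1)  # greedy: predicted link target per component
print(targets.tolist())         # component i -> index of its predicted target
```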
“…Existing state-of-the-art work on the Argumentative Essays corpus for the subtask of argumentative relation identification reports, as macro F1-scores, 0.751 (Stab and Gurevych, 2017), 0.756 (Nguyen and Litman, 2016, on an initial release of the Argumentative Essays corpus containing 90 essays), and 0.767 (Potash et al., 2017). Finally, Eger et al. (2017) reported F1-scores of 0.455 (100% token-level match) and 0.501 (50% token-level match), but these scores depend on the classification of the components in the previous steps (the problem was modeled differently).…”
Section: Model
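
As a reference point, the macro F1-score used throughout these comparisons is the unweighted mean of per-class F1 values, so minority classes (e.g., actual links) count as much as the majority class. A tiny scikit-learn illustration with invented labels, not data from any of the cited evaluations:

```python
# Macro F1 = unweighted mean of per-class F1; labels below are invented.
from sklearn.metrics import f1_score

gold = ["link", "no-link", "link", "no-link", "no-link", "link"]
pred = ["link", "no-link", "no-link", "no-link", "link", "link"]
print(f1_score(gold, pred, average="macro"))  # averages F1 of "link" and "no-link"
```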