Proceedings of the 6th Workshop on Argument Mining 2019
DOI: 10.18653/v1/w19-4508

Lexicon Guided Attentive Neural Network Model for Argument Mining

Abstract: Identification of argumentative components is an important stage of argument mining. Lexicon information is reported as one of the most frequently used features in argument mining research. In this paper, we propose a methodology for integrating lexicon information into a neural network model via an attention mechanism. We conduct experiments on the UKP dataset, which is collected from heterogeneous sources and contains several text types, e.g., microblog, Wikipedia, and news. We explore lexicons from various app…
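Only the truncated abstract survives on this page, so the paper's exact architecture cannot be reproduced here. As a minimal sketch of the stated idea, the snippet below biases attention scores with a per-token lexicon-match feature; the module name, the binary lexicon mask, and the single learned bias weight are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of lexicon-guided attention (illustrative only; the paper's
# actual architecture is not shown on this page). Assumes a binary per-token
# lexicon-match mask that biases the content-based attention scores.
import torch
import torch.nn as nn


class LexiconGuidedAttention(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)          # content-based score
        self.lex_bias = nn.Parameter(torch.ones(1))    # learned lexicon weight

    def forward(self, states: torch.Tensor, lex_mask: torch.Tensor):
        # states:   (batch, seq_len, hidden_dim) encoder outputs
        # lex_mask: (batch, seq_len), 1.0 where a token matches the lexicon
        scores = self.score(states).squeeze(-1)        # (batch, seq_len)
        scores = scores + self.lex_bias * lex_mask     # lexicon-guided bias
        attn = torch.softmax(scores, dim=-1)           # attention weights
        context = torch.bmm(attn.unsqueeze(1), states).squeeze(1)
        return context, attn                           # (batch, hidden_dim)
```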

Cited by 7 publications (5 citation statements) · References 18 publications
“…Among the AM systems that use neural attention, the system in [29] integrates hierarchical attention and a biGRU to analyze argument quality, the one in [30] uses attention to integrate a sentiment lexicon, while in other works [31], [32], [33] attention modules are stacked on top of recurrent layers. The use of Pointer Networks for AM has also been investigated [34].…”
Section: B. Neural Attention for AM
confidence: 99%
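For readers unfamiliar with the "attention stacked on recurrent layers" pattern the quote describes, a minimal sketch follows; the dimensions, single recurrent layer, and classifier head are assumptions for illustration, not the cited systems' exact configurations.

```python
# Hedged sketch of attention stacked on a biGRU encoder, the pattern the
# quoted survey attributes to several AM systems. All hyperparameters are
# illustrative assumptions, not the cited papers' settings.
import torch
import torch.nn as nn


class BiGRUAttentionClassifier(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 100,
                 hidden_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim,
                          batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)       # token relevance score
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids: torch.Tensor):
        # token_ids: (batch, seq_len) integer word indices
        h, _ = self.gru(self.emb(token_ids))           # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=-1)
        sentence = torch.bmm(weights.unsqueeze(1), h).squeeze(1)
        return self.out(sentence)                      # class logits
```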
“…However, it should still be unambiguously supportive of or against a topic. Claims are not annotated as arguments unless they include some evidence or reasoning behind the claim; however, Lin et al. (2019) do find a few wrongly annotated sentences in this regard. The corpus comes with a fixed 70-10-20 split.…”
Section: Datasets
confidence: 99%