Proceedings of the 6th Workshop on Argument Mining 2019
DOI: 10.18653/v1/w19-4509

Is It Worth the Attention? A Comparative Evaluation of Attention Layers for Argument Unit Segmentation

Abstract: Attention mechanisms have seen some success on natural language processing downstream tasks in recent years and have generated new state-of-the-art results. A thorough evaluation of the attention mechanism for the task of Argumentation Mining is still missing, though. With this paper, we report a comparative evaluation of attention layers in combination with a bidirectional long short-term memory network, which is the current state-of-the-art approach to the unit segmentation task. We also compare sentence-level contextu…
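To make the setup concrete: the model described here is a token-level sequence tagger (BIO labels over argument unit tokens) built from a bidirectional LSTM with an attention layer on top. The snippet below is a minimal illustrative sketch of that general architecture in PyTorch, not the authors' implementation; the embedding and hidden sizes, the single-head self-attention variant, and the three-label BIO scheme are assumptions.

```python
# Minimal sketch (not the authors' code) of a BiLSTM token tagger with an
# additional self-attention layer, as evaluated for argument unit segmentation.
# All dimensions and the label set are illustrative assumptions.
import torch
import torch.nn as nn

class BiLSTMAttentionTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, num_labels=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # single-head scaled dot-product self-attention over the BiLSTM states
        self.attention = nn.MultiheadAttention(embed_dim=2 * hidden_dim,
                                               num_heads=1, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)  # e.g. B/I/O

    def forward(self, token_ids):
        x = self.embedding(token_ids)          # (batch, seq, emb_dim)
        h, _ = self.bilstm(x)                  # (batch, seq, 2*hidden_dim)
        attn_out, _ = self.attention(h, h, h)  # re-contextualise each token
        return self.classifier(attn_out)       # per-token label logits

# toy usage: 2 sentences of 6 tokens each, vocabulary of 1000 word ids
model = BiLSTMAttentionTagger(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 6)))
print(logits.shape)  # torch.Size([2, 6, 3])
```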

Cited by 11 publications (12 citation statements). References 17 publications.
“…An empirical evaluation is thus beyond the scope of this article. There are, however, a number of experimental studies focused on particular NLP tasks, including machine translation [37], [42], [48], [132], argumentation mining [125], text summarization [58], and sentiment analysis [7]. It is worthwhile remarking that, on several occasions, attention-based approaches enabled a dramatic development of entire research lines.…”
Section: Introduction (mentioning)
confidence: 99%
“…The authors noted a decreased number of invalid BI sequences with the addition of the second (upper) BiLSTM. In later work, the authors in [33] further investigated this architecture with minor changes: they used only one BiLSTM with word embeddings as input features and tested the efficacy of the second (upper) BiLSTM. Moreover, they investigated the effects of adding various attention layers.…”
Section: Related Work (mentioning)
confidence: 99%
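The quoted passage measures model quality partly by the number of invalid BI sequences, i.e. I (inside) tags that do not follow a B (begin) or I tag. A small illustrative check of that property, not the evaluation code of the cited work, could look like this:

```python
# Hedged sketch: count "invalid BI sequences" in BIO output, i.e. I tags whose
# predecessor is neither B nor I. Illustrative only, not the cited papers' code.
def count_invalid_bi(tags):
    """Count I- tags that start a segment without a preceding B or I tag."""
    invalid = 0
    prev = "O"
    for tag in tags:
        if tag.startswith("I") and not (prev.startswith("B") or prev.startswith("I")):
            invalid += 1
        prev = tag
    return invalid

print(count_invalid_bi(["O", "I", "B", "I", "O", "I"]))  # 2 (positions 1 and 5)
```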
“…PE 2.0 annotates three kinds of argumentation components, namely major claim (MC), claim (C), and premise (P). Many previous studies (Persing and Ng, 2016; Chernodub et al., 2019; Petasis, 2019; Reimers et al., 2019; Spliethover et al., 2019)…”
Section: Dataset (mentioning)
confidence: 99%
“…Chernodub et al. (2019) built an application interface, called TARGER, based on a BiLSTM-CNN-CRF sequence tagging model, for convenient argumentation mining on essays. In addition, recent research (Petasis, 2019; Spliethover et al., 2019) also aims to distinguish argumentation components from non-argumentation components with text segmentation based on sequence tagging models. Other work (Peldszus et al., 2016; Skeppstedt et al., 2018) focuses on the arg-microtext corpus, which contains 112 independent short texts, each of which can be considered one paragraph and contains about 5 argumentation components on average.…”
Section: Argumentation Extraction or Argumentation Tagging (mentioning)
confidence: 99%
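Since the citing works above cast argument unit segmentation as sequence tagging, the following sketch shows how per-token BIO predictions are typically converted back into argument unit spans; the tag scheme and helper function are illustrative assumptions, not code from TARGER or the cited systems.

```python
# Illustrative sketch of what "argument unit segmentation as sequence tagging"
# produces: converting per-token BIO predictions into (start, end) unit spans.
def bio_to_spans(tags):
    """Return [start, end) token spans for each tagged argument unit."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":            # a new unit begins
            if start is not None:
                spans.append((start, i))
            start = i
        elif tag == "O":          # outside any unit
            if start is not None:
                spans.append((start, i))
            start = None
        # "I" continues the current unit (if any)
    if start is not None:
        spans.append((start, len(tags)))
    return spans

print(bio_to_spans(["O", "B", "I", "I", "O", "B", "I"]))  # [(1, 4), (5, 7)]
```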