Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.478

Argumentation Mining on Essays at Multi Scales

Abstract: Argumentation mining on essays is a challenging new task in natural language processing that aims to identify the types and locations of argumentation components. Recent research mainly models the task as a sequence tagging problem and deals with all argumentation components at the word level. However, this task is not scale-independent: some types of argumentation components, which serve as the core opinions of essays or paragraphs, exist at the essay level or paragraph level. The sequence tagging method conducts reasoning…

Cited by 17 publications (6 citation statements) · References 25 publications
“…They can appear anywhere in a paragraph, whether proposed at the beginning, summarized at the end, or given in the middle. They are at the paragraph level (Wang et al., 2020). Similarly, the different elements in Toulmin models correspond to the token level.…”
Section: Discussion (mentioning; confidence: 99%)
“…Taking the overall performance into account, Wang et al. (2020) proposed a multi-scale mining model that mines argument elements (major claim, claim, and premise) at the discourse level, paragraph level, and word level. They also designed an effective coarse-to-fine argument fusion mechanism to further improve precision.…”
Section: Related Work (mentioning; confidence: 99%)
“…The standard BiLSTM with a CRF output layer emerged as the state-of-the-art architecture for token-level sequence tagging, including argument mining [9,17,59]. The current state of the art on ADU identification and classification employs BERT [15] or Longformer [3] as the base encoder (in some cases with a CRF layer on top), typically accompanied by specific architectures that tackle a target corpus or task-specific challenges [16,24,41,69,71]. We follow these recent trends by employing a BERT-based sequence labeling model.…”
Section: Related Work (mentioning; confidence: 99%)
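Token-level sequence tagging models of the kind this excerpt describes typically emit one BIO label per token, and a post-processing step converts those labels into ADU spans. Below is a minimal sketch of that decoding step; the label names are illustrative, not taken from any of the cited systems.

```python
def bio_to_spans(tags):
    """Convert a per-token BIO tag sequence into (start, end, label)
    spans with an exclusive end index. A `B-X` tag opens a span, a
    matching `I-X` continues it, and anything else closes it."""
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((start, i, label))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            continue  # still inside the current span
        else:
            if start is not None:
                spans.append((start, i, label))
            start, label = None, None
    if start is not None:
        spans.append((start, len(tags), label))
    return spans

tags = ["O", "B-Claim", "I-Claim", "O", "B-Premise", "I-Premise", "I-Premise"]
print(bio_to_spans(tags))  # [(1, 3, 'Claim'), (4, 7, 'Premise')]
```

The same decoding applies whether the per-token labels come from a BiLSTM-CRF or from a BERT token classifier; only the encoder changes.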
“…Mayer et al. (2020) presented a BERT model with a component-pair classifier for predicting relations. Wang et al. (2020) also used BERT as the encoder. In contrast to the above studies, since a given input argument can contain many sentences, we use Longformer (Beltagy et al., 2020) to encode the text, as it can handle a longer input sequence than BERT.…”
Section: AM as Relation (mentioning; confidence: 99%)