Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
DOI: 10.18653/v1/2021.findings-acl.306

Generating Informative Conclusions for Argumentative Texts

Abstract: The purpose of an argumentative text is to support a certain conclusion. Yet, conclusions are often omitted, and readers are expected to infer them instead. While appropriate when reading an individual text, this rhetorical device limits accessibility when browsing many texts (e.g., on a search engine or on social media). In these scenarios, an explicit conclusion makes for a good candidate summary of an argumentative text. This is especially true if the conclusion is informative, emphasizing specific concepts from the text.…

Cited by 10 publications (15 citation statements)
References 43 publications
“…Results Table 1 lists the results of the approaches. BART-unsupervised is a strong baseline in terms of lexical accuracy: values such as 19.69 (ROUGE-1) and 16.40 (ROUGE-L) are comparable to those that Syed et al. (2021) achieved in similar domains with sophisticated approaches. However, fine-tuning on the argument-argument pairs not only significantly increases the semantic similarity between generated and ground-truth conclusions from 0.14 to 0.25 in terms of BERTScore, but also leads to a slight increase in lexical accuracy.…”
Section: Automatic Evaluation
confidence: 73%
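As a rough illustration of how such an automatic evaluation can be reproduced, the sketch below scores a generated conclusion against a reference conclusion with ROUGE-1, ROUGE-L, and BERTScore using the rouge-score and bert-score packages. The example texts are placeholders, not data from the cited papers.

```python
# Sketch: scoring generated conclusions against references with ROUGE and BERTScore.
# The example texts are illustrative placeholders, not data from the cited papers.
from rouge_score import rouge_scorer
from bert_score import score as bert_score

references = ["School uniforms should be mandatory because they reduce peer pressure."]
candidates = ["Uniforms should be required in schools to lessen peer pressure."]

# Lexical overlap: ROUGE-1 (unigrams) and ROUGE-L (longest common subsequence).
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
for ref, cand in zip(references, candidates):
    rouge = scorer.score(ref, cand)
    print("ROUGE-1 F1:", round(rouge["rouge1"].fmeasure, 4))
    print("ROUGE-L F1:", round(rouge["rougeL"].fmeasure, 4))

# Semantic similarity: BERTScore compares contextual token embeddings.
precision, recall, f1 = bert_score(candidates, references, lang="en", verbose=False)
print("BERTScore F1:", round(f1.mean().item(), 4))
```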
“…The idea of reconstructing an argument's conclusion from its premises was introduced by Alshomary et al. (2020), but their approach focused on the inference of a conclusion's target. The actual generation of entire conclusions has so far only been studied by Syed et al. (2021). The authors presented the first corpus for this task along with experiments in which they adapted BART (Lewis et al., 2020) from summarization to conclusion generation.…”
Section: Related Work
confidence: 99%
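To make the setup concrete, here is a minimal sketch of producing a short, conclusion-like summary of an argumentative text with an off-the-shelf BART summarization checkpoint from Hugging Face Transformers. The checkpoint, input text, and generation parameters are assumptions for illustration and do not reproduce the fine-tuned model of Syed et al. (2021).

```python
# Sketch: using a pretrained BART summarization checkpoint to produce a short,
# conclusion-like summary of an argumentative text. Checkpoint and generation
# settings are illustrative assumptions, not the setup of Syed et al. (2021).
from transformers import BartForConditionalGeneration, BartTokenizer

checkpoint = "facebook/bart-large-cnn"  # generic summarization model (assumption)
tokenizer = BartTokenizer.from_pretrained(checkpoint)
model = BartForConditionalGeneration.from_pretrained(checkpoint)

argument = (
    "Remote work cuts commuting time, lowers office costs, and lets companies "
    "hire from a wider talent pool, while studies report stable productivity."
)

inputs = tokenizer(argument, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,      # beam search, as is typical for BART summarization
    max_length=40,    # conclusions are short, single-sentence summaries
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```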
“…Akiki and Potthast (2020) explore abstractive argument retrieval by means of text generation with GPT2 (Radford et al. 2019). Similarly, Syed et al. (2021) deploy BART (Lewis et al. 2019) to generate conclusions of argumentative texts on a challenging corpus compiled from Reddit and various online debate corpora. Rodrigues et al. (2020), revisiting the argument comprehension task (Habernal, Eckle-Kohler, and Gurevych 2014; Habernal et al. 2018), demonstrate that identifying implicit premises, and deep argument analysis a fortiori, remains a hard, unsolved task.…”
Section: Related Work
confidence: 99%
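For comparison, generation-based prototypes in the style described above can be sketched with a plain GPT-2 text-generation pipeline; the prompt and decoding settings below are illustrative assumptions only, not the configuration used by Akiki and Potthast (2020).

```python
# Sketch: continuing an argumentative prompt with GPT-2 via the text-generation
# pipeline. Prompt and decoding settings are illustrative assumptions.
from transformers import pipeline, set_seed

set_seed(42)  # reproducible sampling
generator = pipeline("text-generation", model="gpt2")

prompt = "School uniforms should be mandatory because"
outputs = generator(prompt, max_length=40, num_return_sequences=1, do_sample=True)
print(outputs[0]["generated_text"])
```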
“…Wang and Ling (2016) used a sequence-to-sequence model for the abstractive summarization of arguments from online debate portals. A complementary task of generating conclusions as informative argument summaries was introduced by Syed et al. (2021). Similar to Alshomary et al. (2020b), who inferred a conclusion's target with a triplet neural network, we rely on contrastive learning here, though with a siamese network instead.…”
Section: Related Work
confidence: 99%
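To illustrate that last modeling choice, the following sketch pairs a shared (siamese) text encoder with a standard contrastive loss in PyTorch. The encoder architecture, dimensions, margin, and toy inputs are assumptions for illustration, not the cited authors' implementation.

```python
# Sketch: a siamese encoder trained with a contrastive loss, as opposed to a
# triplet setup. Encoder architecture, dimensions, and margin are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseEncoder(nn.Module):
    """One encoder whose weights are shared across both inputs of a pair."""

    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)  # mean-pools token embeddings
        self.proj = nn.Linear(embed_dim, hidden_dim)

    def forward(self, token_ids):
        return F.normalize(self.proj(self.embedding(token_ids)), dim=-1)


def contrastive_loss(z1, z2, label, margin=0.5):
    """label = 1 for similar pairs, 0 for dissimilar pairs."""
    dist = F.pairwise_distance(z1, z2)
    positive = label * dist.pow(2)                       # pull similar pairs together
    negative = (1 - label) * F.relu(margin - dist).pow(2)  # push dissimilar pairs apart
    return (positive + negative).mean()


# Toy usage with random token ids standing in for tokenized argumentative texts.
encoder = SiameseEncoder()
a = torch.randint(0, 10000, (4, 20))   # batch of 4 "texts", 20 tokens each
b = torch.randint(0, 10000, (4, 20))
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])
loss = contrastive_loss(encoder(a), encoder(b), labels)
loss.backward()
print(float(loss))
```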