Proceedings of the 8th Workshop on Argument Mining 2021
DOI: 10.18653/v1/2021.argmining-1.3
Explainable Unsupervised Argument Similarity Rating with Abstract Meaning Representation and Conclusion Generation

Abstract: When assessing the similarity of arguments, researchers typically use approaches that do not provide interpretable evidence or justifications for their ratings. Hence, the features that determine argument similarity remain elusive. We address this issue by introducing novel argument similarity metrics that aim at high performance and explainability. We show that Abstract Meaning Representation (AMR) graphs can be useful for representing arguments, and that novel AMR graph metrics can offer explanations for arg…
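The abstract's core idea, that AMR graph metrics can both score and explain argument similarity, can be illustrated in miniature. The sketch below is a toy stand-in, not the authors' metrics: it treats each argument as a hand-written set of AMR-style (source, relation, target) triples and scores their Jaccard overlap, so the shared triples double as an interpretable justification for the rating.

```python
# Toy sketch: AMR-style graphs as triple sets, with an explainable
# overlap score (a simplified stand-in for Smatch-like AMR metrics).

def amr_triples(graph_str):
    """Parse a tiny 'src :rel tgt; ...' notation into a set of triples."""
    return {tuple(part.split()) for part in graph_str.split(";") if part.strip()}

def similarity_with_explanation(g1, g2):
    """Jaccard overlap of triples; the shared triples are the evidence."""
    union = g1 | g2
    shared = g1 & g2
    score = len(shared) / len(union) if union else 0.0
    return score, sorted(shared)

# Two invented example "arguments" encoded as AMR-style triples.
arg_a = amr_triples("uniform :ARG1-of reduce-01; reduce-01 :ARG0 bullying")
arg_b = amr_triples("uniform :ARG1-of reduce-01; reduce-01 :ARG0 cost")

score, evidence = similarity_with_explanation(arg_a, arg_b)
print(round(score, 2))  # overlap score
print(evidence)         # shared triples = interpretable justification
```

Unlike an opaque embedding-similarity score, the returned `evidence` names exactly which graph structure the two arguments share, which is the kind of explanation the paper argues for.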

Cited by 4 publications (9 citation statements); references 45 publications.
“…We follow previous work by Opitz et al (2021) and rely on T5 (Raffel et al, 2020) (large version) as transformer language model, as implemented in the huggingface library (Wolf et al, 2020).…”
Section: Methods
confidence: 99%
“…by rephrasing the premise, but only 4-6% are informative. Opitz et al (2021) also show that state-of-the-art fine-tuned transformer language models processing plain premises tend to generate conclusions lacking novelty or validity, and propose ways to assess novelty and validity using AMR-based similarity metrics. Finally, Gurcke et al (2021) explored whether the sufficiency of conclusions can be assessed with BART, finding problems with insufficient reference conclusions, with ensuing challenges in generating and evaluating valid and novel conclusions.…”
Section: Related Work
confidence: 99%
“…Previous work has attempted to reconstruct a missing conclusion by identifying the "main target" in the premises (Alshomary et al, 2020). Other work has made use of pretrained sequence-to-sequence transformer language models fine-tuned on argumentative datasets (Syed et al, 2021; Opitz et al, 2021; Gurcke et al, 2021). However, the question of how to tailor a generated conclusion to a particular frame has not been systematically explored, a gap that we address with this paper.…”
Section: Introduction
confidence: 99%