Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019)
DOI: 10.18653/v1/n19-1224
A Crowdsourced Frame Disambiguation Corpus with Ambiguity

Abstract: We present a resource for the task of FrameNet semantic frame disambiguation of over 5,000 word-sentence pairs from the Wikipedia corpus. The annotations were collected using a novel crowdsourcing approach with multiple workers per sentence to capture interannotator disagreement. In contrast to the typical approach of attributing the best single frame to each word, we provide a list of frames with disagreement-based scores that express the confidence with which each frame applies to the word. This is based on …
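The disagreement-based scores described in the abstract can be illustrated with a minimal sketch. This assumes simple vote-fraction scoring over worker annotations; the paper's actual disagreement metrics (CrowdTruth-style) are more involved, so the function and frame names below are illustrative only:

```python
from collections import Counter

def frame_confidence(annotations):
    """Score each candidate frame for one word-sentence pair.

    `annotations` is a list of frame labels, one per crowd worker.
    Here the score is simply the fraction of workers who chose the
    frame; this stands in for the paper's disagreement-based metrics.
    """
    counts = Counter(annotations)
    total = len(annotations)
    return {frame: n / total for frame, n in counts.items()}

# Example: five workers annotate one ambiguous word.
# Instead of a single "best" frame, every chosen frame keeps a score.
scores = frame_confidence(
    ["Self_motion", "Self_motion", "Operating_a_system",
     "Self_motion", "Operating_a_system"]
)
# scores -> {"Self_motion": 0.6, "Operating_a_system": 0.4}
```

The key design point, matching the abstract, is that the output is a distribution over frames rather than a single label, so downstream users can see how confidently each frame applies.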


Cited by 20 publications (15 citation statements)
References 15 publications
“…Similar approaches have been applied in parsing (Martínez Alonso et al, 2015) and supersense tagging (Martínez Alonso et al, 2016). Specifically relevant to this work is past discussion of disagreement on semantic annotation tasks, including anaphora resolution (Poesio and Artstein, 2005), coreference (Versley, 2008; Recasens et al, 2011), word sense disambiguation (Erk and McCarthy, 2009; Passonneau et al, 2012; Jurgens, 2013), veridicality (Geis and Zwicky, 1971; Karttunen et al, 2014; de Marneffe et al, 2012), semantic frames (Dumitrache et al, 2019), and grounding (Reidsma and op den Akker, 2008).…”
Section: Related Work
confidence: 99%
“…These areas have developed specialized training and evaluation methods (Papineni et al, 2002; Lin, 2004). More surprisingly, disagreements in interpretation have been found to be frequent in annotation projects concerned with apparently more 'objective' aspects of language, such as coreference (Poesio and Artstein, 2005; Recasens et al, 2011), part-of-speech tagging, word sense disambiguation (Passonneau et al, 2012) and semantic role labelling (Dumitrache et al, 2019), to name a few examples. Even if in these tasks individual instances can be found to be reasonably objective, these findings appear to reflect the existence of extensive and systematic disagreement on what can be concluded from a natural language statement (Pavlick and Kwiatkowski, 2019).…”
Section: Disagreement in 'Objective' Tasks
confidence: 99%
“…A range of previous studies have explored methods of crowdsourcing SRL. Most work has focused on crowd-only workflows, with comparatively low accuracy or extensive worker training (Fossati et al, 2013; Feizabadi and Padó, 2014; Chang et al, 2015; Dumitrache et al, 2019; Hahm et al, 2020). This work guided our user interface designs and our understanding of challenges in SRL annotation.…”
Section: Related Work
confidence: 99%