Handbook of Linguistic Annotation 2017
DOI: 10.1007/978-94-024-0881-2_18

The Groningen Meaning Bank

Abstract: What would be a good method to provide a large collection of semantically annotated texts with formal, deep semantics rather than shallow semantics? In this talk I will argue that (i) a bootstrapping approach comprising state-of-the-art NLP tools for semantic parsing, in combination with (ii) a wiki-like interface for collaborative annotation by experts, and (iii) a game with a purpose for crowdsourcing, are the starting ingredients for fulfilling this enterprise. The result, known as the Groningen Meaning Bank, is a se…
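
The abstract describes a layered workflow: automatic tool output provides the initial annotation, and human input (expert wiki edits, crowdsourced judgements) overrides it where available. The sketch below illustrates that precedence idea only; the data model, field names, and correction layers are simplified placeholders and not the Groningen Meaning Bank's actual pipeline or formats.

```python
# Illustrative sketch of the bootstrapping idea: annotations start from
# automatic NLP tool output, and human corrections (expert or crowd)
# override the tools wherever they exist. Names here are placeholders.

from dataclasses import dataclass, field

@dataclass
class AnnotatedToken:
    text: str
    pos: str                                        # tag proposed by an automatic tagger
    corrections: dict = field(default_factory=dict)  # correction layer -> corrected tag

    def resolve(self) -> str:
        """Return the tag after applying corrections; experts outrank the crowd."""
        for layer in ("expert", "crowd"):
            if layer in self.corrections:
                return self.corrections[layer]
        return self.pos

tokens = [AnnotatedToken("Groningen", "NN"), AnnotatedToken("Meaning", "NN")]
tokens[0].corrections["expert"] = "NNP"              # a wiki-style expert correction
print([(t.text, t.resolve()) for t in tokens])       # [('Groningen', 'NNP'), ('Meaning', 'NN')]
```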

Cited by 94 publications (91 citation statements)
References 39 publications
“…It must be noted, however, that none of the presented machinery or results hinges upon this choice; it primarily serves the controlled investigation of the model's performance. In future work, we aim to investigate different ways of deriving a DSS from empirical data, for example, by making use of semantically annotated corpora (e.g., the Groningen Meaning Bank; Bos, Basile, Evang, Venhuizen, & Bjerva, 2017), or crowd-sourced data describing world knowledge (see, e.g., Elman & McRae, 2017).…”
Section: Evaluation of the DSS-derived Meaning Representations (mentioning, confidence: 99%)
“…The Alexa ontology expands schema.org to cover types, properties and roles used in spoken language. Semantic parsing has been investigated in the context of small domain-specific datasets such as GeoQuery (Wong and Mooney, 2006) and in the context of larger broad-coverage representations such as the Groningen Meaning Bank (GMB) (Bos et al., 2017), the Abstract Meaning Representation (AMR) (Banarescu et al., 2013), UCCA (Abend and Rappoport, 2013), PropBank (Kingsbury and Palmer, 2002), FrameNet (Baker et al., 1998) and lambda-DCS (Liang, 2013). OntoNotes (Hovy et al., 2006), combinatory categorial grammars (CCG) (Steedman and Baldridge, 2011; Hockenmaier and Steedman, 2007), and universal dependencies (Nivre et al., 2016) are all related representations.…”
Section: Related Work (mentioning, confidence: 99%)
“…2.1 and Tab. 1) we utilize 1) all components of the W-NUT 2016 Twitter NER shared task (Strauss et al., 2016), 2) all components of the 2003 CoNLL NER shared task (Tjong Kim Sang and De Meulder, 2003), 3) the WikiNER annotations (Nothman et al., 2008, 2012), and 4) the Groningen Meaning Bank (Bos et al., 2017). Each corpus required mapping its entity types to the six 2017 shared task types, and for data sets (2), (3), and (4), only mappings for the location and person types were deemed appropriate (geo-loc, facility, and loc to location, and per to person).…”
Section: Gold-standard Data (mentioning, confidence: 99%)
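
The excerpt above describes normalizing entity types from several source corpora onto a single shared tagset, keeping only the location and person mappings. The following is a minimal sketch of that kind of mapping, assuming BIO-style source tags; the tag names and the choice to discard unmapped types as "O" are illustrative assumptions, not the cited paper's exact procedure.

```python
# Sketch of entity-type normalization: map source-corpus tags onto a shared
# tagset, keeping only location and person mappings and discarding the rest.

TYPE_MAP = {
    "geo-loc": "location",
    "facility": "location",
    "loc": "location",
    "per": "person",
}

def normalize(tag: str) -> str:
    """Map a source-corpus entity tag to the shared tagset, or return 'O'."""
    prefix, _, etype = tag.partition("-")            # e.g. "B-per" -> ("B", "per")
    if not etype:                                     # tag had no BIO prefix (e.g. "O")
        prefix, etype = "", prefix
    mapped = TYPE_MAP.get(etype.lower())
    if mapped is None:
        return "O"                                    # discard unmapped entity types
    return f"{prefix}-{mapped}" if prefix else mapped

print([normalize(t) for t in ["B-per", "I-geo-loc", "B-org", "O"]])
# ['B-person', 'I-location', 'O', 'O']
```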