Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) 2016
DOI: 10.18653/v1/s16-1182

The Meaning Factory at SemEval-2016 Task 8: Producing AMRs with Boxer

Abstract: We participated in the shared task on meaning representation parsing (Task 8 at SemEval-2016) with the aim of investigating whether we could use Boxer, an existing open-domain semantic parser, for this task. However, the meaning representations produced by Boxer, Discourse Representation Structures, are considerably different from Abstract Meaning Representations (AMRs), the target meaning representations of the shared task. Our hybrid conversion method (involving lexical adaptation as well as post-processing …
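For context, AMRs are conventionally written in Penman notation: rooted, directed graphs over concepts and numbered semantic roles. The standard textbook example below (for the sentence "The boy wants to go") is illustrative only and is not output of Boxer or of the conversion described in this paper:

    (w / want-01
       :ARG0 (b / boy)
       :ARG1 (g / go-01
                :ARG0 b))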

Cited by 12 publications (14 citation statements); references 9 publications.
“…Grammar-based Parsing (CAMR) performs a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al. (2017), Brandt et al. (2016), Puzikov et al. (2016), and Goodman et al. (2016). Artzi et al. (2015) use a grammar induction approach with Combinatory Categorial Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al. (2016). Pust et al. (2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) and employing several external semantic resources.…”
Section: Related Work
confidence: 99%
“…This information is passed into an adapted Tarskian satisfaction relation for a Dynamic Semantics that is used to transform a syntactic parse into a predicate-logic-based meaning representation, followed by conversion to the required Penman notation. (Bjerva et al., 2016) This team employed an existing open-domain semantic parser, Boxer (Curran et al., 2007), which produces semantic representations based on Discourse Representation Theory. As the meaning representations produced by Boxer are considerably different from AMRs, the team used a hybrid conversion method to map Boxer's output to AMRs.…”
Section: Other Approaches
confidence: 99%
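As a rough illustration of the final step such a pipeline needs (serialising a concept/role structure into Penman notation), here is a minimal, hypothetical Python sketch. It is not Boxer's code nor the authors' conversion method; the data structure and the function name to_penman are invented for illustration:

    def to_penman(var, concept, roles=(), indent=0):
        """Render one concept and its outgoing roles as a Penman-style string.

        roles is a sequence of (role, child) pairs, where child is either a
        re-entrant variable name (str) or a nested (var, concept, roles) triple.
        """
        pad = " " * indent
        lines = [f"({var} / {concept}"]
        for role, child in roles:
            if isinstance(child, str):   # re-entrancy: reuse an existing variable
                lines.append(f"{pad}   :{role} {child}")
            else:                        # nested concept: recurse with deeper indent
                lines.append(f"{pad}   :{role} " + to_penman(*child, indent=indent + 3))
        return "\n".join(lines) + ")"

    # "The boy sings": a toy predicate structure rendered as Penman notation
    print(to_penman("s", "sing-01", [("ARG0", ("b", "boy", []))]))
    # (s / sing-01
    #    :ARG0 (b / boy))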
“…Training: For the optimization of the accuracy prediction model we use only the development and training sections of LDC2015E86 and the corresponding automatic parses together with the gold scores. Details on the training cycle can be found in the Supplemental Material §A (the loss is described in §4).…”
(Footnotes 6–8 in the quoted passage: DynamicPower (Butler, 2016), TMF (Bjerva et al., 2016), UCL+Sheffield (Goodman et al., 2016) and CU-NLP (Foland and Martin, 2016); TMF-1 and TMF-2 (van Noord and Bos, 2017a), DAN-GNT (Nguyen and Nguyen, 2017), Oxford (Buys and Blunsom, 2017), RIGOTRIO (Gruzitis et al., 2017) and JAMR (Flanigan et al., 2016); https://spacy.io/)
Section: Methods
confidence: 99%