Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016
DOI: 10.18653/v1/s16-1166

SemEval-2016 Task 8: Meaning Representation Parsing

Abstract: In this report we summarize the results of the SemEval 2016 Task 8: Meaning Representation Parsing. Participants were asked to generate Abstract Meaning Representation (AMR) (Banarescu et al., 2013) graphs for a set of English sentences in the news and discussion forum domains. Eleven sites submitted valid systems. The availability of state-of-the-art baseline systems was a key factor in lowering the bar to entry; many submissions relied on CAMR (Wang et al., 2015b; Wang et al., 2015a) as a baseline system and…
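To make the task concrete, below is a minimal Python sketch of what an AMR graph looks like, using the standard "The boy wants to go" example from Banarescu et al. (2013). The use of the third-party penman library (pip install penman) is an assumption of this sketch, not part of the task description; participants produced graphs in this PENMAN notation.

```python
# A minimal sketch: decode an AMR graph written in PENMAN notation and
# list its triples. Assumes the third-party `penman` package is installed.
import penman

# "The boy wants to go" (Banarescu et al., 2013). Note the reentrancy:
# variable b (the boy) is both the wanter and the goer.
graph = penman.decode("""
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
""")

# Each triple is (source, role, target), e.g. ('w', ':instance', 'want-01')
# and ('g', ':ARG0', 'b') -- the reentrant edge that makes this a graph,
# not a tree.
for source, role, target in graph.triples:
    print(source, role, target)
```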

Cited by 53 publications (22 citation statements) | References 19 publications

“…Our results for DM and PSD are quite close to the original results reported in (Peng et al., 2017). Original SMATCH scores are reported in (May, 2016). The score reported on the LPPS subset is close to the original score, whereas the score measured on the whole test set is much lower.…”
Section: Results (supporting, confidence: 86%)
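For readers unfamiliar with SMATCH (Cai and Knight, 2013), the metric referenced above: it scores a predicted AMR against a gold AMR as the best triple-overlap F1 over mappings between their variables. The following self-contained Python sketch illustrates that idea on toy graphs; the exhaustive permutation search and the v*/u* variable-naming convention are simplifying assumptions of this sketch (the real smatch tool uses a hill-climbing search with restarts instead).

```python
# A minimal sketch of the Smatch idea: best triple-overlap F1 over all
# mappings from predicted variables to gold variables. Exhaustive search
# is fine for toy graphs; real Smatch hill-climbs.
from itertools import permutations

# Each AMR as a set of (source, relation, target) triples.
# Toy convention: gold variables start with "v", predicted with "u".
gold = {("v0", "instance", "want-01"), ("v1", "instance", "boy"),
        ("v2", "instance", "go-01"),
        ("v0", "ARG0", "v1"), ("v0", "ARG1", "v2"), ("v2", "ARG0", "v1")}
pred = {("u0", "instance", "want-01"), ("u1", "instance", "boy"),
        ("u2", "instance", "go-01"),
        ("u0", "ARG0", "u1"), ("u0", "ARG1", "u2")}  # missing go-01 :ARG0

def smatch_f1(gold, pred):
    gvars = sorted({x for t in gold for x in (t[0], t[2]) if x.startswith("v")})
    pvars = sorted({x for t in pred for x in (t[0], t[2]) if x.startswith("u")})
    best = 0
    for perm in permutations(gvars, len(pvars)):  # try every variable mapping
        m = dict(zip(pvars, perm))
        mapped = {(m.get(s, s), r, m.get(t, t)) for s, r, t in pred}
        best = max(best, len(mapped & gold))
    p, r = best / len(pred), best / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

print(f"Smatch F1 = {smatch_f1(gold, pred):.3f}")  # ~0.909 for this toy pair
```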
“…The raw data for the synsets comes from existing resources such as Princeton WordNet for English (Fellbaum 1998), Svenska OrdNät (Viberg et al. 2002) and WordNet-SALDO. Following a similar task in text-to-AMR parsing (May 2016), a recent shared task at SemEval 2017 unveiled the state of the art in AMR-to-text generation (May and Priyadarshi 2017). According to the SemEval 2017 Task 9 evaluation, the convincingly best-performing AMR-to-text generation system among the contestants (Gruzitis, Gosko, and Barzdins 2017) combines a GF-based generator with the JAMR generator (Flanigan et al. 2016), achieving a TrueSkill score (human evaluation) of 1.03-1.07 and a BLEU score (automatic evaluation) of 18.82 (May and Priyadarshi 2017).…”
Section: Introduction (mentioning, confidence: 99%)
“…Being a general DAG parser, TUPA has been shown (Hershcovich et al., 2018a,b) to support other graph-based meaning representations and similar frameworks, including UD (Universal Dependencies; Nivre et al., 2019), which was the focus of the CoNLL 2017 and 2018 Shared Tasks (Zeman et al., 2017, 2018); AMR (Abstract Meaning Representation; Banarescu et al., 2013), targeted in the SemEval 2016 and 2017 Shared Tasks (May, 2016; May and Priyadarshi, 2017); and DM (DELPH-IN MRS Bi-Lexical Dependencies; Ivanova et al., 2012), one of the target representations, among PAS and PSD (Prague Semantic Dependencies; Hajic et al., 2012), in the SemEval 2014 and 2015 Shared Tasks on SDP (Semantic Dependency Parsing; Oepen et al., 2014, 2015, 2016). DM is converted from DeepBank, a corpus of hand-corrected parses from the LinGO ERG (Copestake and Flickinger, 2000), an HPSG (Pollard and Sag, 1994) grammar using Minimal Recursion Semantics (Copestake et al., 2005).…”
Section: Introduction (mentioning, confidence: 99%)