Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), 2014
DOI: 10.3115/v1/s14-2006

SemEval-2014 Task 6: Supervised Semantic Parsing of Robotic Spatial Commands

Abstract: SemEval-2014 Task 6 aims to advance semantic parsing research by providing a high-quality annotated dataset to compare and evaluate approaches. The task focuses on contextual parsing of robotic commands, in which the additional context of spatial scenes can be used to guide a parser to control a robot arm. Six teams submitted systems using both rule-based and statistical methods. The best performing (hybrid) system scored 92.5% and 90.5% for parsing with and without spatial context. However, the best performin…
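
As a rough illustration of what "contextual parsing" means here, the toy Python sketch below grounds an ambiguous entity description against a spatial scene. The class and field names are assumptions made for illustration only, not the task's actual Robot Commands Treebank or RCL schema.

```python
# Illustrative sketch only: a toy grounding step showing how spatial context
# (a scene of blocks) can be used to filter entity references while parsing.
# All names here are assumptions for illustration, not the task's RCL schema.
from dataclasses import dataclass

@dataclass
class Block:
    color: str
    x: int
    y: int
    z: int

def candidate_referents(scene, color=None):
    """Return the scene objects compatible with a parsed entity description."""
    return [b for b in scene if color is None or b.color == color]

scene = [Block("red", 0, 0, 0), Block("blue", 3, 2, 0), Block("red", 5, 5, 1)]

# "the red cube" matches two objects in this scene; a context-aware parser can
# use that information to rank competing parses, whereas a parser without the
# scene must commit to an interpretation blindly.
print(candidate_referents(scene, color="red"))
```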

Cited by 18 publications (13 citation statements)
References 19 publications
“…Part of this can be explained by the domain-specific nature of previous works (e.g. focus on spatial commands (Dukes, 2014)). …”
Section: Semantic Parsing and Matching (mentioning; confidence: 99%)
“…In 2014, SemEval hosted a task related to the parsing of natural language spatial commands (Dukes, 2014), also targeting a robotics scenario. More specifically, the task proposed the parsing of commands to move a robot arm that moved objects within a spatial region.…”
Section: Similar Initiatives (mentioning; confidence: 99%)
“…Therefore, researchers developed unsupervised techniques that learn the semantic categories of words from unlabelled data by exploiting regularities in natural language as in Schütze (1998), Biemann (2009), Socher et al (2012), and Houthooft et al (2016). Similarly, in grammar induction, parsers are commonly trained in a supervised setting on a corpus of annotated grammar trees as presented by Matuszek et al (2013) and Dukes (2014). Other researchers have tackled unsupervised grammar induction from unlabelled sentences as presented by Klein et al (2002) …”
Section: Related Work (mentioning; confidence: 99%)
“…), and the grammar structure gets annotated using a tree that connects the different words together (e.g. RCL trees) as presented by Dukes (2014) and Matuszek (2013). The manual annotation of data is a labour intensive task that hinders learning from large corpora, and such labels are not necessarily available for all languages.…”
Section: Introduction (mentioning; confidence: 99%)
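
To make the tree-style annotation the quotation describes concrete, here is a minimal Python sketch of such a nested structure. The element names (event, action, entity, spatial-relation) only approximate the RCL annotations used in the task; they are illustrative assumptions, not the exact schema.

```python
# Minimal sketch of an RCL-style annotation: each node pairs a semantic
# category with either a word or a list of child nodes. Element names are
# illustrative approximations, not the exact RCL inventory.
tree = ("event", [
    ("action", "move"),
    ("entity", [("color", "red"), ("type", "cube")]),
    ("destination", [
        ("spatial-relation", [
            ("relation", "above"),
            ("entity", [("color", "blue"), ("type", "prism")]),
        ]),
    ]),
])

def leaves(node):
    """Collect (category, word) pairs from the nested annotation."""
    label, children = node
    if isinstance(children, str):
        return [(label, children)]
    return [pair for child in children for pair in leaves(child)]

# Recovers the word-level semantic categories the quotation describes.
print(leaves(tree))
```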
“…the syntax trees produced by mainstream PCFG parsers and the DRS produced by the Boxer system (Bos, 2008). As a notable exception, Dukes (2014) presents a novel method to produce RCL output directly.…”
Section: Related Work (mentioning; confidence: 99%)