2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6854191
Learning a semantic parser from spoken utterances

Abstract: Semantic parsers map natural language input into semantic representations. In this paper, we present an approach that learns a semantic parser, in the form of a lexicon and an inventory of syntactic patterns, from ambiguous training data and is applicable to spoken utterances. We assume only the availability of a task-independent phoneme recognizer, making the approach easy to adapt to other tasks and imposing no a priori restriction on the vocabulary that the parser can process. In spite of these low requiremen…

Cited by 4 publications (4 citation statements)
References 14 publications
“…The algorithm is also applicable to textual input and has been shown to achieve state-of-the-art performance on written input (cf. Gaspers and Cimiano (2014)). The induced parser is represented in the form of a lexicon and an inventory containing syntactic constructions, and is thus well-suited to be transformed into a rule-based speech recognition grammar.…”
Section: The Applied Semantic Parsing Algorithm
confidence: 99%
“…Hence, we explore a 4-fold cross-validation scenario in which, for each fold, learning is performed using the written ambiguous training data for three games, while the spoken gold standard of the fourth game is used for testing, i.e. for performing both speech recognition and subsequent parsing of the resulting ASR transcriptions; the spoken data are the same as in Gaspers and Cimiano (2014). For application with the ASR we normalized the training data, which mainly comprised lowercasing and replacement of numbers in player names, e.g.…”
Section: Learning Scenario and Input Data
confidence: 99%
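The leave-one-game-out evaluation described in this citation statement can be sketched as follows. This is a minimal illustration, not code from the cited paper; the helper names `normalize` and `leave_one_game_out` and the `<num>` placeholder are assumptions made here for clarity.

```python
import re

def normalize(utterance: str) -> str:
    """Hypothetical normalization mirroring the description: lowercase the
    text and replace digit runs (e.g. in player names) with a placeholder."""
    return re.sub(r"\d+", "<num>", utterance.lower())

def leave_one_game_out(games: dict):
    """4-fold cross-validation: train on the written ambiguous data of three
    games, test on the spoken gold standard of the held-out fourth game."""
    names = sorted(games)
    for held_out in names:
        train = [normalize(u)
                 for g in names if g != held_out
                 for u in games[g]["written"]]
        test = games[held_out]["spoken"]  # fed to ASR, then parsed
        yield held_out, train, test
```

Each fold thus pairs normalized written training utterances with spoken test data from a game never seen during learning.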