Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2015
DOI: 10.3115/v1/n15-1088
Semantic parsing of speech using grammars learned with weak supervision

Abstract: Semantic grammars can be applied both as a language model for a speech recognizer and for semantic parsing, e.g. in order to map the output of a speech recognizer into formal meaning representations. Semantic speech recognition grammars are, however, typically created manually or learned in a supervised fashion, requiring extensive manual effort in both cases. Aiming to reduce this effort, in this paper we investigate the induction of semantic speech recognition grammars under weak supervision. We present empi…

Cited by 3 publications (3 citation statements)
References 22 publications
“…In [2] the re-ranking of the ASR hypotheses using an in-domain LM and a semantic parser significantly improves the accuracy of the transcription and semantic understanding. Furthermore, [6] introduce semantic grammars applicable for ASR and understanding using ambiguous context information.…”
Section: Introduction
confidence: 99%
“…Previous studies have shown that this information can be useful for ASR rescoring. The integration of semantic frames and target words in the recurrent neural network LM [1], the use of an in-domain LM and a semantic parser [2], the introduction of the semantic grammars with ambiguous context information [6] improve the accuracy of the transcriptions. Several techniques including subword units, adaptive softmax, and knowledge distillation with a large-scale model to train Transformer LMs are proposed in [8].…”
Section: Introduction
confidence: 99%
“…For example, exploring the topic and semantic context to enable the recovery of proper names [14], using a semantic language model based on the theory of frame semantics [2], assigning semantic category labels to entire utterances and re-ranking the N-best list of ASR [11]. [7] learns semantic grammar for the ASR system. In [5] authors combine information from the semantic parser and ASR's language model for re-ranking.…”
Section: Introduction
confidence: 99%