Interspeech 2017
DOI: 10.21437/interspeech.2017-518

Towards Zero-Shot Frame Semantic Parsing for Domain Scaling

Abstract: State-of-the-art slot filling models for goal-oriented human/machine conversational language understanding systems rely on deep learning methods. While multi-task training of such models alleviates the need for large in-domain annotated datasets, bootstrapping a semantic parsing model for a new domain using only the semantic frame, such as the back-end API or knowledge graph schema, is still one of the holy grail tasks of language understanding for dialogue systems. This paper proposes a deep learning based ap…
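As a rough illustration of the abstract's premise, the toy sketch below tags slots in an utterance using only natural-language slot descriptions, so a slot unseen in training can still be recognized. This is not the paper's deep learning model; the word-overlap heuristic, the slot names, and the cue-word filtering are all assumptions invented for this example.

```python
# Toy zero-shot slot tagging via slot descriptions (illustrative only;
# the paper itself uses a deep learning model, not word overlap).

def bag(text):
    """Lowercased bag-of-words for a string."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity between two word sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def tag(tokens, slot_descriptions, threshold=0.15):
    """Label each token with its best-matching slot, or 'O'.

    Each token's left-context window is compared against every slot
    description; tokens that literally appear in a description (cue
    words such as 'from') are treated as triggers, not values.
    """
    desc_bags = {slot: bag(d) for slot, d in slot_descriptions.items()}
    cue_words = set().union(*desc_bags.values())
    labels = []
    for i, tok in enumerate(tokens):
        if tok.lower() in cue_words:
            labels.append("O")          # cue word, not a slot value
            continue
        context = bag(" ".join(tokens[max(0, i - 1):i + 1]))
        best_slot, best = "O", threshold
        for slot, desc in desc_bags.items():
            s = jaccard(context, desc)
            if s >= best:
                best_slot, best = slot, s
        labels.append(best_slot)
    return labels

# Hypothetical travel-domain slots with hand-written descriptions.
descs = {
    "from_city": "departure city flying from",
    "to_city": "arrival city flying to",
}
tokens = "show flights from boston to denver".split()
print(tag(tokens, descs))  # ['O', 'O', 'O', 'from_city', 'O', 'to_city']
```

Because the tagger only consults the description strings, adding a new slot to a new domain requires writing one description, not collecting annotated utterances; this is the domain-scaling property the title refers to.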

Cited by 110 publications (149 citation statements)
References 12 publications
“…To compare our method against the state of the art, we compare against the models presented in (Jha et al. 2018), including the BoE models and their non-BoE variants. We also compare our method with another zero-shot model for slot tagging (Bapna et al. 2017) in domain adaptation settings.…”
Section: Baseline Systems
confidence: 99%
“…introduce a variational auto-encoding model for semi-supervised semantic parsing. Beyond semantic parsing, semi-supervised and adaptive learning are also common in natural language understanding (Tur et al., 2005; Bapna et al., 2017; Zhu et al., 2014).…”
Section: Related Work
confidence: 99%
“…However, they still have problems with new acts and slots. Bapna et al. (2017); Lee and Jha (2018); Zhu and Yu (2018) try to solve this with textual slot descriptions. In this paper, we propose atomic templates that describe act-slot-value triples rather than separate slots or acts.…”
Section: Related Work
confidence: 99%