2018 IEEE Spoken Language Technology Workshop (SLT)
DOI: 10.1109/slt.2018.8639519

A Re-Ranker Scheme For Integrating Large Scale NLU Models

Abstract: Large scale Natural Language Understanding (NLU) systems are typically trained on large quantities of data, requiring a fast and scalable training strategy. A typical design for NLU systems consists of domain-level NLU modules (domain classifier, intent classifier and named entity recognizer). Hypotheses (NLU interpretations consisting of various intent+slot combinations) from these domain-specific modules are typically aggregated with another downstream component. The re-ranker integrates outputs from domain-…
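For readers unfamiliar with this architecture, below is a minimal Python sketch of the kind of hypothesis re-ranking the abstract describes. The Hypothesis fields, the feature set, and the weights are illustrative assumptions, not the paper's actual model, which learns its scoring function from data.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One NLU interpretation from a domain-specific module (fields are illustrative)."""
    domain: str
    intent: str
    slots: dict          # slot name -> value, from the named entity recognizer
    domain_score: float  # domain classifier confidence
    intent_score: float  # intent classifier confidence

def rerank(hypotheses, weights=(0.5, 0.4, 0.1)):
    """Score cross-domain hypotheses with a simple linear model and sort best-first.

    The features (domain confidence, intent confidence, slot coverage) and the
    weights are assumptions for illustration only.
    """
    w_dom, w_int, w_slot = weights

    def score(h):
        slot_coverage = min(len(h.slots), 3) / 3.0  # crude proxy feature
        return w_dom * h.domain_score + w_int * h.intent_score + w_slot * slot_coverage

    return sorted(hypotheses, key=score, reverse=True)

# Hypotheses aggregated from two domain-specific NLU modules.
candidates = [
    Hypothesis("Music", "PlaySong", {"song": "Imagine"}, 0.82, 0.90),
    Hypothesis("Video", "PlayVideo", {"title": "Imagine"}, 0.75, 0.88),
]
best = rerank(candidates)[0]
print(best.domain, best.intent)  # -> Music PlaySong
```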

Cited by 24 publications (34 citation statements)
References 16 publications
“…As the next steps, we aim at extending the BranchyNet scheme to the entire stack of SLU models [2]. The BranchyNet scheme can also be combined with other modeling schemes such as model distillation and compression.…”
Section: Results (mentioning)
confidence: 99%
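The quoted passage refers to BranchyNet, an early-exit scheme in which side branches let confident inputs leave a network early while harder inputs continue through the full stack. A minimal PyTorch sketch of that idea follows; the layer sizes and max-probability exit threshold are illustrative assumptions (the original BranchyNet uses softmax entropy as its exit criterion).

```python
import torch
import torch.nn as nn

class BranchyClassifier(nn.Module):
    """BranchyNet-style early exit: a side branch after the first layer answers
    easy inputs cheaply; harder inputs continue through the full stack.
    Layer sizes and the exit threshold here are illustrative assumptions."""

    def __init__(self, n_in=128, n_hidden=64, n_classes=10, threshold=0.9):
        super().__init__()
        self.trunk1 = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.exit1 = nn.Linear(n_hidden, n_classes)   # early-exit branch
        self.trunk2 = nn.Sequential(nn.Linear(n_hidden, n_hidden), nn.ReLU())
        self.exit2 = nn.Linear(n_hidden, n_classes)   # final exit
        self.threshold = threshold

    def forward(self, x):
        # Per-example exit decision; shown for a batch of one for simplicity.
        h = self.trunk1(x)
        p1 = torch.softmax(self.exit1(h), dim=-1)
        if p1.max() >= self.threshold:  # confident enough: exit early
            return p1
        return torch.softmax(self.exit2(self.trunk2(h)), dim=-1)

probs = BranchyClassifier()(torch.randn(1, 128))
```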
“…Spoken Language Understanding (SLU) systems are core components of voice agents such as Apple's Siri, Amazon Alexa and Google Assistant, and can be designed in one of several ways, such as an end-to-end modeling scheme [1] or a collection of task-specific classifiers [2,3]. For a complex SLU system, the machine learning architecture can be computationally expensive, posing a challenge for applications such as On-Device-SLU.…”
Section: Introduction (mentioning)
confidence: 99%
“…We used n-grams extracted from training data as features for the models. A detailed description of our NLU system can be found in [25].…”
Section: Methods (mentioning)
confidence: 99%
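A minimal sketch of n-gram feature extraction of the kind this quote describes, using scikit-learn. The utterances and n-gram orders are illustrative assumptions; the cited system's exact configuration is described in [25].

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy utterances standing in for NLU training data (illustrative only).
train_utterances = [
    "play some jazz music",
    "what is the weather today",
    "set an alarm for seven",
]

vectorizer = CountVectorizer(ngram_range=(1, 2))  # unigram + bigram features
X = vectorizer.fit_transform(train_utterances)    # sparse document-term matrix
print(X.shape, vectorizer.get_feature_names_out()[:5])
```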
“…We conduct experiments on Natural Language Understanding (NLU) tasks, specifically a domain classification task and an SLU task to obtain domain-intent-entity combinations [10]. Although both multi-class [11,12] and one-vs-all (OVA) designs [13] exist for such tasks, a comparison between the two systems with asynchrony capabilities added in the OVA setup has not been explored.…”
Section: Introduction (mentioning)
confidence: 99%
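A brief sketch of the multi-class versus one-vs-all (OVA) contrast this quote draws, on synthetic stand-in data rather than the paper's task: in the OVA design each per-domain binary classifier is trained independently, which is what makes the asynchronous-update setup mentioned above possible.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Synthetic stand-in for a domain classification task (not the paper's data).
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Multi-class design: one joint model over all domains.
multiclass = LogisticRegression(max_iter=1000).fit(X, y)

# OVA design: one independent binary classifier per domain, so each can be
# retrained or updated without touching the others.
ova = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)

print(multiclass.predict(X[:3]), ova.predict(X[:3]))
```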