2019
DOI: 10.48550/arxiv.1901.10680
Preprint

Effective weakly supervised semantic frame induction using expression sharing in hierarchical hidden Markov models

Abstract: We present a framework for the induction of semantic frames from utterances in the context of an adaptive command-and-control interface. The system is trained on an individual user's utterances and the corresponding semantic frames representing controls. During training, no prior information on the alignment between utterance segments and frame slots and values is available. In addition, semantic frames in the training data can contain information that is not expressed in the utterances. To tackle this weakly …

Cited by 1 publication (4 citation statements)
References 21 publications
“…This paper explores two different retraining techniques: inductive sequence labeling (Section 4.1) and stochastic context-free grammar induction (Section 4.2). As described in van de Loo et al. (2019), FramEngine exhibits strong performance on the task of frame-slot filling for the PATCOR data. For retraining purposes, we can use FramEngine to decode its own training set, linking each word in the training set with a frame slot/value label through Viterbi decoding of the utterance.…”
Section: Retraining
confidence: 90%
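The Viterbi decoding step this citation describes, labeling each word of an utterance with its most likely frame slot/value tag under an HMM, can be sketched as follows. The tag names and toy probabilities below are illustrative assumptions for a command-and-control domain, not the actual FramEngine model:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state (tag) sequence for obs under a simple HMM."""
    floor = 1e-12  # smoothing floor for unseen emissions
    # V[t][s] = (log-prob of best path ending in state s at time t, backpointer)
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s].get(obs[0], floor)), None)
          for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p][0] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][prev][0] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s].get(obs[t], floor)), prev)
    # Backtrace from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Hypothetical slot/value tags and probabilities, purely for illustration.
states = ["action=move", "direction"]
start_p = {"action=move": 0.6, "direction": 0.4}
trans_p = {"action=move": {"action=move": 0.3, "direction": 0.7},
           "direction": {"action=move": 0.4, "direction": 0.6}}
emit_p = {"action=move": {"go": 0.7, "move": 0.2},
          "direction": {"left": 0.5, "right": 0.5}}

print(viterbi(["go", "left"], states, start_p, trans_p, emit_p))
# → ['action=move', 'direction']
```

Decoding the training utterances this way yields one slot/value tag per word, which is exactly the word-aligned supervision the retraining step needs.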
“…The training partitions in the P9 set are used incrementally as training data for FramEngine. Optimal parameters and information sources for FramEngine were obtained during extensive experiments on the P1-8 sets (van de Loo et al., 2019). The trained hierarchical hidden Markov model is then used to transcode the training utterances into slot-value tag sequences.…”
Section: Discussion
confidence: 99%