Interspeech 2009
DOI: 10.21437/interspeech.2009-697

Optimizing CRFs for SLU tasks in various languages using modified training criteria

Abstract: In this paper, we present improvements of our state-of-the-art concept tagger based on conditional random fields. Statistical models have been optimized for three tasks of varying complexity in three languages (French, Italian, and Polish). Modified training criteria have been investigated leading to small improvements. The respective corpora as well as parameter optimization results for all models are presented in detail. A comparison of the selected features between languages as well as a close look at the t…

Cited by 7 publications (4 citation statements)
References 6 publications
“…These models use both words and classes, and a rich set of lexical features such as word prefixes, suffixes, word capitalization information, etc. We note that the large gap between these CRF models is due to the fact that the CRF of [45] is trained with an improved margin criterion, similar to the large-margin principle of SVMs [46,47]. We note also that, according to the significance tests published in [43], a difference of 0.1 in CER is already statistically significant.…”
Section: Comparison with the state of the art (mentioning)
confidence: 99%
“…CRFs have been successfully used for SLU tasks [4]. We applied CRFs to the MEDIA corpus using the CRF++ toolkit (http://crfpp.googlecode.com/svn/trunk/doc/index.html).…”
Section: Two different SLU techniques have been studied: a generative ... (mentioning)
confidence: 99%
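The CRF++ toolkit mentioned above is driven by a feature template file that defines which observations become features. As a minimal illustrative sketch (not taken from the cited work), a template using only the current word and its immediate neighbors could look like:

```
# Unigram feature macros: %x[row,col] picks the token at a
# relative row offset (here, column 0 = the word itself).
U00:%x[-2,0]
U01:%x[-1,0]
U02:%x[0,0]
U03:%x[1,0]
U04:%x[2,0]

# Bigram macro: adds features over adjacent output labels.
B
```

Each `U` line expands into one feature function per training position; the `B` line enables label-transition features, which is what makes the model a linear-chain CRF rather than an independent per-token classifier.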
“…We defined a set of basic features that includes only lexical information, using a window that incorporates the two previous and the two following words. A more complete set of features could be defined for applying CRFs to SLU tasks [4]; however, in this work we have not carried out an in-depth study of the best combination of features.…”
Section: Two different SLU techniques have been studied: a generative ... (mentioning)
confidence: 99%
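The ±2 word window described above can be sketched as a small feature-extraction function. This is an illustrative sketch only (the function name and the `<BOS>`/`<EOS>` padding tokens are assumptions, and the cited work used CRF++ templates rather than Python):

```python
def window_features(words, i, size=2):
    """Lexical features for position i: the word itself plus the
    `size` previous and `size` following words, padded at sentence
    boundaries with <BOS>/<EOS> markers."""
    feats = {"w[0]": words[i]}
    for offset in range(1, size + 1):
        left, right = i - offset, i + offset
        feats[f"w[-{offset}]"] = words[left] if left >= 0 else "<BOS>"
        feats[f"w[+{offset}]"] = words[right] if right < len(words) else "<EOS>"
    return feats

# Example on a short (hypothetical) MEDIA-style French utterance:
sentence = ["je", "veux", "une", "chambre", "double"]
print(window_features(sentence, 2))
# → {'w[0]': 'une', 'w[-1]': 'veux', 'w[+1]': 'chambre',
#    'w[-2]': 'je', 'w[+2]': 'double'}
```

A dictionary of such features per token is the usual input format for CRF toolkits; richer feature sets (prefixes, suffixes, capitalization) would simply add more keys per position.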