2015
DOI: 10.1007/978-3-319-25789-1_9
Combining Continuous Word Representation and Prosodic Features for ASR Error Prediction

Cited by 12 publications (17 citation statements)
References 13 publications
“…This hypothesis is evaluated by the softmax value of the Correct label scored with the MS-MLP. Experiments have shown that this is a calibrated confidence measure more effective than word posterior probability when comparison is based on the Normalized Cross Entropy (NCE) [15], which measures the information contribution provided by confidence knowledge. Table 1 shows the NCE values obtained by these two confidence measures on the MEDIA test data whose details can be found in section 5.…”
Section: ASR Error Detection and Confidence Measure (mentioning)
confidence: 99%
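The Normalized Cross Entropy comparison in the statement above can be made concrete. The sketch below is a minimal, illustrative implementation of NCE over 0/1 correctness labels and per-word confidence scores; the toy data and variable names are assumptions for illustration, not values from the paper:

```python
import math

def nce(labels, scores, eps=1e-12):
    """Normalized Cross Entropy of confidence scores against 0/1 labels.

    NCE = (H(p_c) - H_conf) / H(p_c), where p_c is the empirical
    probability that a word is correct and H_conf is the average
    cross-entropy of the confidence scores. Higher is better: a perfect
    confidence measure approaches 1, an uninformative one approaches 0.
    """
    n = len(labels)
    p_c = sum(labels) / n
    h_max = -p_c * math.log2(p_c) - (1 - p_c) * math.log2(1 - p_c)
    h_conf = -sum(
        y * math.log2(max(c, eps)) + (1 - y) * math.log2(max(1 - c, eps))
        for y, c in zip(labels, scores)
    ) / n
    return (h_max - h_conf) / h_max

# Toy example: confidence scores that track correctness fairly well.
labels = [1, 1, 1, 0, 1, 0, 1, 1]
scores = [0.9, 0.8, 0.95, 0.2, 0.7, 0.3, 0.85, 0.9]
print(round(nce(labels, scores), 3))
```

A measure that merely outputs the base rate p_c for every word would score an NCE of 0, which is why NCE rewards calibrated, informative confidences rather than accuracy alone.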
“…Hence, they can capture different types of information: semantic, syntactic, etc. In our previous studies [5,11], we evaluated different kinds of word embeddings, including word2vecf on dependency trees [12], skip-gram provided by word2vec [13], and GloVe [14]. These evaluations were carried out on ASR error detection, natural language processing, analogical and similarity tasks.…”
Section: ASR Error Detection System (mentioning)
confidence: 99%
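The analogical tasks mentioned in the statement above evaluate embeddings by vector arithmetic and cosine similarity. The following sketch shows that evaluation procedure on hand-made 4-dimensional toy vectors; real skip-gram, GloVe, or word2vecf embeddings would be learned from large corpora and have hundreds of dimensions:

```python
import numpy as np

# Toy "embeddings" chosen so the capital/country relation is a fixed offset.
emb = {
    "paris":  np.array([1.0, 0.9, 0.1, 0.0]),
    "france": np.array([1.0, 0.1, 0.9, 0.0]),
    "rome":   np.array([0.9, 1.0, 0.1, 0.1]),
    "italy":  np.array([0.9, 0.2, 0.9, 0.1]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, vocab):
    # "a is to b as c is to ?": answer maximizes cosine with b - a + c.
    target = emb[b] - emb[a] + emb[c]
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(target, emb[w]))

print(analogy("paris", "france", "rome", emb))  # expected: "italy"
```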
“…We revealed that the combination of word embeddings through auto-encoder yields the best results compared to the other combination approaches (PCA and simple concatenation). Based on the results of these studies, we propose to use the best word embeddings (the three ones cited above) retained from the evaluation task [11] and to combine them with auto-encoder as in [5]. A detailed description of the word embeddings and the combination approaches is presented in [11,5].…”
Section: ASR Error Detection System (mentioning)
confidence: 99%
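The auto-encoder combination described above can be sketched as follows. This is a minimal illustration, assuming three pre-trained embedding sets for the same vocabulary (random stand-ins here): they are concatenated, and a small tanh auto-encoder is trained by gradient descent so its bottleneck activation serves as the combined embedding. Sizes and learning rate are arbitrary choices, not values from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for three embedding sets (e.g. skip-gram, GloVe, word2vecf).
n_words, d = 200, 50
E = np.hstack([rng.normal(size=(n_words, d)) for _ in range(3)])  # concatenation
E = (E - E.mean(0)) / E.std(0)

k = 60  # size of the combined representation (the bottleneck)
W_enc = rng.normal(scale=0.05, size=(E.shape[1], k))
W_dec = rng.normal(scale=0.05, size=(k, E.shape[1]))
lr = 0.05

mse0 = np.mean((np.tanh(E @ W_enc) @ W_dec - E) ** 2)  # before training
for _ in range(300):
    H = np.tanh(E @ W_enc)              # encoder: combined embedding
    err = H @ W_dec - E                 # reconstruction error
    # Backpropagation of the mean squared reconstruction error.
    g_dec = H.T @ err / n_words
    g_H = err @ W_dec.T * (1 - H ** 2)
    g_enc = E.T @ g_H / n_words
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
mse1 = np.mean((np.tanh(E @ W_enc) @ W_dec - E) ** 2)

combined = np.tanh(E @ W_enc)  # k-dim vectors fed to the downstream detector
print(f"reconstruction MSE: {mse0:.3f} -> {mse1:.3f}")
```

The intuition behind preferring this to PCA or plain concatenation is that the non-linear bottleneck can merge complementary information from the three embedding spaces rather than just decorrelating or stacking them.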
“…In these works (Tam et al., 2014; Ogawa & Hori, 2017), different neural architectures were exploited: multilayer perceptrons, recurrent neural networks, etc. In our previous studies (Ghannay et al., 2015c, 2016a), we investigated the use of different types of word embeddings. In (Ghannay et al., 2015c), we proposed a neural approach for detecting errors in automatic transcriptions and for calibrating the confidence measures produced by an ASR system.…”
Section: Introduction (unclassified)
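The multilayer-perceptron detectors mentioned in the statement above can be illustrated with a small sketch: a one-hidden-layer MLP trained by gradient descent to classify per-word feature vectors as correct or erroneous, whose output probability doubles as a confidence score. The synthetic features and all hyperparameters here are illustrative assumptions, not the cited systems' configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: each "word" is a feature vector (embeddings plus
# prosodic/ASR features in the real systems); label 1 = correct word.
n, d = 500, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

h = 16  # hidden layer size
W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=h);      b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(300):
    H = np.tanh(X @ W1 + b1)
    p = sigmoid(H @ W2 + b2)        # output probability = confidence score
    g = (p - y) / n                 # gradient of the cross-entropy loss
    gW2 = H.T @ g; gb2 = g.sum()
    gH = np.outer(g, W2) * (1 - H ** 2)
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

acc = np.mean((p > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```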