7th European Conference on Speech Communication and Technology (Eurospeech 2001) 2001
DOI: 10.21437/eurospeech.2001-500
Is this conversation on track?

Abstract: Confidence annotation allows a spoken dialog system to accurately assess the likelihood of misunderstanding at the utterance level and to avoid breakdowns in interaction. We describe experiments that assess the utility of features from the decoder, parser and dialog levels of processing. We also investigate the effectiveness of various classifiers, including Bayesian Networks, Neural Networks, SVMs, Decision Trees, AdaBoost and Naive Bayes, to combine this information into an utterance-level confidence metric.
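The approach described in the abstract — combining decoder-, parser- and dialog-level features into an utterance-level confidence metric via a trained classifier — can be sketched as a small binary-classification example. The feature names and toy values below are illustrative assumptions, not the paper's actual feature set; a decision tree stands in for the several classifiers the paper compares.

```python
# Minimal sketch of utterance-level confidence annotation as binary
# classification. Feature names and values are hypothetical; the paper
# evaluates several classifiers (SVMs, AdaBoost, etc.) on real features
# from the decoder, parser, and dialog manager.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-utterance features from three processing levels:
# [decoder acoustic score, parser coverage, dialog-state match flag]
X = [
    [0.92, 1.00, 1],   # well-understood utterance
    [0.85, 0.90, 1],
    [0.40, 0.30, 0],   # likely misunderstanding
    [0.55, 0.50, 0],
]
y = [1, 1, 0, 0]       # 1 = correctly understood, 0 = misunderstood

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

# Confidence for a new utterance = predicted probability of class 1;
# a low value signals the dialog may be going off track.
conf = clf.predict_proba([[0.45, 0.40, 0]])[0][1]
```

A dialog system would compare `conf` against a rejection threshold and trigger a clarification or repair strategy when it falls below.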

Cited by 22 publications (5 citation statements); references 9 publications.
“…We obtained whole utterance manual annotations for the period of October through November 1999 and partial utterance annotations for the period of mid June through mid August of 2001. (See [3] for further information on the annotation scheme.) The resulting sub-corpora were used to locate problem regions in dialog and to drive learning-based experiments (e.g., [4]).…”
Section: Additional Annotation
confidence: 99%
“…In state-of-the-art, elaborate confidence measures, however, many other information sources are exploited as well. Examples are the phoneme duration [1,2], properties of the search during the recognition and of the resulting word graph [3,4], the distance between phoneme strings obtained from word recognition and from phoneme recognition [5], the speaking rate [6], the prosody pattern of the sentence [1], sentence parsing [7] and the dialogue manager [7].…”
Section: Introduction
confidence: 99%
“…For a more detailed description of the service, see Appendix A. Some work on evaluating confidence measures in this domain has been carried out at Carnegie Mellon University (Zhang and Rudnicky, 2001; Jiang et al. 2001) and at the University of Colorado (San-Segundo et al., 2000a; San-Segundo et al., 2001a; Hacioglu and Ward, 2002; Sameer and Ward, 2002), proposing measures at the word, concept and phrase level. Finally, it is worth mentioning the work by Carpenter (Carpenter et al., 2001), which proposes dialog parameters for evaluating the quality of the interaction.…”
Section: Page 2-8
“…Finally, note that in (Carpenter et al., 2001; San-Segundo et al., 2001e) we can find several proposals for the automatic on-line analysis of interaction quality in user-system dialogs.…”
Section: Page 2-9