2008 IEEE Spoken Language Technology Workshop
DOI: 10.1109/slt.2008.4777859

Evaluation of a spoken dialogue system for controlling a Hifi audio system

Abstract: In this paper, a Bayesian Networks (BNs) approach to dialogue modelling [1] is evaluated in terms of a battery of both subjective and objective metrics. A significant effort has been made to improve the system's handling of contextual information. Consequently, besides typical usability measures such as task and dialogue completion rates, dialogue time, etc., we have included a new figure that measures the contextuality of the dialogue as the number of turns where contextual inform…

Cited by 8 publications (9 citation statements) | References 1 publication
“…It should be noted that the first non-affective evaluation was conducted (using the non-adaptive HiFi agent) with the intention of only measuring the agent's performance (i.e., its ability to execute the actions requested by users) (Fernández-Martínez et al., 2008), without foreseeing the integration of any social intelligence.…”
Section: The Corpus Used
confidence: 99%
“…A twofold laboratory-controlled evaluation process, aimed at assessing the system both objectively and subjectively, was conducted in the past (Fernández-Martínez et al., 2008). In the objective evaluation, metrics that measure the dialog features were collected automatically: a log file, maintained at the end of each dialog session, captures the measurements of each of the metrics described below in Table 1.…”
Section: Metrics Of Mixed-Initiative HiFi-AV2 Spoken Dialog
confidence: 99%
“…To model satisfaction, we used the satisfaction rating as the target and conversational features as predictors, obtained from a corpus collected in a past evaluation [14]. The users involved in the evaluation had no previous experience interacting with the HiFi agent, and their participation was not rewarded.…”
Section: Affect Detection Using Satisfaction Ratings (Target) An…
confidence: 99%
“…User affect can be reflected in the user's satisfaction judgment [1,5,12], and the relationship between affect and satisfaction judgment has been empirically proven in [10,11] and also in our work, which will be described further below. To model user affect, we used the satisfaction rating as the target and conversational features as predictors, obtained from a corpus collected in a past evaluation [7]. What makes our approach different from others is that we used target and predictor variables whose potential for modelling affect is often ignored.…”
Section: Automatic Detection Of Affect
confidence: 99%