Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction, 2008
DOI: 10.1145/1349822.1349861

The roles of haptic-ostensive referring expressions in cooperative, task-based human-robot dialogue

Abstract: Generating referring expressions is a task that has received a great deal of attention in the natural-language generation community, with an increasing amount of recent effort targeted at the generation of multimodal referring expressions. However, most implemented systems tend to assume very little shared knowledge between the speaker and the hearer, and therefore must generate fully-elaborated linguistic references. Some systems do include a representation of the physical context or the dialogue context; how…

Cited by 28 publications (21 citation statements)
References 13 publications
“…Additionally, as Foster et al showed in [12], many spoken expressions that refer to objects in the world can only be understood together with the gestures that accompany the spoken part of the message. For example, Foster et al present a real dialogue from a joint construction task, in which two humans assemble tangram models together.…”
[Fig. 2: Example of a real dialogue between two humans during a joint construction task. HUMAN 1: "And I'll get this" / HUMAN 1: "And then the red one" / HUMAN 2: "'Kay I've got the yellow" / HUMAN 1: "Cool"]
Section: Speech Processing (mentioning)
confidence: 99%
“…Note that this slowness is not unique to our robot. For instance, Foster et al report that it takes a few seconds or more for their robot to react autonomously to users at the moment [10]. Thus, finding the upper boundary for which users will wait is important.…”
Section: Why Does Response Time Matter? (mentioning)
confidence: 99%
“…Studies in this area have a long history in computational linguistics/semantics (e.g., Claassen (1992); Krahmer and van der Sluis (2003)), human-robot interaction (e.g., Kelleher and Kruijff (2006); Foster et al (2008)), and computational and human discourse studies (e.g., Bortfeld and Brennan (1997); Funakoshi et al (2004); Viethen and Dale (2008)). Following these, we seek to build models for generating, recognizing, and classifying referring expressions that are both natural and useful to the human interlocutors of computational dialogue systems.…”
Section: Introduction (mentioning)
confidence: 99%