Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction 2008
DOI: 10.1145/1349822.1349849

Integrating vision and audition within a cognitive architecture to track conversations

Cited by 35 publications (27 citation statements); References 32 publications
“…This may be a simple and natural gesture for making the robot respond rapidly to the detection of a sound before ManyEars can provide a location for the detected sound. Such an addition would be in line with the observation that external observers judge an attentive robot, which turns its head toward the interlocutor in a two-person, turn-taking conversation, to be more natural than a distracted robot (one moving its head with a 500 ms delay) (Trafton, Bugajska, Fransen, & Ratwani, 2008). Finally, a participant commented that he would have found a much lower delay unacceptable had he been trying to disturb a person instead of a robot; he explained that he did not really expect contemporary robots to react to his interruptions.…”
Section: Computing Resource Management Feasibility Study (supporting)
confidence: 55%
“…Roy et al. used POMDPs to account for speech recognition errors in state-based dialogue transitions [14,15]. For social robots, many architectures for cognitive processing have been developed [16,17]. The BIRON system has used state-transition models [18] and common-ground theory [19] to direct dialogue.…”
Section: Dialogue Management in Robotics (mentioning)
confidence: 99%
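The idea behind POMDP-based dialogue management mentioned above is to track a probability distribution (a belief) over possible user goals and update it after each noisy speech-recognition observation, rather than committing to a single recognized state. The following is a minimal sketch of that belief update; the two dialogue states, the action, and all probabilities are illustrative assumptions for exposition, not values from the cited work.

```python
# Minimal sketch of POMDP belief updating for dialogue state tracking.
# States, actions, observations, and probabilities are hypothetical
# illustrations, not the models used by Roy et al.

def update_belief(belief, action, observation, T, O):
    """Bayesian update: b'(s') ∝ O(s', a, o) * sum_s T(s, a, s') * b(s)."""
    new_belief = {}
    for s2 in belief:
        new_belief[s2] = O[s2][action][observation] * sum(
            T[s][action][s2] * belief[s] for s in belief
        )
    total = sum(new_belief.values())  # normalize so the belief sums to 1
    return {s: p / total for s, p in new_belief.items()}

# Transition model T[s][a][s']: the user's goal tends to persist.
T = {
    "want_info": {"ask": {"want_info": 0.8, "want_end": 0.2}},
    "want_end":  {"ask": {"want_info": 0.1, "want_end": 0.9}},
}
# Observation model O[s'][a][o]: noisy speech recognition of the reply.
O = {
    "want_info": {"ask": {"heard_question": 0.7, "heard_goodbye": 0.3}},
    "want_end":  {"ask": {"heard_question": 0.2, "heard_goodbye": 0.8}},
}

belief = {"want_info": 0.5, "want_end": 0.5}
belief = update_belief(belief, "ask", "heard_goodbye", T, O)
print(belief)
```

After hearing a (possibly misrecognized) "goodbye", the belief shifts toward the `want_end` state without discarding the alternative, which is how the POMDP formulation absorbs recognition errors.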
“…1, ventures beyond traditional computer displays and mouse/keyboard manipulation to establish embodied presence by first and foremost extending the representation of the visual and aural modules to enable 3D object and sound localization [28,57]. We also extended ACT-R's capabilities to incorporate a locomotion faculty (the "moval" module) and a cognitive-map based spatial reasoning capability (the "spatial" module).…”
Section: Cognitive Architecture and Robot Control (mentioning)
confidence: 99%