Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction 2012
DOI: 10.1145/2157689.2157840
Tell me when and why to do it!

Cited by 32 publications (6 citation statements). References 12 publications.
“…It is often the case that during communication a robot will encounter new words, new objects, and new actions it does not have existing knowledge about. As shown in this article and other recent work (Cantrell et al. 2012; Mohan, Kirk, and Laird 2013; Mohseni-Kabir et al. 2015; Thomason et al. 2016), language and collaborative dialogue play an important role in enabling the robot to continuously learn grounded meanings, the environment, and tasks from its human partners. To further support interactive robot learning through natural language dialogue, our current work is to develop approaches to ground language to participants of actions in more complex visual scenes (for example, a kitchen environment) (Yang et al. 2016).…”
Section: Discussion (mentioning)
confidence: 81%
“…A closely related problem is sensing the affective states of the human in the loop and communicating the AI agent's own intentions to the human. This communication can be done in multiple natural modalities, including speech, language, and gesture recognition (Cantrell et al. 2012). Human-AI communication can also be supported by recent technologies such as augmented reality and brain-computer interfaces.…”
Section: Communicating With Humans (mentioning)
confidence: 99%
“…Others have created robotic systems that interact using dialog (Cantrell et al., 2012; Dzifcak et al., 2009; Hsiao, Tellex, Vosoughi, Kubat, & Roy, 2008; Skubic et al., 2004). Bauer et al. (2009) built a robot that can find its way through an urban environment by interacting with pedestrians using a touch screen and gesture recognition system.…”
Section: Related Work (mentioning)
confidence: 99%