PsycEXTRA Dataset 2011
DOI: 10.1037/e578902012-095
Defining next-generation multi-modal communication in human robot interaction

Cited by 12 publications (7 citation statements); References: 0 publications
“…While machines currently possess the ability to detect certain implicit cues, e.g., via the recognition of facial expressions (Picard et al., 2001), they are limited in their ability to detect contextual cues. For example, because coordination involves a complex and varying presentation of implicit communication cues (Lackey et al., 2011), it is difficult and expensive to support machine cue perception at a human level. Detection, interpretation, and reasoning about these cues from a human perspective (Baker et al., 2011) are imperative to ensure effective coordination.…”
Section: Gaps in Machine Competencies for HMT
confidence: 99%
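The statement above concerns machines detecting implicit cues across several channels (facial expression being the one modality named). As a minimal illustration of one way such per-modality detections might be combined into a single cue estimate, the sketch below uses a weighted average of detector confidences; all function names, modalities, and weights are assumptions for this sketch, not part of the cited work.

```python
# Hypothetical sketch: fuse confidence scores from several implicit-cue
# detectors (e.g., facial expression, gesture, prosody) into one estimate.
# Modality names and weights are illustrative assumptions only.

def fuse_cue_confidences(detections, weights):
    """Weighted average of per-modality confidences in [0, 1].

    detections: dict mapping modality name -> detector confidence
    weights:    dict mapping modality name -> relative weight
    """
    total_weight = sum(weights[m] for m in detections)
    if total_weight == 0:
        return 0.0
    return sum(detections[m] * weights[m] for m in detections) / total_weight

# Example: a strong facial-expression signal, weaker gesture and prosody cues.
detections = {"facial_expression": 0.9, "gesture": 0.6, "prosody": 0.3}
weights = {"facial_expression": 0.5, "gesture": 0.3, "prosody": 0.2}
score = fuse_cue_confidences(detections, weights)  # 0.69
```

A real system would of course need far richer context modeling than a fixed weighted average, which is precisely the gap the quoted passage identifies.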
“…Many advancements have been made to overcome performance issues in robot teleoperation, improving processes such as robot responsiveness and camera video bandwidth (Chen et al., 2007). In addition, further advances in technology and artificial intelligence bring more autonomous capabilities, from enhanced perception and object recognition to situation assessment and decision-making (Barnes et al., 2014; Schuster et al., 2013), multi-robot, multi-operator scenarios (Chen and Barnes, in press; Fincannon et al., 2011), individual differences (Chen, 2011), human-robot trust issues (Hancock et al., 2011), supervisory control (Chen and Barnes, 2012; Chen et al., 2011), and multimodal/bidirectional communications (Lackey et al., 2011), to better support more autonomous robots. As robots become more autonomous, their participation in combat situations expands.…”
Section: Introduction
confidence: 99%
“…A number of efforts in Human-Robot Interaction (HRI) are focused on transforming the common perception of robots from tools into teammates, collaborators, or partners (e.g., Fiore, Elias, Gallagher, & Jentsch, 2008; Hoffman & Breazeal, 2004; Lackey, Barber, Reinerman-Jones, Badler, & Hudson, 2011; Phillips, Ososky, & Jentsch, 2011). Though the development of social-cognitive mechanisms has received significantly less emphasis in HRI, it is essential for effective human-robot teaming, as such mechanisms allow robots to function naturally and intuitively during their interactions with humans (Breazeal, 2004).…”
Section: Introduction
confidence: 99%