2022
DOI: 10.1016/j.isci.2022.104462
Human but not robotic gaze facilitates action prediction

Cited by 5 publications (11 citation statements)
References 100 publications
“…We replicated the findings from Experiment 1 with faster RTs when participants attribute intentions to humans rather than the triangle and robots. Moreover, these results expand previous reports where online participants were asked to infer the agents' intentions, and the graspable objects and text bubble were in a fixed position (Experiment 4 in [18]). The findings from Experiment 3 also enable us to exclude the possibility that the results in Experiment 2 were driven by a difference between laboratory and online samples.…”
Section: Discussion (Experiments), supporting
Confidence: 87%
“…Experiments 4, 5, and 6 were not pre-registered, as we tried to replicate results obtained in Experiment 2 with a smaller sample size and slightly different task instructions. We adopted the same statistical approach as our previous work [18] to facilitate comparison across all experiments and studies.…”
Section: General Methodology, mentioning
Confidence: 99%
“…This understanding would allow the robot partner to control its speed, trajectory, and action planning. In HRCom, as in human communication, multiple input channels, called modes, exist, such as gaze ( Tidoni et al, 2022 ), hand gestures ( Liu and Wang 2018 ), natural language interfaces ( Fogli et al, 2022 ), voice commands ( Bingol and Aydogmus 2020 ), and facial expressions ( Spezialetti et al, 2020 ; Chiurco et al, 2022 ), and much research has been carried out to detect and classify them. These multiple modes theoretically lend redundancy to the systems, but they are advantageous in industrial settings that are full of noise and disturbances.…”
Section: Recent Advances in Industrial Human-Robot Communication, mentioning
Confidence: 99%