2017
DOI: 10.1038/s41598-017-03521-2
Multisensory Perception of Contradictory Information in an Environment of Varying Reliability: Evidence for Conscious Perception and Optimal Causal Inference

Abstract: Two psychophysical experiments examined multisensory integration of visual-auditory (Experiment 1) and visual-tactile-auditory (Experiment 2) signals. Participants judged the location of these multimodal signals relative to a standard presented at the median plane of the body. A cue conflict was induced by presenting the visual signals with a constant spatial discrepancy to the other modalities. Extending previous studies, the reliability of certain modalities (visual in Experiment 1, visual and tactile in Exp…

Cited by 17 publications (18 citation statements). References 42 publications.
“…Statistically significant results indicate that when emotional information is incongruous across the auditory and visual modalities, the likelihood that people accurately recognize the emotional expression of the robot is significantly decreased, compared to the situation where congruent information is presented across the two channels. Our findings are in line with previous work in psychology, neuroscience and HCI literature and suggest that theories about MI in Human-Human interactions (e.g., [34][35][36][37][38][39][40][41][42][43][44][45]) and Human-Agent interactions (e.g., [26][27][28][29][30] ), also extend to Human-Robot interactions. The descriptive analysis of the emotion recognition scores revealed that emotional expressions that contained a happy body and a sad voice (or vice versa) resulted in a confused perception, where the emotional expression of the robot was perceived by some people as happiness and by others as sadness.…”
Section: Overview and Significance of Results (supporting)
confidence: 91%
“…The results suggest strong bidirectional links between emotion detection processes in vision and audition. Additionally, there is accumulating evidence that integration of different modalities, when they are congruent and synchronous, leads to a significant increase in emotion recognition accuracy [41]. However, when information is incongruent across different sensory modalities, integration may lead to a biased percept, and emotion recognition accuracy is impaired [41].…”
Section: Multisensory Interaction (MI) Research in Psychology and Neuroscience (mentioning)
confidence: 99%
“…Causal inference formalizes this process as a normative statistically optimal computation that is fundamental for sensory perception and cognition (1)(2)(3)(4)(5). The causal inference framework has previously been used to explain processing of multimodal signals, including auditory-visual (1)(4)(5)(6)(7)(8)(9), visual-speech (10,11), visual-vestibular (12,13), and visual-tactile interactions (14,15). However, its effectiveness in framing more general problems of sensory perception within the visual domain has remained unclear.…”
(mentioning)
confidence: 99%
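The causal-inference framework invoked in this statement has a compact closed form under the standard generative model: two noisy cues either share one source or arise from two independent sources, and the observer weighs both hypotheses. Below is a minimal Python sketch of that computation, assuming Gaussian cue noise, a zero-mean Gaussian spatial prior, and model averaging (in the spirit of the framework cited above, e.g., Körding et al., 2007); all parameter values and function names are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of Bayesian causal inference for audiovisual localization,
# assuming Gaussian noise and model averaging (cf. Koerding et al., 2007).
# All numbers are illustrative assumptions, not parameters from the paper.
import math

def gauss(x, mu, var):
    """Gaussian density N(x; mu, var)."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def p_common(x_v, x_a, var_v, var_a, var_p, prior_c1):
    """Posterior probability that the visual and auditory cues share a cause."""
    # Common cause: the shared source location is integrated out analytically.
    var_sum = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = math.exp(-0.5 * ((x_v - x_a) ** 2 * var_p + x_v ** 2 * var_a
                               + x_a ** 2 * var_v) / var_sum) \
        / (2 * math.pi * math.sqrt(var_sum))
    # Independent causes: each cue is explained by its own source draw.
    like_c2 = gauss(x_v, 0.0, var_v + var_p) * gauss(x_a, 0.0, var_a + var_p)
    return like_c1 * prior_c1 / (like_c1 * prior_c1 + like_c2 * (1 - prior_c1))

def estimate_auditory(x_v, x_a, var_v, var_a, var_p, prior_c1):
    """Model-averaged auditory location estimate."""
    pc1 = p_common(x_v, x_a, var_v, var_a, var_p, prior_c1)
    # If bound: reliability-weighted fusion of both cues and the prior.
    s_fused = (x_v / var_v + x_a / var_a) / (1 / var_v + 1 / var_a + 1 / var_p)
    # If not bound: the auditory cue is combined with the spatial prior alone.
    s_aud = (x_a / var_a) / (1 / var_a + 1 / var_p)
    return pc1 * s_fused + (1 - pc1) * s_aud

# Example: a reliable visual cue 10 deg right of a less reliable auditory cue.
print(estimate_auditory(x_v=10.0, x_a=0.0, var_v=1.0, var_a=16.0,
                        var_p=100.0, prior_c1=0.5))
```

Lowering `prior_c1` or widening the cue discrepancy pushes the posterior toward separate causes, so the auditory estimate falls back toward its unimodal value — the kind of behavior the constant-conflict manipulation described in the abstract is designed to probe.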
“…To efficiently recalibrate, the perceptual system must infer whether the discrepancy between two sensory cues is due to sensory inaccuracies or whether the cues simply reflect distinct sources. Ideally, recalibration should only occur when a discrepancy can be attributed to sensory inaccuracies (Mahani et al, 2017). We argue that during bimodal trials the VE might have decreased when feedback was based on audition relative to when feedback was based on vision due to a decreased binding tendency which manifests in a reduced prior probability of a common cause (Körding et al, 2007).…”
Section: Ventriloquism Aftereffect (mentioning)
confidence: 85%
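The quoted argument, that recalibration should occur only when a discrepancy is plausibly due to sensory inaccuracy, can be illustrated with a toy simulation in which the auditory map shifts toward vision in proportion to the inferred probability of a common cause. The sketch below is an assumed formulation for illustration, not the authors' fitted model; the learning rate, variances, and trial structure are made up.

```python
# Toy simulation (an assumed formulation, not the authors' model) of
# causal-inference-gated recalibration: the auditory map shifts toward
# vision only in proportion to the inferred common-cause probability.
import math

def gauss(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def p_common(x_v, x_a, var_v, var_a, var_p, prior_c1):
    # Same common-cause posterior as in the sketch above, repeated so that
    # this snippet runs on its own.
    var_sum = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = math.exp(-0.5 * ((x_v - x_a) ** 2 * var_p + x_v ** 2 * var_a
                               + x_a ** 2 * var_v) / var_sum) \
        / (2 * math.pi * math.sqrt(var_sum))
    like_c2 = gauss(x_v, 0.0, var_v + var_p) * gauss(x_a, 0.0, var_a + var_p)
    return like_c1 * prior_c1 / (like_c1 * prior_c1 + like_c2 * (1 - prior_c1))

def aftereffect(n_trials, disparity, prior_c1, eta=0.05,
                var_v=1.0, var_a=16.0, var_p=100.0):
    """Cumulative auditory-map shift after repeated discrepant exposures."""
    shift = 0.0
    for _ in range(n_trials):
        x_v, x_a = disparity, shift  # sound at 0, heard through shifted map
        pc1 = p_common(x_v, x_a, var_v, var_a, var_p, prior_c1)
        shift += eta * pc1 * (x_v - x_a)  # recalibrate only when bound
    return shift

# A stronger binding tendency (higher prior_c1) yields a larger aftereffect.
for prior in (0.8, 0.2):
    print(f"prior_c1={prior}: aftereffect ~ {aftereffect(50, 10.0, prior):.2f} deg")
```

With the lower prior, the inferred binding probability stays small on every trial, so the accumulated shift (the aftereffect) is smaller — matching the reduced ventriloquism effect the quote attributes to a decreased binding tendency and a reduced prior probability of a common cause.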