2013
DOI: 10.1016/j.neuroimage.2012.09.054
Vision holds a greater share in visuo-haptic object recognition than touch

Cited by 40 publications (52 citation statements)
References 38 publications (68 reference statements)
“…Since vision provides information about several object features in parallel and even if the object is outside the reaching space, there might be an overall dominance of vision in object recognition, at least if objects have to be recognized predominantly based on their shape. In line with this notion, we have recently found an asymmetry in the processing of crossmodal information during visual and haptic object recognition (Kassuba et al, 2013a). Using a visuo-haptic delayed-match-to-sample task during functional magnetic resonance imaging (fMRI), the direction of delayed matching (visual-haptic vs. haptic-visual) influenced the activation profiles in bilateral LO, FG, anterior (aIPS) and posterior intraparietal sulcus (pIPS), that is, in regions which have previously been associated with visuo-haptic object integration (Grefkes et al, 2002; Saito et al, 2003; Stilla and Sathian, 2008; Kassuba et al, 2011; for review see Lacey and Sathian, 2011).…”
Section: Introduction (supporting)
confidence: 69%
“…Specifically, we applied real or sham (non-effective) offline 1 Hz rTMS to the left LO immediately before subjects performed a visuo-haptic delayed-match-to-sample task during fMRI. The published results reported above (Kassuba et al, 2013a) present the results after sham rTMS, the current paper focuses on how these multisensory interaction effects were modulated by real rTMS. During fMRI, a visual or haptic sample object (S1) and a visual or haptic target object (S2) were presented sequentially, and subjects had to indicate whether the identity of both objects was the same (congruent) or not (incongruent).…”
Section: Introduction (mentioning)
confidence: 92%
“…A recent study suggests that underlying neural activity is asymmetric between the two crossmodal conditions. Using a match-to-sample task, Kassuba et al (2013) showed that bilateral lateral occipital complex (LOC), fusiform gyrus (FG), and anterior intraparietal sulcus (aIPS) selectively responded more strongly to crossmodal, compared to unimodal, object matching when haptic targets followed visual samples, and more strongly still when the haptic target and visual sample were congruent rather than incongruent; however, these regions showed no such increase for visual targets in either crossmodal or unimodal conditions. This asymmetric increase in activation in the visual-haptic condition may reflect multisensory binding of shape information and suggests that haptics – traditionally seen as the less reliable modality – has to integrate previously presented visual information more than vision has to integrate previous haptic information (Kassuba et al, 2013).…”
Section: Haptic and Visuo-haptic Object Recognition (mentioning)
confidence: 99%
“…The accuracy of the memory association depends on the intensity of the cumulated multisensory experience or knowledge. Besides, as vision is believed to be more effective than other senses (especially touch) in processing several object features in parallel and exploring objects more efficiently (more accurately and more rapidly) (Jones and O'Neil, 1985; Jones, 1981; Kassuba et al., 2013), much research has investigated the possibility of perceiving tactile information through vision. For example, Klatzky and Lederman (2010) stated in their review of multisensory perception of objects' texture properties that some aspects of texture can be represented by touch and vision without evident bias detected between different sensory modalities.…”
Section: Introduction (mentioning)
confidence: 99%