2020
DOI: 10.1016/j.cub.2020.04.039
Shared Representation of Visual and Auditory Motion Directions in the Human Middle-Temporal Cortex

Abstract Highlights:
- Cross-modal decoding of visual and auditory motion directions in hMT+/V5
- Motion-direction representation is, however, not abstracted from the sensory input
- We reveal a multifaceted representation of multisensory motion signals in hMT+/V5

Citations: cited by 36 publications (60 citation statements)
References: 83 publications
“…Over the years, similar results were reported by several additional studies documenting, in congenitally blind adults, maintained specializations for other computations typically processed in the dorsal "visual" stream, previously conceived as strictly visual. This was the case for spatial localization in the Middle Occipital Gyrus (MOG) and motion detection in the MT+ complex, experienced via both audition and touch (Bedny et al., 2010; Collignon et al., 2011; Dormal et al., 2016; Jiang et al., 2014; Matteau et al., 2010; Poirier et al., 2006; Renier et al., 2010; Ricciardi et al., 2007; Strnad et al., 2013; Wolbers et al., 2011) (see also Hagen et al., 2002 and Rezk et al., 2020 for convergent results in sighted adults) (Fig. 1).…”
Section: Loosening the Unisensory Constraints on the Emergence of Ana… (mentioning)
confidence: 96%
“…Interestingly, however, the ‘deaf’ A-motion-STC was found to exert a stronger excitatory influence over motion-selective activity in hMT+/V5 than its reciprocal from hMT+/V5. A tentative interpretation of this observation might relate to the intrinsic multifaceted representation of motion signals in hMT+/V5 (Rezk et al., 2020) and its potential role as a higher-level hub for motion processing, which might result in higher sensitivity of this region to motion information conveyed by auditory and visual motion-sensitive regions. More specifically, predetermined excitatory afferent connections from A-motion-STC to hMT+/V5 might exceed in quantity and excitatory strength their reciprocal from hMT+/V5 and represent a privileged target for cross-modal recycling of long-range connectivity following early auditory deprivation.…”
Section: Discussion (mentioning)
confidence: 95%
“…Our observation also suggests a prominent lateralization of the reorganization process for moving stimuli in the right hemisphere of deaf individuals, consistent with the well-known right lateralization of visuo-spatial processing (Corbetta and Shulman, 2002) and the observation that cortical thickness of the right planum temporale is greater in deaf individuals with better visual motion detection thresholds (Shiell et al., 2016). Furthermore, the enhanced multivoxel decoding accuracy detected in the reorganized right planum temporale of deaf people suggests that this region might be capable of representing discrete motion trajectories, a feature typically found in occipital regions selective for visual motion, such as hMT+/V5 (Kamitani and Tong, 2006; Rezk et al., 2020).…”
Section: Discussion (mentioning)
confidence: 96%
“…Recent works have suggested that hMT+/V5 and hPT are part of a network involved in the processing and integration of audio-visual motion information (Gurtubay-Antolin et al., 2021). For instance, in addition to its well-documented role in visual motion, hMT+/V5 also responds preferentially to auditory motion (Poirier et al., 2005) and contains information about planes of auditory motion using a similar representational format in vision and audition (Rezk et al., 2020).…”
Section: Introduction (mentioning)
confidence: 99%