Multimodal deep neural decoding of visual object representation in humans
Preprint (2022). DOI: 10.1101/2022.04.06.487262

Abstract: Perception and categorization of objects in a visual scene are essential to grasp the surrounding situation. However, it is unclear how neural activities in spatially distributed brain regions, particularly their temporal dynamics, represent visual objects. To address this issue, we explored the spatial and temporal organization of visual object representations using concurrent functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), combined with neural decoding by deep neural networks […]
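The abstract describes decoding object representations from two recording modalities at once. As a rough illustration only, the sketch below shows early fusion of fMRI and EEG features followed by a small feed-forward classifier; the array shapes, the synthetic data, and the choice of scikit-learn's MLPClassifier are assumptions for illustration and do not reflect the authors' actual pipeline.

```python
# Minimal sketch (not the authors' code): decode object category from
# concatenated fMRI and EEG features with a small feed-forward network.
# Shapes, labels, and the classifier are placeholders chosen for illustration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_trials = 200
fmri = rng.standard_normal((n_trials, 500))       # placeholder: voxel responses per trial
eeg = rng.standard_normal((n_trials, 64 * 50))    # placeholder: channels x time points, flattened
labels = rng.integers(0, 5, size=n_trials)        # 5 hypothetical object categories

# Early fusion: concatenate the two modalities into one feature vector per trial.
X = np.hstack([fmri, eeg])

decoder = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0),
)
scores = cross_val_score(decoder, X, labels, cv=5)
print(f"Cross-validated decoding accuracy: {scores.mean():.2f}")
```

With real data, the same structure would take trial-wise fMRI response patterns and epoched EEG in place of the random arrays; the random labels here mean chance-level accuracy is the expected output.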

Cited by 1 publication (1 citation statement). References: 80 publications.
“…The interpretation of EEG decoding models could also benefit from their application to other sensory domains, as well as from the implementation of transfer learning between modalities (Hebart, Contier et al. 2023; Watanabe, Miyoshi et al. 2023). In particular, the latter possibility is interesting since the low-level regularities present in images do not represent a confound for the categorization of sounds, and vice versa; however, this task is likely more difficult than the classification of unimodal percepts, since it may require the involvement of higher-order trans-modal cortical regions, potentially implicating the consolidation of conscious perception, and thus rendering the RSVP paradigm suboptimal for this purpose (Del Cul, Baillet et al. 2007).…”
Section: Discussion
Confidence: 99%