Investigations into the neural basis of reading have shed light on the cortical locus and the functional role of visual-orthographic processing. Yet, the fine-grained structure of the neural representations subserving reading remains to be clarified. Here, we capitalize on the spatiotemporal structure of electroencephalography (EEG) data to examine whether and how EEG patterns can serve to decode and reconstruct the internal representation of visually presented words in healthy adults. Our results show that word classification and image reconstruction were accurate well above chance, and that their temporal profile exhibited an early onset, soon after 100 ms, with a peak around 170 ms. Further, reconstruction results were well explained by a combination of visual-orthographic word properties. Last, systematic individual differences in orthographic representations were detected across participants. Collectively, our results establish the feasibility of EEG-based word decoding and image reconstruction. More generally, they help to elucidate the specific features, dynamics, and neurocomputational principles underlying word recognition.
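To make the pattern-analysis approach above concrete, the following is a minimal sketch of time-resolved EEG decoding: two synthetic "word" conditions are classified with a leave-one-out nearest-centroid decoder applied to sliding time windows, so that accuracy rises only after the (simulated) class-specific signal appears. All data, dimensions, and the classifier choice are hypothetical illustrations, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_chan, n_time = 40, 16, 50  # toy dimensions (hypothetical)

# Synthetic EEG epochs (condition x trial x channel x time):
# two "words" whose spatial patterns diverge only after sample 15
X = rng.normal(size=(2, n_trials, n_chan, n_time))
signal = rng.normal(size=n_chan)
X[1, :, :, 15:] += signal[:, None]

def decode_window(X, t, width=5):
    """Leave-one-out nearest-centroid decoding on one time window."""
    win = X[:, :, :, t:t + width].reshape(2, n_trials, -1)
    correct = 0
    for c in range(2):
        for i in range(n_trials):
            test = win[c, i]
            # Centroid of the correct class, excluding the test trial
            own = np.delete(win[c], i, axis=0).mean(axis=0)
            other = win[1 - c].mean(axis=0)
            correct += np.linalg.norm(test - own) < np.linalg.norm(test - other)
    return correct / (2 * n_trials)

early = decode_window(X, 0)   # before the simulated signal onset: near chance
late = decode_window(X, 20)   # after onset: well above chance
```

Repeating `decode_window` at every time point yields the kind of decoding time course whose onset and peak latencies the abstract reports.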
Dan Nemrodov and Shouyu Ling contributed equally to this work. The University of Toronto has filed a U.S. patent application that includes portions of the method for feature selection described here. Adrian Nestor and Dan Nemrodov are co-inventors on this patent.

Abstract

Recent investigations have focused on the spatiotemporal dynamics of visual recognition by appealing to pattern analysis of EEG signals. While this work has established the ability to decode identity-level information (such as the identity of a face or of a word) from neural signals, much less is known about the precise nature of the signals that support such feats, their robustness across visual categories, or their consistency across human participants. Here, we address these questions through the use of EEG-based decoding and multivariate feature selection as applied to three visual categories: words, faces, and face ensembles (i.e., crowds of faces). Specifically, we use recursive feature elimination to estimate the diagnosticity of time- and frequency-based EEG features for identity-level decoding across three datasets targeting each of the three categories. We then relate feature diagnosticity across categories and across participants while also aiming to increase decoding performance and reliability. Our investigation shows that word and face processing are similar in their reliance on spatiotemporal information provided by occipitotemporal channels. In contrast, ensemble processing appears to also rely on central channels and exhibits a profile similar to word processing in the frequency domain. Further, we find that feature diagnosticity is stable across participants and is even capable of supporting cross-participant feature selection, as demonstrated by systematic boosts in decoding accuracy and feature reduction.
Thus, our investigation sheds new light on the nature and the structure of the information underlying identity-level visual processing as well as on its generality across categories and participants.
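The recursive feature elimination (RFE) procedure named above can be sketched as follows: a simple linear decoder is fit repeatedly, and on each pass the features with the smallest absolute weights are discarded, yielding a diagnosticity ranking. This is a generic numpy illustration of the RFE idea on synthetic data; the feature sizes, the least-squares decoder, and the elimination step are assumptions, not the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_feat, n_informative = 100, 60, 8  # toy sizes (hypothetical)

# Synthetic "EEG features": only the first few carry class information
y = rng.integers(0, 2, n_train)
X = rng.normal(size=(n_train, n_feat))
X[:, :n_informative] += (2 * y[:, None] - 1)

def rfe_ranking(X, y, step=5):
    """Recursive feature elimination with a least-squares linear decoder.
    Returns feature indices ordered from most to least diagnostic
    (i.e., last eliminated first)."""
    remaining = list(range(X.shape[1]))
    order = []
    while remaining:
        Xs = X[:, remaining]
        # Least-squares weights as a simple linear decoder of the +/-1 labels
        w, *_ = np.linalg.lstsq(Xs, 2 * y - 1, rcond=None)
        # Drop the `step` features with the smallest absolute weights
        drop = np.argsort(np.abs(w))[:min(step, len(remaining))]
        for d in sorted(drop, reverse=True):
            order.append(remaining.pop(d))
    order.reverse()
    return order

ranking = rfe_ranking(X, y)
# With this toy signal, the informative features should dominate the top ranks
hits = len(set(ranking[:n_informative]) & set(range(n_informative)))
```

In the cross-participant setting described in the abstract, the ranking would be estimated on one group of participants and the top-ranked features reused to decode data from a held-out participant.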