How variable is the functionally defined structure of early visual areas in human cortex, and how much of that variability is shared between twins? Here we quantify individual differences in the best-understood functionally defined regions of cortex: V1, V2, and V3. The Human Connectome Project 7T Retinotopy Dataset includes retinotopic measurements from 181 subjects, most of whom are twins. We trained four "anatomists" to manually define V1-V3 using retinotopic features. These definitions were more accurate than automated anatomical templates and showed that the surface areas of these maps varied more than three-fold across individuals. This three-fold variation was little changed when visual area size was normalized by the surface area of the entire cerebral cortex. In addition to varying in size, visual areas varied in how they sample the visual field: the cortical magnification function differed substantially among individuals, with the relative amount of cortex devoted to central vision varying by more than a factor of 2. To complement the variability analysis, we examined the similarity of visual area size and structure across twins. Although the twin samples were too small for precise heritability estimates (50 monozygotic pairs, 34 dizygotic pairs), they nonetheless revealed high correlations, consistent with strong effects of shared genes and environment on visual area size. In V1, the intraclass correlations of surface area between twin pairs were 84% and 68% for monozygotic and dizygotic pairs, respectively; the correlations were also high for V2 (81%, 73%) and V3 (75%, 43%). The trend toward higher monozygotic than dizygotic size correlations, together with greater similarity in map properties among monozygotic twins, suggests that visual area size and topography are partly genetically determined. Collectively, these results provide the most comprehensive account of individual variability in visual area structure to date and a robust population benchmark against which new individuals, as well as developmental and clinical populations, can be compared.
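The twin-pair correlations quoted above are intraclass correlations (ICCs), which for pairs reduce to a simple one-way random-effects ANOVA. Below is a minimal sketch in Python of that computation, using simulated, hypothetical surface-area values rather than the actual HCP data; the closing Falconer estimate, h² ≈ 2(r_MZ − r_DZ), is a textbook approximation and not necessarily the analysis used in the study.

import numpy as np

def twin_icc(x1, x2):
    """One-way random-effects ICC for twin pairs (k = 2 members per pair).

    x1, x2: one measurement (e.g., V1 surface area in mm^2) per twin
    in each pair. Returns (MSB - MSW) / (MSB + MSW).
    """
    x = np.column_stack([x1, x2]).astype(float)
    n = x.shape[0]                       # number of pairs
    pair_means = x.mean(axis=1)
    grand_mean = x.mean()
    # Between-pair and within-pair mean squares for k = 2.
    msb = 2.0 * np.sum((pair_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((x - pair_means[:, None]) ** 2) / n
    return (msb - msw) / (msb + msw)

# Hypothetical example: simulated V1 surface areas for 50 MZ and 34 DZ pairs,
# with MZ twins sharing more of their variance than DZ twins.
rng = np.random.default_rng(0)
shared = rng.normal(2300, 500, 50)
r_mz = twin_icc(shared + rng.normal(0, 150, 50),
                shared + rng.normal(0, 150, 50))
shared = rng.normal(2300, 500, 34)
r_dz = twin_icc(shared + rng.normal(0, 350, 34),
                shared + rng.normal(0, 350, 34))

# Falconer's approximation: heritability ~ twice the MZ-DZ correlation gap.
h2 = 2.0 * (r_mz - r_dz)
print(f"ICC(MZ)={r_mz:.2f}  ICC(DZ)={r_dz:.2f}  h^2~{h2:.2f}")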
Motor imagery (MI)-based brain-computer interfaces (BCIs) are an important BCI paradigm and require powerful classifiers. Recent developments in deep learning have prompted considerable interest in its use for MI classification and have produced multiple models. Identifying the best-performing models among them would benefit the design of future BCI systems and classifiers. However, it is difficult to compare model performance directly across the original publications, because the datasets used to test the models differ from one another, are too small, or are not publicly available. In this work, we selected five recently proposed deep MI-EEG classification models: EEGNet, Shallow ConvNet, Deep ConvNet, MB3D, and ParaAtt, and tested them on two large, publicly available databases with 42 and 62 human subjects. Our results show that the models performed similarly on one dataset, while EEGNet performed best on the second at a relatively small training cost, using the hyperparameters that we evaluated.
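A comparison like the one described amounts to training each classifier on each subject's data and aggregating held-out accuracy. The sketch below shows such a within-subject evaluation loop; the `subjects` and `model_factories` inputs are hypothetical stand-ins (the real architectures, preprocessing, and training settings are those of the cited publications).

import numpy as np
from sklearn.model_selection import train_test_split

def evaluate_models(subjects, model_factories, test_size=0.2, seed=0):
    """Within-subject evaluation: train and test each model per subject.

    subjects: dict mapping subject ID -> (X, y), where X holds epoched
    EEG trials (n_trials, n_channels, n_samples) and y the MI labels.
    model_factories: dict mapping model name -> zero-argument callable
    returning a fresh classifier with fit/score methods (hypothetical
    wrappers for EEGNet, Shallow/Deep ConvNet, MB3D, ParaAtt).
    """
    scores = {name: [] for name in model_factories}
    for sid, (X, y) in subjects.items():
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y, random_state=seed)
        for name, make_model in model_factories.items():
            model = make_model()          # fresh weights for every subject
            model.fit(X_tr, y_tr)
            scores[name].append(model.score(X_te, y_te))
    # Mean decoding accuracy across subjects, per model.
    return {name: float(np.mean(accs)) for name, accs in scores.items()}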
Objective: EEG-based brain-computer interfaces (BCIs) are non-invasive approaches for replacing or restoring motor function in impaired patients and for direct brain-to-device communication in the general population. Motor imagery (MI) is one of the most widely used BCI paradigms, but its performance varies across individuals, and some users require substantial training to develop control. In this study, we propose integrating an MI paradigm simultaneously with a recently proposed Overt Spatial Attention (OSA) paradigm to accomplish BCI control. Methods: We evaluated 25 human subjects' ability to control a virtual cursor in one and two dimensions over five BCI sessions. The subjects used five BCI paradigms: MI alone, OSA alone, MI and OSA simultaneously toward the same target (MI+OSA), and MI for one axis while OSA controlled the other (MI/OSA and OSA/MI). Results: MI+OSA reached the highest average online performance on 2D tasks at 49% percent valid correct (PVC), statistically outperforming MI alone (42%); its performance was higher than OSA alone (45%), but the difference was not statistically significant. MI+OSA performed on par with each subject's best individual method between MI alone and OSA alone (50%), and 9 subjects reached their highest average BCI performance using MI+OSA. Conclusion: Integrating MI and OSA improves performance over MI alone at the group level and is the best BCI paradigm option for some subjects. Significance: This work proposes a new BCI control paradigm that integrates two existing paradigms and demonstrates its value by showing that it can improve users' BCI performance.
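PVC, the performance metric quoted in the results, is the fraction of correct target hits among valid trials, i.e., trials that ended at a target rather than timing out. A minimal sketch, assuming per-trial outcome labels (the study's exact trial-exclusion rules are defined in the paper):

def percent_valid_correct(outcomes):
    """Percent Valid Correct (PVC): 100 * hits / (hits + misses).

    outcomes: iterable of per-trial results, each one of
    'hit' (correct target reached), 'miss' (wrong target reached),
    or 'abort' (timeout; excluded as invalid).
    """
    hits = sum(o == "hit" for o in outcomes)
    misses = sum(o == "miss" for o in outcomes)
    if hits + misses == 0:
        return float("nan")           # no valid trials
    return 100.0 * hits / (hits + misses)

# Example: 10 trials, of which 8 are valid (non-abort).
trials = ["hit"] * 5 + ["miss"] * 3 + ["abort"] * 2
print(percent_valid_correct(trials))  # 62.5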