Componential theories of lexical semantics assume that concepts can be represented by sets of features or attributes that are in some sense primitive or basic components of meaning. The binary features used in classical category and prototype theories are problematic in that these features are themselves complex concepts, leaving open the question of what constitutes a primitive feature. The availability of brain imaging tools has enhanced interest in how concepts are represented in brains, and accumulating evidence supports the claim that these representations are at least partly "embodied" in the perception, action, and other modal neural systems through which concepts are experienced. In this study we explore the possibility of devising a componential model of semantic representation based entirely on such functional divisions in the human brain. We propose a basic set of approximately 65 experiential attributes based on neurobiological considerations, comprising sensory, motor, spatial, temporal, affective, social, and cognitive experiences. We provide normative data on the salience of each attribute for a large set of English nouns, verbs, and adjectives, and show how these attribute vectors distinguish a priori conceptual categories and capture semantic similarity. Robust quantitative differences between concrete object categories were observed across a large number of attribute dimensions. A within- versus between-category similarity metric showed much greater separation between categories than did representations derived from distributional (latent semantic) analysis of text. Cluster analyses were used to explore the similarity structure in the data independent of a priori labels, revealing several novel category distinctions.
We discuss how such a representation might deal with various longstanding problems in semantic theory, such as feature selection and weighting, representation of abstract concepts, effects of context on semantic retrieval, and conceptual combination. In contrast to componential models based on verbal features, the proposed representation systematically relates semantic content to large-scale brain networks and biologically plausible accounts of concept acquisition.
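The within- versus between-category comparison described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the attribute vectors here are hypothetical 5-dimensional salience ratings (the paper proposes roughly 65 experiential dimensions), and cosine similarity is assumed as the similarity measure.

```python
import numpy as np

# Hypothetical experiential-attribute vectors (illustrative salience
# ratings only; the proposed representation uses ~65 dimensions).
concepts = {
    "dog":    np.array([0.9, 0.8, 0.7, 0.2, 0.1]),
    "cat":    np.array([0.8, 0.9, 0.6, 0.1, 0.2]),
    "hammer": np.array([0.2, 0.1, 0.9, 0.8, 0.7]),
    "wrench": np.array([0.1, 0.2, 0.8, 0.9, 0.8]),
}
categories = {"dog": "animal", "cat": "animal",
              "hammer": "tool", "wrench": "tool"}

def cosine(u, v):
    """Cosine similarity between two attribute vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Collect pairwise similarities, split by shared vs differing category.
within, between = [], []
names = list(concepts)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        sim = cosine(concepts[a], concepts[b])
        (within if categories[a] == categories[b] else between).append(sim)

# A positive separation score means members of the same category are,
# on average, more similar to each other than to other categories.
separation = float(np.mean(within) - np.mean(between))
print(f"within={np.mean(within):.3f} between={np.mean(between):.3f} "
      f"separation={separation:.3f}")
```

With these toy vectors, same-category pairs (dog/cat, hammer/wrench) score higher than cross-category pairs, yielding a positive separation; the paper's finding is that attribute vectors produce larger separations of this kind than text-derived (latent semantic) vectors do.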
It has long been documented that emotional and sensory events elicit a pupillary dilation. Is the pupil response a reliable marker of a visual detection event while viewing complex imagery? In two experiments where viewers were asked to report the presence of a visual target during rapid serial visual presentation (RSVP), pupil dilation was significantly associated with target detection. The amplitude of the dilation depended on the frequency of targets and the time of target presentation relative to the start of the trial. Larger dilations were associated with trials having fewer targets and with targets viewed earlier in the run. We found that dilation was influenced by, but not dependent on, the requirement of a button press. Interestingly, we also found that dilation occurred when viewers fixated a target but did not report seeing it. We will briefly discuss the role of noradrenaline in mediating these pupil behaviors.
How does the saccadic movement system select a target when visual, auditory, and planned movement commands differ? How do retinal, head-centered, and motor error coordinates interact during the selection process? Recent data on the superior colliculus (SC) reveal a spreading wave of activation across buildup cells, whose peak activity covaries with the current gaze error. In contrast, the locus of peak activity remains constant in burst cells, whereas their activity level decays with residual gaze error. A neural model answers these questions and simulates burst and buildup responses in visual, overlap, memory, and gap tasks. The model also simulates data on multimodal enhancement and suppression of activity in the deeper SC layers and suggests a functional role for NMDA receptors in this region. In particular, the model suggests how auditory and planned saccadic target positions become aligned and compete with visually reactive target positions to select a movement command. For this to occur, a transformation between auditory and planned head-centered representations and a retinotopic target representation is learned. Burst cells in the model generate teaching signals to the spreading wave layer. Spreading waves are produced by corollary discharges that render planned and visually reactive targets dimensionally consistent and enable them to compete for attention to generate a movement command in motor error coordinates. The attentional selection process also helps to stabilize the map-learning process. The model functionally interprets cells in the superior colliculus, frontal eye field, parietal cortex, mesencephalic reticular formation, paramedian pontine reticular formation, and substantia nigra pars reticulata.