Abstract words are typically more difficult to identify than concrete words in lexical-decision, word-naming, and recall tasks. This behavioral advantage, known as the concreteness effect, is often considered evidence for embodied semantics, which emphasizes the role of sensorimotor experience in the comprehension of word meaning. On this view, online sensorimotor simulations triggered by concrete words, but not by abstract words, facilitate access to word meaning and speed up word identification. To test whether perceptual simulation is the driving force underlying the concreteness effect, we compared data from early-blind and sighted individuals performing an auditory lexical-decision task. Participants were presented with property words referring to abstract (e.g., "logic"), concrete multimodal (e.g., "spherical"), and concrete unimodal visual concepts (e.g., "blue"). According to the embodied account, the processing advantage for concrete unimodal visual words should disappear in the early blind, because they cannot rely on visual experience and simulation during semantic processing (i.e., purely visual words should be abstract for early-blind people). Contrary to this prediction, we found that both sighted and blind individuals were faster when processing multimodal and unimodal visual words than when processing abstract words. This result suggests that the concreteness effect does not depend on perceptual simulations but may instead be driven by modality-independent properties of word meaning.
How perceptual information is encoded into language and conceptual knowledge is a debated topic in cognitive (neuro)science. We present modality exclusivity norms for 643 Italian property words referring to all five perceptual modalities, plus a set of abstract words. Overall, words were rated as most strongly connected to the visual modality and least connected to the olfactory and gustatory modalities. We found that words associated with visual and auditory experience were more unimodal than words associated with the other sensory modalities. A principal components analysis highlighted a strong coupling between gustatory and olfactory information in word meaning, and a tendency for words referring to tactile experience to also include information from the visual dimension. Abstract words were found to encode only marginal perceptual information, mostly from visual and auditory experience. The modality norms were augmented with corpus-based (e.g., Zipf Frequency, Orthographic Levenshtein Distance 20) and ratings-based psycholinguistic variables (Age of Acquisition, Familiarity, Contextual Availability). Split-half correlations performed for each experimental variable and comparisons with similar databases confirmed that our norms are highly reliable. This database thus provides an important new tool for investigating the interplay between language, perception, and cognition.
Perceptual systems rely heavily on prior knowledge and predictions to make sense of the environment. Predictions can originate from multiple sources of information, including contextual short-term priors, based on isolated temporal situations, and context-independent long-term priors, arising from extended exposure to statistical regularities. While short-term predictions have been well documented, long-term predictions have received limited support, especially in the auditory domain. To address this, we recorded magnetoencephalography data from native speakers of two languages with different word orders (Spanish: functor-initial versus Basque: functor-final) listening to simple sequences of binary sounds alternating in duration with occasional omissions. We hypothesized that, together with contextual transition probabilities, the auditory system uses the characteristic prosodic cues (duration) associated with the native language's word order as an internal model to generate long-term predictions about incoming non-linguistic sounds. Consistent with our hypothesis, we found that the amplitude of the mismatch negativity elicited by sound omissions varied orthogonally depending on the speaker's linguistic background and was most pronounced in the left auditory cortex. Importantly, listening to binary sounds alternating in pitch instead of duration did not yield group differences, confirming that the above results were driven by the hypothesized long-term duration prior. These findings show that experience with a given language can shape a fundamental aspect of human perception (the neural processing of rhythmic sounds) and provide direct evidence for a long-term predictive coding system in the auditory cortex that uses auditory schemes learned over a lifetime to process incoming sound sequences.