Different experiential traces (i.e., linguistic, motor, and perceptual) likely contribute to the organization of human semantic knowledge. Here, we addressed this issue by investigating whether visual experience affects sensitivity to semantic distributional cues in natural language. We conducted an independent reanalysis of data from Bottini et al. (2022), in which early blind and sighted participants performed an auditory lexical decision task. Since previous research has shown that semantic neighborhood density – the mean distance between a target word and its closest semantic neighbors – can influence performance in lexical decision tasks, we investigated whether vision alters reliance on this semantic index. We show that early blind participants are more sensitive to semantic neighborhood density than sighted participants: the blind group responded significantly faster to words with higher semantic neighborhood density. These findings suggest that an early lack of visual experience may heighten sensitivity to the distributional history of words in natural language, deepening our understanding of the close interplay between linguistic and perceptual experience in the organization of conceptual knowledge.
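The neighborhood-density measure described above can be sketched computationally. The sketch below is illustrative only: it assumes cosine distance over word-embedding vectors and a fixed number of neighbors k, neither of which is specified in the abstract, and the toy vectors are hypothetical.

```python
import numpy as np

def neighborhood_density(vectors, target_idx, k=3):
    """Mean cosine distance from a target word to its k nearest
    semantic neighbors (smaller values = denser neighborhood).
    Illustrative sketch; embedding model and k are assumptions."""
    # Normalize rows so the dot product equals cosine similarity
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    dists = 1.0 - v @ v[target_idx]           # cosine distances to all words
    dists = np.delete(dists, target_idx)      # exclude the word itself
    nearest = np.sort(dists)[:k]              # k closest neighbors
    return nearest.mean()

# Hypothetical toy embeddings: three clustered words and one isolated word
V = np.array([[1.0, 0.0], [0.99, 0.01], [0.98, 0.02], [0.0, 1.0]])
dense = neighborhood_density(V, 0, k=2)       # word inside the cluster
sparse = neighborhood_density(V, 3, k=2)      # isolated word
```

A word embedded in a tight cluster yields a lower mean distance (denser neighborhood) than an isolated word, which is the contrast the lexical decision analysis exploits.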
Recent evidence indicates that spatial representations, such as large-scale geographical maps, can be retrieved from natural language alone through cognitively plausible distributional-semantic models based on non-spatial associative-learning mechanisms. Here, we demonstrate that analogous spatial maps can be extracted from purely linguistic data even at the medium scale. Our results show that the underground maps of five European cities can be retrieved from linguistic data, suggesting that the ability to reconstruct spatial maps from language does not strictly depend on the scale being mapped. Furthermore, we show that different spatial representations (i.e., with information encoded either as relative spatial distances or as absolute locations defined by coordinate axes) can be retrieved from natural language. These findings contribute to a growing body of research challenging the traditional view of cognitive maps as relying exclusively on specialized spatial computations, and they highlight the role of non-spatial associative-learning mechanisms in the linguistic environment in shaping spatial representations.
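One standard way to turn relative distances into a map, as in the distance-based representations mentioned above, is classical multidimensional scaling. The sketch below is not the authors' pipeline; it only illustrates, under the assumption of Euclidean pairwise distances (e.g., derived from semantic similarities between station names), how 2-D coordinates can be recovered up to rotation and reflection.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Recover coordinates (up to rotation/reflection) from a matrix
    of pairwise Euclidean distances via classical MDS.
    Illustrative sketch, not the method used in the study."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)             # eigendecomposition (ascending)
    order = np.argsort(vals)[::-1][:dims]      # keep the top `dims` eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Hypothetical layout of six "stations"; only their pairwise distances
# are passed to the solver, mimicking a distance-only representation.
rng = np.random.RandomState(0)
pts = rng.rand(6, 2)
D = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
recovered = classical_mds(D)
```

Because classical MDS is exact for noiseless Euclidean distances, the recovered configuration reproduces the original pairwise distances, even though absolute coordinates are only determined up to a rigid transformation.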