Introduction: The notion of a single localized store of word representations has become increasingly implausible as evidence has accumulated for widely distributed neural representations of wordform grounded in motor, perceptual, and conceptual processes. Here, we combine machine learning methods and neurobiological frameworks to propose a computational model of the brain systems potentially responsible for wordform representation. We tested the hypothesis that the functional specialization of word representation in the brain is driven partly by computational optimization, a hypothesis that directly addresses the distinct problems of mapping sound to articulation versus mapping sound to meaning. Results: We found that artificial neural networks trained on the mapping between sound and articulation performed poorly on the mapping between sound and meaning, and vice versa. Moreover, compared with the single-task networks, a network trained on both tasks simultaneously failed to discover the features required for efficient mapping between sound and higher-level cognitive states. Furthermore, these networks developed internal representations reflecting specialized, task-optimized functions without being explicitly trained to do so. Discussion: Together, these findings demonstrate that task-directed representations lead to more focused responses and better performance in a machine or algorithm and, hypothetically, in the brain. We therefore propose that the functional specialization of word representation mirrors a computational optimization strategy given the nature of the tasks the human brain faces.
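As a rough illustration of the cross-task comparison described above, the sketch below trains two small networks on synthetic data, one on a structured sound-to-articulation mapping and one on an arbitrary sound-to-meaning mapping, and then scores each network on both targets. The data shapes, the linear articulation target, and the use of scikit-learn MLPs are assumptions for illustration only; they are not the architectures or materials of the study.

```python
# A minimal sketch of the cross-task comparison, assuming synthetic data and
# small scikit-learn MLPs; the study's actual architectures and stimuli differ.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_words = 500
sound = rng.normal(size=(n_words, 40))             # acoustic features (assumed)
articulation = sound @ rng.normal(size=(40, 20))   # structured target: derivable from sound
meaning = rng.normal(size=(n_words, 20))           # arbitrary target: unrelated to sound

def fit(inputs, targets):
    """Train a small multilayer perceptron on one mapping."""
    return MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                        random_state=0).fit(inputs, targets)

net_artic = fit(sound, articulation)   # sound -> articulation network
net_sem = fit(sound, meaning)          # sound -> meaning network

# Each network is scored (R^2) on its own target and on the other task's target;
# transfer to the other mapping is expected to be poor.
print("articulation net, own task:  ", net_artic.score(sound, articulation))
print("articulation net, cross task:", net_artic.score(sound, meaning))
print("meaning net, own task:       ", net_sem.score(sound, meaning))
print("meaning net, cross task:     ", net_sem.score(sound, articulation))
```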
The Subregular Hypothesis (Heinz 2010) states that only patterns with specific subregular computational properties are phonologically learnable. Lai (2015) provided the initial laboratory support for this hypothesis. The current study aimed to replicate and extend those findings using a different experimental paradigm (an oddball task) and a different measure of learning (the sensitivity index, d′). Specifically, we compared the learnability of two phonotactic patterns that differ computationally and typologically: a simple rule ("First-Last Assimilation", FL) that requires agreement between the first and last segment of a word (predicted to be unlearnable), and a harmony rule ("Sibilant Harmony", SH) that requires agreement of features throughout the word (predicted to be learnable). The FL rule was tested under two experimental conditions: one in which the training data were also consistent with the SH rule, and one in which the training data were consistent only with the FL rule. As in Lai (2015), we found that participants were significantly more sensitive to violations of the SH rule than to violations of the FL rule. However, unlike Lai (2015), we also found that participants showed some residual sensitivity to the FL rule; this sensitivity interacted with rule type such that participants remained significantly more sensitive to SH violations. We conclude that participants in Artificial Grammar Learning experiments show evidence of Universal Grammar constraining their learning, but that patterns predicted to be unlearnable as a linguistic system can still be learned to some degree through non-linguistic learning mechanisms.
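For reference, the sensitivity index mentioned above can be computed from hit and false-alarm counts as d′ = z(hit rate) − z(false-alarm rate). The sketch below is a minimal implementation; the counts and the log-linear correction are illustrative assumptions, not values or procedures from the study.

```python
# Minimal d' (sensitivity index) computation from response counts.
# The counts below are hypothetical; a log-linear correction is assumed so that
# hit or false-alarm rates of 0 or 1 do not produce infinite z-scores.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one participant detecting rule violations:
print(d_prime(hits=32, misses=8, false_alarms=10, correct_rejections=30))
```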
Processes governing the creation, perception and production of spoken words are sensitive to the patterns of speech sounds in the language user’s lexicon. Generative linguistic theory suggests that listeners infer constraints on possible sound patterning from the lexicon and apply these constraints to all aspects of word use. In contrast, emergentist accounts suggest that these phonotactic constraints are a product of interactive associative mapping with items in the lexicon. To determine the degree to which phonotactic constraints are lexically mediated, we observed the effects of learning new words that violate English phonotactic constraints (e.g., srigin) on phonotactic perceptual repair processes in nonword consonant-consonant-vowel (CCV) stimuli (e.g., /sre/). Subjects who learned such words were less likely to “repair” illegal onset clusters (/sr/) and report them as legal ones (/ʃr/). Effective connectivity analyses of MRI-constrained reconstructions of simultaneously collected magnetoencephalography (MEG) and EEG data showed that these behavioral shifts were accompanied by changes in the strength of influences of lexical areas on acoustic-phonetic areas. These results strengthen the interpretation of previous results suggesting that phonotactic constraints on perception are produced by top-down lexical influences on speech processing.
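The effective connectivity analysis referred to above asks whether activity in one source region carries directed influence on activity in another. As a loose stand-in for the study's MEG/EEG source-space method, the sketch below runs a pairwise Granger causality test on simulated time series; the simulated signals, the lag choice, and the use of statsmodels are assumptions for illustration only.

```python
# A hedged sketch of a directed-influence (effective connectivity) test between
# two simulated source time series, using pairwise Granger causality as a
# stand-in for the study's MEG/EEG analysis. All signals and lags are assumed.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 500
lexical = rng.normal(size=n)        # stand-in for lexical-area activity
acoustic = np.zeros(n)              # stand-in for acoustic-phonetic-area activity
for t in range(2, n):
    # acoustic activity partly driven by earlier lexical activity (top-down influence)
    acoustic[t] = 0.4 * lexical[t - 2] + rng.normal(scale=0.5)

# Test whether the second column (lexical) Granger-causes the first (acoustic).
data = np.column_stack([acoustic, lexical])
results = grangercausalitytests(data, maxlag=3)
```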