Statistical learning is typically considered a domain-general mechanism by which cognitive systems discover the underlying distributional properties of the input. However, recent studies examining whether there are commonalities in the learning of distributional information across domains or modalities consistently reveal modality and stimulus specificity. An important question, therefore, is how and why a hypothesized domain-general learning mechanism systematically produces such effects. We offer a theoretical framework according to which statistical learning is not a unitary mechanism but a set of domain-general computational principles that operate in different modalities and are therefore subject to the specific constraints characteristic of their respective brain regions. This framework offers testable predictions, and we discuss its computational and neurobiological plausibility.
Statistical learning (SL) is involved in a wide range of basic and higher-order cognitive functions and is taken to be an important building block of virtually all current theories of information processing. Over the last two decades, a large and continuously growing research community has therefore focused on the ability to extract embedded patterns of regularity in time and space. This work has mostly focused on transitional probabilities, in vision and audition, in newborns, children, and adults, and in typically developing and clinical populations. Here we appraise this research approach, critically assessing what it has achieved, what it has not, and why this is so. We then turn to present-day SL research to examine whether it has adopted novel perspectives. These discussions lead us to outline possible blueprints for a novel research agenda.
A core challenge in the semantic ambiguity literature is understanding why the number of, and relatedness among, a word's interpretations are associated with different effects in different tasks. An influential account (Hino, Pexman, & Lupker [2006. Ambiguity and relatedness effects in semantic tasks: Are they due to semantic coding? Journal of Memory and Language, 55(2), 247-273]) attributes these effects to qualitative differences in the response system. We propose instead that these effects reflect changes over time in settling dynamics within semantics. We evaluated the accounts using a single task, lexical decision, thus holding the overall configuration of the response system constant, and manipulated task difficulty (and the presumed amount of semantic processing) by varying nonword wordlikeness and stimulus contrast. We observed that as latencies increased, the effects generally (but not universally) shifted from those observed in standard lexical decision to those typically observed in different tasks with longer latencies. These results highlight the importance of settling dynamics in explaining many ambiguity effects, and of integrating theories of semantic dynamics and response systems.