Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners' ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured stream in one modality (audition or vision) accompanied by different types of cues in a second modality (vision or audition). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning enhanced or reduced according to the type of cue provided in the second modality.
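To make the notion of tracking transitional probabilities concrete, the sketch below (not taken from the study; the stream, element labels, and triplet structure are hypothetical) computes the probability of each element given the preceding one from a familiarization sequence. In such designs, transitions within a structured unit are typically high (near 1.0), while transitions across unit boundaries are lower.

```python
# Minimal sketch: estimating transitional probabilities P(next | current)
# from a familiarization stream. Stream and labels are hypothetical.
from collections import Counter, defaultdict

def transitional_probabilities(stream):
    """Return P(next | current) for every adjacent pair in the stream."""
    pair_counts = defaultdict(Counter)
    for current, nxt in zip(stream, stream[1:]):
        pair_counts[current][nxt] += 1
    probs = {}
    for current, counts in pair_counts.items():
        total = sum(counts.values())
        probs[current] = {nxt: n / total for nxt, n in counts.items()}
    return probs

# Hypothetical stream built from two "words" (ABC, DEF) in varying order:
stream = list("ABCDEFABCABCDEFDEFABC")
tps = transitional_probabilities(stream)
print(tps["A"])  # within-word transition A->B is 1.0
print(tps["C"])  # across-word transitions from C are lower and split
```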
Learning the structure of the environment (e.g., what usually follows what) enables animals to behave effectively and to prepare for future events. Unintentional learning can efficiently produce such knowledge, as has been demonstrated with the Artificial Grammar Learning (AGL) paradigm, among others. It has been argued that selective attention is a necessary and sufficient condition for visual implicit learning. Experiment 1 shows that spatial attention is not sufficient for implicit learning: learning does not occur if the stimuli instantiating the structure are task-irrelevant. In a second experiment, we demonstrate that this holds even with an abundance of available attentional resources. Together, these results challenge the current view of the relations between attention, resources, and implicit learning.
A major issue in visual scene recognition involves the extraction of recurring chunks from a sequence of complex scenes. Previous studies have suggested that this kind of learning is accomplished according to Bayesian principles that constrain the types of extracted chunks. Here we show that perceptual grouping cues are also incorporated into this Bayesian model, providing additional evidence for the possible span of chunks. Experiment 1 replicates previous results showing that observers can learn three-element chunks without learning the smaller, two-element chunks embedded within them. Experiment 2 shows that the very same embedded chunks are learned if they are grouped by perceptual cues, suggesting that perceptual grouping cues play an important role in chunk extraction from complex scenes.
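As an informal illustration (not the authors' Bayesian model; the scene composition and shape labels are hypothetical), the sketch below shows why an embedded pair need not be represented as a separate chunk: when the pair only ever appears inside a triplet, its co-occurrence statistics are fully accounted for by the triplet alone.

```python
# Minimal sketch: when a pair of shapes only ever appears as part of a
# three-element chunk, the pair's co-occurrence count equals the triplet's
# count, so it carries no independent evidence for a separate pair chunk.
# Scenes and shape labels below are hypothetical.
from itertools import combinations
from collections import Counter

# Each scene contains the hypothetical triplet {A, B, C} plus one distractor.
scenes = [
    {"A", "B", "C", "X"},
    {"A", "B", "C", "Y"},
    {"A", "B", "C", "Z"},
]

pair_counts = Counter()
for scene in scenes:
    for pair in combinations(sorted(scene), 2):
        pair_counts[pair] += 1

triplet_count = sum(1 for scene in scenes if {"A", "B", "C"} <= scene)

# The embedded pair (A, B) never occurs outside the triplet, so its count
# matches the triplet's count exactly (3 == 3).
print(pair_counts[("A", "B")], triplet_count)
```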