Previous studies have shown that perceptual segregation increases after listening to longer tone sequences, an effect known as buildup. More recently, an effect of prior frequency separation (Δf) has been discovered: presenting tone sequences with a small Δf biases following sequences with an intermediate Δf to be segregated into two separate streams, whereas presenting context sequences with a large Δf biases following sequences to be integrated into one stream. Here we investigated how attention and task demands influenced these effects of prior stimuli by having participants perform one of three tasks during the context: making streaming judgments on the tone sequences, detecting amplitude modulation in the tones, or performing a visual task while ignoring the tones. Results from two experiments showed that although the effect of prior Δf was present across all conditions, it was reduced whenever streaming judgments were not made during the context. Experiment 2 showed that streaming was reduced during the beginning of a test sequence only when participants performed the visual task during the context. These experiments suggest that task-based and stimulus-based attention differentially affect distinct influences of prior stimuli, and are consistent with contributions from distinct levels of processing that affect auditory segregation.
Public Significance Statement: This study shows how the perception of sound is influenced by attention and prior experience. The human brain can organize identical sounds into different percepts depending on what sounds were previously heard. Although this happens to a greater degree when the previously heard sounds are attended, it can still occur with decreased levels of attention.
In the presence of a continually changing sensory environment, maintaining stable but flexible awareness is paramount and requires continual organization of information. Determining which stimulus features belong together and which are separate is therefore one of the primary tasks of the sensory systems. It is unknown whether a global or a sensory-specific mechanism regulates the final perceptual outcome of this streaming process. To test the extent of modality independence in perceptual control, an auditory streaming experiment and a visual moving-plaid experiment were performed. Both were designed to evoke alternation between an integrated and a segregated percept. In both experiments, transient auditory and visual distractor stimuli were presented in separate blocks, such that the distractors did not overlap in frequency or space with the streaming or plaid stimuli, respectively, thus preventing peripheral interference. When a distractor was presented in the opposite modality from the bistable stimulus (visual distractors during auditory streaming, or auditory distractors during visual streaming), the probability of percept switching was not significantly different from when no distractor was presented. Conversely, significant differences in switch probability were observed following within-modality distractors, but only when the pre-distractor percept was segregated. Given the modality specificity of this distractor-induced resetting, the results suggest that conscious perception is at least partially controlled by modality-specific processing. The fact that the distractors did not overlap peripherally with the bistable stimuli indicates that the perceptual reset arises from interference at a locus where stimuli of different frequencies and spatial locations are integrated.
The present study sought to test whether perceptual segregation of concurrently played sounds is impaired in schizophrenia (SZ), whether impairment in sound segregation predicts difficulties with a real-world speech-in-noise task, and whether auditory-specific or general cognitive processing accounts for sound segregation problems. Participants with SZ and healthy controls (HCs) performed a mistuned harmonic segregation task during recording of event-related potentials (ERPs). Participants also performed a brief speech-in-noise task. Participants with SZ showed deficits in both the mistuned harmonic task and the speech-in-noise task compared with HCs. Participants with SZ showed no deficit in the ERP component related to mistuned harmonic segregation at around 150 ms (the object-related negativity, or ORN), but instead showed a deficit in processing at around 400 ms (the P4 response). However, regression analyses showed that indices of education level and general cognitive function were the best predictors of sound segregation difficulties, suggesting that the causes of concurrent sound segregation problems in SZ are not auditory specific.