In natural situations, the auditory system is confronted with overlapping acoustic input from several simultaneously active sources at any given time. An important aspect of auditory processing is to sort the overlapping inputs into groups, each originating from a single source, a process called auditory scene analysis (ASA; Bregman, 1990). ASA facilitates our ability to choose one stream of information within a background of sounds emanating from many different sources, such as when listening to a stream of speech in a crowded room. The cocktail party provides an example of a situation in which it is common to attend to one stream of speech (one conversation in the room: the foreground) while ignoring the rest of the auditory information (the other conversations and noise in the room: the background).

The present study focuses on a central scientific question in ASA theory: Which of the following views is more accurate?

1. Streams are structured without focused attention, by preattentive processes operating over the whole set of sounds that are present. If this is true, more than one stream can be formed at the same time, even though attention may choose one or another for further processing.

2. Attention is required to form a stream. Under this hypothesis, only a single stream is ever present as a structured perceptual entity: the one created by attention out of a subset of the sounds that are present. The unattended tones are simply an unstructured background. As attention shifts, different streams are formed, each enduring only as long as attention remains focused on it.

To address this question, we tested the extent to which unattended sounds are processed when one auditory stream is selected from a set of potential ones. Specifically, by using tones spanning three frequency ranges, we created a simple model of the cocktail party situation in which one auditory stream is selected from three potential streams. In three experiments, we addressed the effects of attention on the processing of the unattended sounds. We recorded event-related brain potentials (ERPs) to determine how unattended, task-irrelevant sounds were stored in auditory memory (i.e., as one integrated stream or as two distinct streams). Subjects were instructed either to ignore all the sounds and attend to a visual task or to selectively attend to a subset of the sounds and perform a task with them (Experiments 1 and 2). A third (behavioral) experiment was conducted to test whether the global pattern violations used in Experiments 1 and 2 were perceptible when the sounds were segregated. We found that the mismatch negativity ERP component, an index of auditory change detection, was evoked by infrequent pattern violations occurring in the unattended sounds when all the sounds were ignored, but not when attention was focused on a subset of the sounds. The results demonstrate that multiple unattended sound streams can segregate by frequency range, but that selectively attending to a subset of the sounds can modify the extent to which the unattended sounds are processed.