How much can be seen in a single brief exposure? This is an important problem because our normal mode of seeing greatly resembles a sequence of brief exposures. Erdmann and Dodge (1898) showed that in reading, for example, the eye assimilates information only in the brief pauses between its quick saccadic movements. The problem of what can be seen in one brief exposure, however, remains unsolved. The difficulty is that the simple expedient of instructing the observer of a single brief exposure to report what he has just seen is inadequate. When complex stimuli consisting of a number of letters are tachistoscopically presented, observers enigmatically insist that they have seen more than they can remember afterwards, that is, report afterwards. The apparently simple question "What did you see?" requires the observer to report both what he remembers and what he has forgotten.

Footnote: This paper is a condensation of a doctoral thesis (Sperling, 1959). For further details, especially on methodology, and for individual data, the reader is referred to the original thesis. It is a pleasure to acknowledge my gratitude to George A. Miller and Roger N. Shepard, whose support made this research possible, and to E. B. Newman, J. Schwartzbaum, and S. S. Stevens for their many helpful suggestions. Thanks are also due to Jerome S. Bruner for the use of his laboratory and his tachistoscope during his absence in the summer of 1957.
A model for visual recall tasks was presented in terms of visual information storage (VIS), scanning, rehearsal, and auditory information storage (AIS). It was shown first that brief visual stimuli are stored in VIS in a form similar to the sensory input. These visual “images” contain considerably more information than is transmitted later. They can be sampled by scanning for items at high rates of about 10 msec per letter. Recall is based on a verbal recoding of the stimulus (rehearsal), which is remembered in AIS. The items retained in AIS are usually rehearsed again to prevent them from decaying. The human limits in immediate-memory (reproduction) tasks are inherent in the AIS-Rehearsal loop. The main implication of the model for human factors is the importance of auditory coding in visual tasks.
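The storage-and-rehearsal loop described above lends itself to a toy simulation. The Python sketch below is a minimal illustration, not the paper's formalism: the VIS lifetime, scan rate, and AIS span are assumed values chosen only to make the behavior of the loop visible.

```python
# Minimal sketch of the VIS -> scan -> recode -> AIS loop described above.
# All numeric values (VIS lifetime, scan rate, AIS span) are illustrative
# assumptions, not parameters estimated in the paper.

SCAN_MS_PER_LETTER = 10   # approximate scan rate reported in the abstract
VIS_DURATION_MS = 250     # assumed useful lifetime of the visual image
AIS_SPAN = 5              # assumed capacity of the rehearsal loop (items)

def recall(stimulus: str) -> list[str]:
    """Return the items a hypothetical observer would report."""
    ais: list[str] = []                  # auditory (verbal) store
    elapsed = 0
    for letter in stimulus:              # scan the visual image serially
        elapsed += SCAN_MS_PER_LETTER
        if elapsed > VIS_DURATION_MS:    # image has decayed; scanning stops
            break
        if len(ais) < AIS_SPAN:          # recode the item verbally into AIS
            ais.append(letter)
    # Rehearsal re-enters each retained item, preventing decay; recall is
    # therefore limited by the AIS-rehearsal loop rather than by VIS.
    return ais

print(recall("XLWFJBOVKCZR"))   # 12-letter array -> only ~5 items reported
```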
Subjects first detected a target embedded in a stream of letters presented at the left of fixation and then, as quickly as possible, shifted their attention to a stream of numerals at the right of fixation. They attempted to report, in order, the four earliest occurring numerals after the target. Numerals appeared at rates of 4.6, 6.9, 9.2, and 13.4/s. Scaling analyses were made of (a) item scores, P_i(r), the probability of a numeral from stimulus position i appearing in response position r, r = 1, 2, 3, 4, and (b) order scores, P_i,j, the probability that a numeral from stimulus position i appeared earlier in the response than one from stimulus position j. For all subjects, targets, and numeral rates, the relative position of numerals in the response sequence showed clustering, disorder, and folding. Reported numerals tended to cluster around a stimulus position 400 ms after the target. The numerals were reported in an apparently haphazard order: at high numeral rates, inverted i,j pairs were as frequent as correct pairs. The actual order of report resulted from a mixture of correctly ordered numerals with numerals ordered in the direction opposite to their order of presentation (folding around the cluster center). These results are quantitatively described by a strength theory of order (precedence) and are efficiently predicted by a computational attention gating model (AGM). The AGM makes quantitatively correct predictions of over 500 values of P_i(r) and P_i,j in 12 conditions, with two attention and three to six detection parameters estimated for each subject. The AGM may be derived from a more general attention model that assumes (a) after detection of the target, an attention gate opens briefly (with a bell-shaped time course) to allow numerals to enter a visual short-term memory, and (b) subsequent order of report depends on both item strength (how wide the gate was open during the numeral's entry) and on order information (item strength times cumulative strength of prior numerals).

When an observer receives information from two or more distinct sources at once and is unable to process all of them, the observer may allocate processing capacity first to one source and then to another. We term such a transfer of processing capacity a shift of attention, although we do not imply that conscious awareness of the shift must occur. A classical example concerns a listener at a cocktail party who attempts to listen simultaneously to two different conversations. If the listener is unable to process both conversations at once, the listener may pay attention first to one conversation and then shift attention to the other (Broadbent, 1958; Cherry, 1953). Our present research concerns an observer's ability to shift focal attention (Kahneman, 1973) between two sources of visual input. In studying visual attention, we used the RSVP attention shift paradigm.
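Assumptions (a) and (b) of the general attention model can be sketched numerically. The Python below is an illustrative reconstruction, not the fitted AGM: the Gaussian gate shape, its 400-ms latency (taken from the reported cluster center), its 100-ms width, and the choice of the 9.2/s numeral rate are assumptions for demonstration only.

```python
import math

# Illustrative sketch of the attention gating computation described above.
# Gate latency (400 ms) follows the reported cluster center; the Gaussian
# shape and 100-ms width are assumed, not fitted.

GATE_CENTER_MS = 400.0
GATE_SIGMA_MS = 100.0

def gate(t: float) -> float:
    """Bell-shaped attention gate, opened after target detection at t = 0."""
    return math.exp(-0.5 * ((t - GATE_CENTER_MS) / GATE_SIGMA_MS) ** 2)

def item_strength(onset: float, offset: float, steps: int = 100) -> float:
    """How wide the gate was open during the numeral's exposure
    (numerical integral of the gate over the item's presentation)."""
    dt = (offset - onset) / steps
    return sum(gate(onset + (k + 0.5) * dt) for k in range(steps)) * dt

# Numerals at 9.2/s => one every ~109 ms (one of the rates actually used).
soa = 1000.0 / 9.2
strengths = [item_strength(i * soa, (i + 1) * soa) for i in range(8)]

# Order information: item strength times cumulative strength of prior items.
cumulative = 0.0
for i, s in enumerate(strengths):
    order_info = s * cumulative
    cumulative += s
    print(f"stimulus position {i}: strength={s:6.1f}  order={order_info:8.1f}")
```

Items near the gate's center receive the greatest strength, which produces the observed clustering; because a late item can accrue more order information than an earlier weak one, the model also produces inverted pairs and folding.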
To some degree, all current models of visual motion-perception mechanisms depend on the power of the visual signal in various spatiotemporal-frequency bands. Here we show how to construct counterexamples: visual stimuli that are consistently perceived as obviously moving in a fixed direction yet for which Fourier-domain power analysis yields no systematic motion components in any given direction. We provide a general theoretical framework for investigating non-Fourier motion-perception mechanisms; central are the concepts of drift-balanced and microbalanced random stimuli. A random stimulus S is drift balanced if its expected power in the frequency domain is symmetric with respect to temporal frequency, that is, if the expected power in S of every drifting sinusoidal component is equal to the expected power of the sinusoid of the same spatial frequency, drifting at the same rate in the opposite direction. Additionally, S is microbalanced if the result WS of windowing S by any space-time-separable function W is drift balanced. We prove that (i) any space-time-separable random (or nonrandom) stimulus is microbalanced; (ii) any linear combination of pairwise independent microbalanced (respectively, drift-balanced) random stimuli is microbalanced (respectively, drift balanced) if the expectation of each component is uniformly zero; (iii) the convolution of independent microbalanced and drift-balanced random stimuli is microbalanced and drift balanced; (iv) the product of independent microbalanced random stimuli is microbalanced; and (v) the expected response of any Reichardt detector to any microbalanced random stimulus is zero at every instant in time. Examples are provided of classes of microbalanced random stimuli that display consistent and compelling motion in one direction. All the results and examples from the domain of motion perception are transposable to the space-domain problem of detecting orientation in a texture pattern.
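The defining symmetry of a drift-balanced stimulus is straightforward to check numerically. The following sketch (Python with NumPy) estimates expected Fourier power by averaging over independent stimulus samples and measures its asymmetry in temporal frequency. The grid sizes, trial count, and the two example stimuli are illustrative choices; the separable-noise example corresponds to result (i) above, and the drifting grating is included as a contrasting, non-drift-balanced case.

```python
import numpy as np

# Numerical check of the drift-balanced property defined above: a random
# stimulus S(x, t) is drift balanced if its expected Fourier power is
# symmetric in temporal frequency, P(fx, ft) = P(fx, -ft).

rng = np.random.default_rng(0)
NX, NT, TRIALS = 32, 32, 1000

def expected_power(make_stimulus, trials=TRIALS):
    """Average |FFT|^2 over independent stimulus samples."""
    acc = np.zeros((NX, NT))
    for _ in range(trials):
        acc += np.abs(np.fft.fft2(make_stimulus())) ** 2
    return acc / trials

def separable_noise():
    # Space-time-separable random stimulus: S(x, t) = f(x) * g(t).
    # By result (i) of the abstract, this is microbalanced.
    return np.outer(rng.standard_normal(NX), rng.standard_normal(NT))

def drifting_grating():
    # A drifting sinusoid with random phase: NOT drift balanced.
    x, t = np.meshgrid(np.arange(NX), np.arange(NT), indexing="ij")
    phase = rng.uniform(0, 2 * np.pi)
    return np.sin(2 * np.pi * (3 * x / NX - 5 * t / NT) + phase)

def asymmetry(power):
    """Max relative difference between power at (fx, ft) and (fx, -ft)."""
    flipped = np.roll(power[:, ::-1], 1, axis=1)  # maps ft bins to -ft bins
    return np.max(np.abs(power - flipped)) / np.max(power)

print("separable noise: ", asymmetry(expected_power(separable_noise)))   # ~0
print("drifting grating:", asymmetry(expected_power(drifting_grating)))  # ~1
```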
The time course of attention was experimentally observed using two kinds of stimuli: a cue to begin attending or to shift attention, and a stimulus to be attended. Precise measurements of the time course of attention show that it consists of two partially concurrent processes: a fast, effortless, automatic process that records the cue and its neighboring events; and a slower, effortful, controlled process that records the stimulus to be attended and its neighboring events.