How do we find a target item in a visual world filled with distractors? A quarter of a century ago, in her influential Feature Integration Theory (FIT), Treisman proposed a two-stage solution to the problem of visual search: a preattentive stage that could process a limited number of basic features in parallel, and an attentive stage that could perform more complex acts of recognition, one object at a time. The theory raised a series of questions. What is the nature of that preattentive stage? How do serial and parallel processes interact? How does a search unfold over time? Recent work has shed new light on these issues.

Visual search is one of those things we do all day, every day, from finding milk in the refrigerator to locating our car in the car-park. We pay others to do it at airport security checks and in radiology laboratories and, in the past quarter of a century, we have done a great deal of it in our research laboratories. Laboratory search tasks ask the observer to find and/or identify a target item among some number of distractor items. The core empirical fact that needs explanation is that some search tasks are easy and others are difficult (see Fig. 1). We assume that, if we could successfully describe the rules that govern human search behavior, we would be able to improve performance in critical applied search tasks and to offer suggestions to those trying to build machines that might do our search tasks for us. Visual search is also an experimentally tractable way to study selective attention, and it is increasingly clear that any useful theory of visual perception will require an understanding of the role of attention.

Treisman's feature integration theory

Visual search and the role of attention in search have been much discussed in the recent literature (see [1][2][3][4][5] for reviews). This article will concentrate on several issues growing out of Anne Treisman's seminal Feature Integration Theory (FIT) [6].
It would be a great disservice to many other researchers to label FIT as the sole 'big bang' of the visual search universe. However, it does serve well as an organizing principle for a brief review of some interesting and long-running controversies.

The original FIT proposed that visual search tasks could be dichotomized into 'preattentive' and 'attentive' categories. Preattentive processing was held to occur in parallel across most or all of the visual field in a single step and to be limited to a small set of basic features like color, size, motion, and orientation. Thus, you could preattentively find a red item among green. Operationally, preattentive search for a target defined by a single basic feature would produce reaction times (RTs) independent of the number of items in the display (set size). Thus, the slope of the function relating RT to set size would be near zero. Other tasks, like a search for a randomly oriented 'T' among 'L's, could not be performed preattentively. Attentive processing was presumed to marshal the more extensive perceptual capabilities required to 'bind' features ...
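The operational signature described above, the slope of the RT × set-size function, can be illustrated with a small simulation. The sketch below is not a model of search itself; it simply generates hypothetical mean RTs under an assumed per-item cost (0 ms/item for an efficient feature search, ~25 ms/item for a difficult search) and recovers the slope by least-squares regression, as a search experimenter would. All parameter values (baseline RT, noise, trial counts) are illustrative assumptions, not figures from the literature.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mean_rts(set_sizes, slope_ms, base_ms=450.0,
                      noise_sd=30.0, n_trials=50):
    """Simulate mean reaction times (ms) at each display set size.

    slope_ms is the assumed cost per additional display item:
    ~0 ms/item for a preattentive feature search (red among green),
    ~25 ms/item for a hard search (a rotated 'T' among 'L's).
    All values here are hypothetical, chosen only for illustration.
    """
    means = []
    for n in set_sizes:
        trials = base_ms + slope_ms * n + rng.normal(0.0, noise_sd, n_trials)
        means.append(trials.mean())
    return np.array(means)

set_sizes = np.array([4, 8, 12, 16])
feature_rts = simulate_mean_rts(set_sizes, slope_ms=0.0)
hard_rts = simulate_mean_rts(set_sizes, slope_ms=25.0)

# Recover the RT x set-size slope by fitting a line to the mean RTs.
feature_slope = np.polyfit(set_sizes, feature_rts, 1)[0]
hard_slope = np.polyfit(set_sizes, hard_rts, 1)[0]
print(f"feature search slope: {feature_slope:.1f} ms/item")
print(f"hard search slope:    {hard_slope:.1f} ms/item")
```

The fitted slope for the feature search hovers near zero, while the hard search shows a clearly positive slope, which is exactly the dichotomy FIT used to separate preattentive from attentive tasks.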