Human observers have the remarkable ability to efficiently prioritize task-relevant over task-irrelevant visual information. Yet a fundamental question remains whether this ability is limited to a single task-relevant item, or whether multiple items can be prioritized simultaneously. The answer to this question depends on 1) whether observers can concurrently prepare and maintain top-down templates for more than one target object, and 2) whether those templates can then, in parallel, bias selection towards more than one target in the visual input. Here we disentangle these two processes for the first time. We measured electroencephalographic (EEG) responses while observers searched for two color-defined targets among distractors. Crucially, we varied not only the number of target colors that observers anticipated (thus determining the number of target templates), but also the number of colors distinguishing the two target objects present in the search display (thus determining the number of templates required to engage in actual selection). Multivariate classification of the EEG pattern allowed us to track the attentional enhancement of each target separately across time. Both behavioral and electrophysiological results revealed only a small cost of preparing two versus one color template. In contrast, substantial costs arose when two templates had to be engaged in the actual selection of search targets. Furthermore, the results indicate that this cost reflects limitations of parallel processing rather than a serial bottleneck. These findings bridge currently diverging theoretical perspectives on capacity limitations of feature-based attention.