It is well-known that we can tune attention to specific features (e.g., colors). Originally, it was believed that attention would always be tuned to the exact feature value of the sought-after target (e.g., orange). However, subsequent studies showed that selection is often geared towards target-dissimilar items, which was variously attributed to (1) tuning attention to the relative target feature that distinguishes the target from other items in the surround (e.g., reddest item; relational tuning), (2) tuning attention to a shifted target feature that allows more optimal target selection (e.g., reddish orange; optimal tuning), or (3) broad attentional tuning combined with selection of the most salient item that is still similar to the target (combined similarity/saliency). The present study used a color search task and assessed gaze capture by differently colored distractors to distinguish between the three accounts. The results of the first experiment showed that a very target-dissimilar distractor that matched the relative color of the target but lay outside the area of optimal tuning still captured gaze very strongly. As shown by a control condition and a control experiment, bottom-up saliency modulated capture only weakly, ruling out the combined similarity/saliency account. Together, the results support the relational account, according to which attention is tuned to the relative target feature (e.g., reddest), not to an optimal feature value or the exact target feature.

Top-down tuning mechanisms

Several different mechanisms have been proposed to explain how exactly attention is tuned top-down to a known target feature. Among the first theories of