Visual search relies on the ability to use information about the target held in working memory to guide attention and make target-match decisions. This memory representation is referred to as the “attentional” or “target” template and is typically thought to contain veridical target information. However, more recent studies have shown that in complex visual environments, target-associated information is often used to guide attention (Battistoni et al., 2017; de Lange et al., 2018; Peelen et al., 2024; Vo et al., 2019; Yu et al., 2023), particularly when the target is difficult to discriminate (Zhou & Geng, 2024). Here, we use fMRI and multivariate pattern analysis to test whether attentional guidance by target-associated information is explicitly represented in the preparatory period before search begins, either in conjunction with the target or even in place of it. Participants were first trained on four face-scene category pairings. After learning, they performed a cued visual search task that began with a face cue, followed by a delay, and then a search display containing two lateralized faces, each superimposed on a scene image. On 75% of trials (“scene-valid” trials), the target face appeared on its previously associated scene. Our results show that during the cue period, face information could be decoded from the fusiform face area (FFA), superior parietal lobule (SPL), and dorsolateral prefrontal cortex (dLPFC). During the delay period, however, face information was no longer decodable from FFA; instead, scene information was now decoded from the parahippocampal place area (PPA) and inferior frontal junction (IFJ). These findings demonstrate the dynamic nature of template information during visual search and suggest that target-associated information can serve as a guiding template, even replacing veridical target information in the active attentional template.