When you walk into a room, you perceive visual information that is both close to you and farther away in depth. In the current study, we investigated how visual search is affected by information across scene depth and contrasted it with the effect of semantic scene context. Across two experiments, participants searched for target objects appearing in either the foreground or the background regions of scenes that were either normally configured or had semantically mismatched foreground and background contexts (Chimera scenes; Castelhano, Fernandes, & Theriault, 2018). In Experiment 1, we found that foreground targets were located faster, with shorter latencies and fewer fixations to the target. This pattern was not explained by target size. In Experiment 2, a preview of the scene was added before the search display to better establish scene context. Results again showed a Foreground Bias, with faster search performance for foreground targets. Together, these studies suggest processing differences across depth in scenes, with a preference for objects closer in space.