Looking for goal-relevant objects in our various environments is one of the most ubiquitous tasks the human visual system has to accomplish (Wolfe, 1998). Visual search is guided by a number of separable selective-attention mechanisms that can be categorized as bottom-up driven – guidance by salient physical properties of the current stimuli – or top-down controlled – guidance by observers' “online” knowledge of search-critical object properties (e.g., Liesefeld and Müller, 2019). In addition, observers' expectations based on past experience also play a significant role in goal-directed visual selection. Because sensory environments are typically stable, it is beneficial for the visual system to extract and learn the environmental regularities that are predictive of (the location of) the target stimulus. This perspective article is concerned with one of these predictive mechanisms: statistical context learning of consistent spatial patterns of target and distractor items in visual search. We review recent studies on context learning and its adaptability to consistent changes, with the aim of providing new directions for the study of the processes involved in the acquisition of search-guiding context memories and their adaptation to consistent contextual changes – from a three-pronged psychological, computational, and neurobiological perspective.