Sensory perception is dramatically influenced by context. Models of contextual neural surround effects in vision have mostly accounted for primary visual cortex (V1) data, via nonlinear computations such as divisive normalization. However, surround effects are not well understood within the visual hierarchy, for neurons with more complex stimulus selectivity beyond V1. We used feedforward deep neural networks and developed a gradient-based technique to visualize the most suppressive and most excitatory surround stimuli. We found that deep neural networks exhibited a key signature of surround effects in V1: they highlighted center stimuli that visually stand out from the surround and suppressed responses when the surround stimulus was similar to the center. Surprisingly, when the center stimulus was altered, the most suppressive surround changed to follow it. This ties to notions of efficient coding and salience perception, even though the networks were trained only to classify images. Through the visualization approach, we generalized previous understanding of surround effects to more complex stimuli, in ways that have not yet been revealed in visual cortices. Our results emerged without specialized nonlinear computations, arising instead from subtraction and the stacking of layers. We identified further successes, including V2 surround data for textures that divisive normalization models cannot explain, along with mismatches to the biology that the feedforward deep neural networks could not capture. Our results provide a testable hypothesis about surround effects in higher visual cortices, and the visualization approach could be adopted in future biological experimental designs.
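
To make the gradient-based surround visualization concrete, the following is a minimal sketch in PyTorch of one plausible formulation: hold a center stimulus fixed, and optimize only the surround pixels by gradient descent to minimize (most suppressive) or maximize (most excitatory) a chosen unit's response. The choice of VGG16, the layer and channel indices, the circular center mask, and all optimization hyperparameters are illustrative assumptions for exposition, not the authors' exact setup.

```python
import torch
import torchvision.models as models

# Sketch: optimize surround pixels (center held fixed) to find the most
# suppressive (or most excitatory) surround for one unit's response.
# VGG16, layer/channel indices, and the mask radius are assumptions.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

layer_idx, channel = 15, 42        # hypothetical target unit
img_size, center_radius = 224, 40  # surround = everything outside this radius

# Binary center mask: 1 inside the center disk, 0 in the surround.
coords = torch.arange(img_size, dtype=torch.float32)
ys, xs = torch.meshgrid(coords, coords, indexing="ij")
dist = ((ys - img_size // 2) ** 2 + (xs - img_size // 2) ** 2).sqrt()
center_mask = (dist < center_radius).float().to(device)

center = torch.rand(1, 3, img_size, img_size, device=device)  # fixed center stimulus
surround = torch.rand(1, 3, img_size, img_size, device=device, requires_grad=True)
opt = torch.optim.Adam([surround], lr=0.05)

def unit_response(x):
    """Response of the target channel at the spatial center of the chosen layer."""
    for i, layer in enumerate(model):
        x = layer(x)
        if i == layer_idx:
            return x[0, channel, x.shape[2] // 2, x.shape[3] // 2]

sign = +1.0  # +1: most suppressive surround (minimize response); -1: most excitatory
for step in range(200):
    opt.zero_grad()
    # Composite stimulus: fixed center inside the mask, learnable surround outside.
    stimulus = center_mask * center + (1 - center_mask) * surround
    loss = sign * unit_response(stimulus)
    loss.backward()  # gradients flow only to surround pixels
    opt.step()
    with torch.no_grad():
        surround.clamp_(0, 1)  # keep pixels in a valid image range
```

Because the center mask zeroes the surround's contribution inside the disk, the optimization can only reshape the context around the fixed center; rerunning it with a different center stimulus would then test whether the most suppressive surround tracks the center, as described above.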