At present, most approaches apply a single saliency detection method to every pixel in an image. However, every method has its own limitations, and a single method may not perform well across all image scenes. In this article, we propose a new adaptive framework for detecting salient objects. For each pixel in an image, the framework adaptively selects an appropriate method according to the pixel's contextual relationships. In our framework, an image is characterized by a set of binary maps generated by randomly thresholding the image's initial saliency map. We then compute a surroundedness cue, obtained through a series of operations on these binary maps, and use it to classify all the pixels in the image. Based on the resulting classes, we choose the methods used to detect salient objects. Extensive experimental results on three benchmark datasets demonstrate that our method performs favorably against 11 state-of-the-art methods.
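To make the pipeline concrete, the sketch below illustrates the binary-map and surroundedness steps described above: an initial saliency map is randomly thresholded into binary maps, regions that do not touch the image border are treated as "surrounded", and pixels are classified by thresholding the accumulated cue. This is a minimal illustration under our own assumptions (function names, number of maps, and the threshold `tau` are hypothetical), not the authors' implementation.

```python
import numpy as np
from scipy import ndimage


def _enclosed_regions(bmap):
    """Mark connected regions of a binary map that do not touch the image border."""
    labels, _ = ndimage.label(bmap)
    border_labels = np.unique(np.concatenate(
        [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    return np.isin(labels, border_labels, invert=True) & bmap


def surroundedness_cue(sal, n_maps=32, rng=None):
    """Accumulate per-pixel surroundedness over randomly thresholded binary maps.

    `sal` is an initial saliency map normalized to [0, 1]; the result is in [0, 1].
    """
    rng = np.random.default_rng(rng)
    acc = np.zeros_like(sal, dtype=float)
    for _ in range(n_maps):
        theta = rng.uniform(0.05, 0.95)       # random threshold
        bmap = sal > theta                    # binary map
        acc += _enclosed_regions(bmap)        # enclosed foreground regions
        acc += _enclosed_regions(~bmap)       # enclosed background regions
    return acc / n_maps


def classify_pixels(sal, tau=0.5, **kwargs):
    """Split pixels into two classes by thresholding the surroundedness cue;
    each class could then be handed to a different saliency detection method."""
    cue = surroundedness_cue(sal, **kwargs)
    return (cue >= tau).astype(np.uint8)      # 1: likely object, 0: likely background
```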