Computer vision attention processes assign variable-hypothesized importance to different parts of the visual input and direct the allocation of computational resources. This nonuniform allocation might help accelerate the image analysis process. This paper proposes a new bottom-up attention mechanism. Rather than taking the traditional approach, which tries to model human attention, we propose a validated stochastic model to estimate the probability that an image part is of interest. We refer to this probability as saliency and thus specify saliency in a mathematically well-defined sense. The model quantifies several intuitive observations, such as the greater likelihood of correspondence between visually similar image regions and the likelihood that only a few interesting objects will be present in the scene. The latter observation, which implies that such objects are (relaxed) global exceptions, replaces the traditional preference for local contrast. The algorithm starts with a rough preattentive segmentation and then uses a graphical model approximation to efficiently reveal which segments are more likely to be of interest. Experiments on natural scenes containing a variety of objects demonstrate the proposed method and show its advantages over previous approaches.
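The "global exception" intuition above can be illustrated with a toy score: segments that resemble many other segments in the image receive low saliency, while segments similar to few others are treated as exceptions. This is only an illustrative sketch under a Gaussian-similarity assumption, not the paper's graphical-model inference; the function name and `sigma` parameter are hypothetical.

```python
import numpy as np

def exception_saliency(features, sigma=1.0):
    """Toy 'global exception' saliency: a segment that is similar to
    few other segments scores high. Illustrative only; the paper's
    model performs probabilistic inference over candidate labelings."""
    f = np.asarray(features, dtype=float)
    # Pairwise squared distances between segment feature vectors.
    d2 = ((f[:, None, :] - f[None, :, :]) ** 2).sum(axis=-1)
    sim = np.exp(-d2 / (2.0 * sigma ** 2))
    # Total similarity to all *other* segments (subtract self-similarity).
    support = sim.sum(axis=1) - 1.0
    # High support -> common appearance -> low saliency, and vice versa.
    return 1.0 / (1.0 + support)
```

With four segments where three share similar features and one is a feature outlier, the outlier receives the highest score, mirroring the preference for (relaxed) global exceptions over local contrast.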
The effect of distractor homogeneity and target-distractor similarity on visual search was previously explored under two models designed for computer vision. We extend these models here to account for internal noise and to evaluate their ability to predict human performance. In four experiments, observers searched for a horizontal target among distractors of different orientation (orientation search; Experiments 1 and 2) or a gray target among distractors of different color (color search; Experiments 3 and 4). Distractor homogeneity and target-distractor similarity were systematically manipulated. We then tested our models' ability to predict the search performance of human observers. Our models' predictions were closer to human performance than those of other prominent quantitative models.
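The two manipulated factors can be made concrete with a Monte Carlo sketch of a max-rule search model with internal noise: each display item's feature value is perturbed by Gaussian noise, and the simulated observer reports the item whose noisy value best matches the target. This is a generic signal-detection-style illustration, not the specific models evaluated in the experiments; the function name and parameterization are hypothetical.

```python
import random

def simulate_search(n_items, td_distance, distractor_spread,
                    noise_sd, trials=2000, seed=0):
    """Monte Carlo sketch of a noisy max-rule search observer.

    td_distance:       target-distractor feature separation (0 = identical).
    distractor_spread: distractor heterogeneity (0 = homogeneous).
    noise_sd:          internal (observer) noise.
    Returns the proportion of trials on which the target is found.
    Hypothetical parameterization, for illustration only.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        # Item 0 is the target, at feature value 0.
        values = [0.0] + [td_distance + rng.gauss(0.0, distractor_spread)
                          for _ in range(n_items - 1)]
        # Internal noise corrupts each item's representation.
        noisy = [abs(v + rng.gauss(0.0, noise_sd)) for v in values]
        # Max rule: report the item closest to the target value.
        if min(range(n_items), key=lambda i: noisy[i]) == 0:
            correct += 1
    return correct / trials
```

In this sketch, increasing `td_distance` (lower target-distractor similarity) raises simulated accuracy, and increasing `distractor_spread` (lower homogeneity) lowers it, matching the qualitative direction of the manipulations described above.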
Over-segmentation, or super-pixel generation, is a common preliminary stage for many computer vision applications. New acquisition technologies enable the capturing of 3D point clouds that contain color and geometric information. This 3D information introduces a conceptual change that can be exploited to improve the results of over-segmentation, which traditionally relies mainly on color information, and to generate clusters of points we call super-points. We consider a variety of possible 3D extensions of the Local Variation (LV) graph-based over-segmentation algorithm and compare them thoroughly. We consider different alternatives for constructing the connectivity graph, for assigning the edge weights, and for defining the merge criterion, which must now account for the geometric information and not only color. Following this evaluation, we derive a new generic algorithm for over-segmentation of 3D point clouds. We call this new algorithm Point Cloud Local Variation (PCLV). The advantages of the new over-segmentation algorithm are demonstrated on both outdoor and cluttered indoor scenes. Performance analysis of the proposed approach compared to state-of-the-art 2D and 3D over-segmentation algorithms shows significant improvement according to the common performance measures.
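The Local Variation merge criterion that the extensions above build on can be sketched as greedy union-find merging over edges sorted by weight, where each edge weight may combine color and geometric distances. This is a minimal sketch of the classic LV (Felzenszwalb-Huttenlocher-style) criterion, not the PCLV algorithm itself; the function names and the `k` scale parameter are assumptions for illustration.

```python
class UnionFind:
    """Union-find tracking, per component, its size and the largest
    internal edge weight accepted so far."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, ra, rb, w):
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.internal[ra] = max(self.internal[ra], self.internal[rb], w)

def local_variation_segment(edges, n_points, k=1.0):
    """Greedy local-variation merging over a point connectivity graph.

    edges: iterable of (weight, i, j); a 3D extension may set
           w = alpha * d_color + (1 - alpha) * d_xyz  (hypothetical mix).
    k:     scale parameter; tau(C) = k / |C| relaxes merging for
           small components, as in the LV criterion.
    Returns a component label per point.
    """
    uf = UnionFind(n_points)
    for w, i, j in sorted(edges):
        ri, rj = uf.find(i), uf.find(j)
        if ri == rj:
            continue
        # Merge only if the edge is no heavier than both components'
        # internal variation plus their size-dependent slack.
        if w <= min(uf.internal[ri] + k / uf.size[ri],
                    uf.internal[rj] + k / uf.size[rj]):
            uf.union(ri, rj, w)
    return [uf.find(i) for i in range(n_points)]
```

On a toy graph with three mutually close points and one distant point, the criterion merges the close points into one super-point and leaves the outlier separate.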