Superpixel segmentation methods are widely used in computer vision applications thanks to their effectiveness in border delineation. These methods, however, do not usually take prior object information into account. Although a few exceptions exist, such methods rely heavily on the quality of the provided object information and incur high computational cost in most practical cases. Inspired by such approaches, we propose the Object-based Dynamic and Iterative Spanning Forest (ODISF), a novel object-based superpixel segmentation framework that effectively exploits prior object information while being robust to its quality. ODISF consists of three independent steps: (i) seed oversampling; (ii) dynamic path-based superpixel generation; and (iii) object-based seed removal. After (i), steps (ii) and (iii) are repeated until the desired number of superpixels is reached. Experimental results show that ODISF can surpass state-of-the-art methods according to several metrics, while being significantly faster than its object-based counterparts.
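For illustration only, the loop below sketches how these three steps could fit together: a large initial seed set is sampled once, and then superpixel generation and saliency-guided seed removal alternate until the target number of superpixels remains. The nearest-seed assignment and the mean-saliency seed score are simplified stand-ins for ODISF's dynamic path-based delineation and its actual relevance criterion, which are defined in the paper; `removal_rate` and the other parameters are likewise assumptions made for this sketch.

```python
import numpy as np

def sample_grid_seeds(height, width, n_seeds):
    """Step (i): oversample seeds on a roughly uniform grid."""
    step = max(1, int(np.sqrt(height * width / n_seeds)))
    ys, xs = np.meshgrid(np.arange(step // 2, height, step),
                         np.arange(step // 2, width, step), indexing="ij")
    return np.stack([ys.ravel(), xs.ravel()], axis=1)

def assign_to_nearest_seed(image, seeds, spatial_weight=0.5):
    """Stand-in for step (ii): label each pixel with its closest seed in a
    joint color+space feature (ODISF instead uses dynamic path-based
    delineation, not this nearest-seed rule)."""
    h, w = image.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    feats = np.concatenate([image.astype(float),
                            spatial_weight * np.stack([ys, xs], axis=-1)], axis=-1)
    labels = np.zeros((h, w), dtype=int)
    best = np.full((h, w), np.inf)
    for i, (sy, sx) in enumerate(seeds):
        d = np.linalg.norm(feats - feats[sy, sx], axis=-1)
        closer = d < best
        best[closer], labels[closer] = d[closer], i
    return labels

def remove_seeds(seeds, labels, saliency, n_keep):
    """Hypothetical step (iii): keep the seeds whose superpixels overlap the
    salient object the most and discard the rest."""
    scores = np.array([saliency[labels == i].mean() for i in range(len(seeds))])
    return seeds[np.argsort(scores)[::-1][:n_keep]]

def odisf_like(image, saliency, n_final, n_initial=1000, removal_rate=0.5):
    """Overall loop: oversample once, then alternate generation and removal."""
    seeds = sample_grid_seeds(*image.shape[:2], n_initial)
    while len(seeds) > n_final:
        labels = assign_to_nearest_seed(image, seeds)
        n_keep = max(n_final, int(len(seeds) * removal_rate))
        seeds = remove_seeds(seeds, labels, saliency, n_keep)
    return assign_to_nearest_seed(image, seeds)
```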
Superpixel segmentation methods aim to partition an image into homogeneous connected regions of pixels (i.e., superpixels) such that each object of interest is precisely defined by the union of the superpixels that compose it. However, the homogeneity criterion is often based solely on color, which, under certain conditions, may be insufficient for inferring the extent of the objects (e.g., in low-gradient regions). In this dissertation, we address this issue by incorporating prior object information, represented as monochromatic object saliency maps, into a state-of-the-art method, the Iterative Spanning Forest (ISF) framework, resulting in a novel framework named Object-based ISF (OISF). For a given saliency map, OISF-based methods can increase the superpixel resolution within the objects of interest while permitting higher adherence to the map's borders when color is insufficient for delineation. We compared our work with state-of-the-art methods, considering two classic superpixel segmentation metrics on three datasets. Experimental results show that our approach achieves effective object delineation with a significantly lower number of superpixels than the baselines, especially in terms of preventing superpixel leaking.
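As a rough illustration of the object-biased seed distribution described above, the snippet below draws seeds with probability proportional to a blend of a uniform density and the normalized saliency map, so that more seeds, and hence smaller superpixels, fall inside the salient object. The `object_bias` parameter and the sampling scheme are assumptions made for this sketch; the actual OISF formulation, including how the saliency map also influences path costs during delineation, is given in the dissertation.

```python
import numpy as np

def saliency_biased_seeds(saliency, n_seeds, object_bias=0.7, rng=None):
    """Illustration only: sample seed pixels from a mixture of a uniform
    density and the normalized saliency map, concentrating seeds (and thus
    superpixel resolution) inside the salient object. `object_bias` is a
    hypothetical knob, not a parameter from the dissertation."""
    rng = np.random.default_rng(rng)
    s = saliency.astype(float).ravel()
    s = s / s.sum() if s.sum() > 0 else np.full_like(s, 1.0 / s.size)
    uniform = np.full_like(s, 1.0 / s.size)
    p = (1 - object_bias) * uniform + object_bias * s
    p = p / p.sum()  # guard against floating-point drift
    idx = rng.choice(s.size, size=n_seeds, replace=False, p=p)
    ys, xs = np.unravel_index(idx, saliency.shape)
    return np.stack([ys, xs], axis=1)
```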