2019
DOI: 10.1109/tip.2019.2897941

An Iterative Spanning Forest Framework for Superpixel Segmentation

Abstract: Superpixel segmentation has become an important research problem in image processing. In this paper, we propose an Iterative Spanning Forest (ISF) framework, based on sequences of Image Foresting Transforms, where one can choose i) a seed sampling strategy, ii) a connectivity function, iii) an adjacency relation, and iv) a seed pixel recomputation procedure to generate improved sets of connected superpixels (supervoxels in 3D) per iteration. The superpixels in ISF structurally correspond to spanning trees root…
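The iterative loop sketched in the abstract (grow a spanning forest from seeds with an Image Foresting Transform, then recompute the seeds and repeat) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes one particular configuration of the framework, namely a grayscale image, 4-adjacency, the fmax connectivity function (maximum edge weight along a path), and centroid-based seed recomputation. Function names (`ift_forest`, `recompute_seeds`, `isf`) are hypothetical.

```python
import heapq

def ift_forest(img, seeds):
    """One Image Foresting Transform pass: grow a spanning forest from the
    seeds, assigning each pixel to the seed that reaches it with the cheapest
    path. Path cost here is fmax: the maximum edge weight along the path."""
    h, w = len(img), len(img[0])
    cost = [[float("inf")] * w for _ in range(h)]
    label = [[-1] * w for _ in range(h)]
    pq = []
    for k, (r, c) in enumerate(seeds):
        cost[r][c] = 0
        label[r][c] = k
        heapq.heappush(pq, (0, r, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > cost[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-adjacency
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = max(d, abs(img[nr][nc] - img[r][c]))
                if nd < cost[nr][nc]:
                    cost[nr][nc] = nd
                    label[nr][nc] = label[r][c]
                    heapq.heappush(pq, (nd, nr, nc))
    return label

def recompute_seeds(label, n_seeds, h, w):
    """Move each seed toward the centroid of its superpixel (one common
    recomputation strategy; the framework allows others)."""
    acc = [[0, 0, 0] for _ in range(n_seeds)]
    for r in range(h):
        for c in range(w):
            k = label[r][c]
            acc[k][0] += r
            acc[k][1] += c
            acc[k][2] += 1
    return [(round(sr / n), round(sc / n)) for sr, sc, n in acc]

def isf(img, seeds, iterations=3):
    """Alternate IFT passes with seed recomputation."""
    for _ in range(iterations):
        label = ift_forest(img, seeds)
        seeds = recompute_seeds(label, len(seeds), len(img), len(img[0]))
    return label
```

Because each superpixel is a spanning tree rooted at its seed, the resulting regions are connected by construction. The sketch omits safeguards the real framework needs (e.g., handling a superpixel that loses all its pixels between iterations).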


Cited by 72 publications (86 citation statements)
References 53 publications
“…In order to generate consistent and compact superpixels, the simple linear iterative clustering (SLIC) algorithm [30] is used here, which segregates an image into groups of approximately similar characteristics. The major advantage of the SLIC method is that it creates groups with a very low possibility of grouping pixels of dissimilar characteristics [31]. Moreover, it has a very low computational cost, and a tradeoff is possible between accuracy and classification time.…”
Section: B. Superpixel-Based Feature Extraction, 1) Superpixel Segmentation
confidence: 99%
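The SLIC idea quoted above (localized k-means in a joint intensity-plus-position space, with a compactness weight trading boundary adherence for regularity) can be sketched in a few lines. This is a simplified grayscale sketch under stated assumptions, not the reference SLIC implementation: the function name `slic_like` and the fixed grid initialization are illustrative choices.

```python
def slic_like(img, k, m=10.0, iters=5):
    """Minimal SLIC-style clustering of a grayscale image (sketch).
    Distance combines intensity difference and spatial distance, with
    compactness m and grid interval S weighting the spatial term."""
    h, w = len(img), len(img[0])
    S = max(1, int((h * w / k) ** 0.5))  # expected superpixel spacing
    # initialize cluster centers [row, col, intensity] on a regular grid
    centers = [[r, c, img[r][c]]
               for r in range(S // 2, h, S)
               for c in range(S // 2, w, S)]
    label = [[0] * w for _ in range(h)]
    for _ in range(iters):
        dist = [[float("inf")] * w for _ in range(h)]
        for idx, (cr, cc, cv) in enumerate(centers):
            # key SLIC trick: search only a window around each center
            for r in range(max(0, int(cr) - S), min(h, int(cr) + S + 1)):
                for c in range(max(0, int(cc) - S), min(w, int(cc) + S + 1)):
                    dc = img[r][c] - cv
                    ds = ((r - cr) ** 2 + (c - cc) ** 2) ** 0.5
                    d = (dc * dc + (ds / S * m) ** 2) ** 0.5
                    if d < dist[r][c]:
                        dist[r][c] = d
                        label[r][c] = idx
        # update each center to the mean of its assigned pixels
        acc = [[0.0, 0.0, 0.0, 0] for _ in centers]
        for r in range(h):
            for c in range(w):
                a = acc[label[r][c]]
                a[0] += r; a[1] += c; a[2] += img[r][c]; a[3] += 1
        for i, a in enumerate(acc):
            if a[3]:
                centers[i] = [a[0] / a[3], a[1] / a[3], a[2] / a[3]]
    return label
```

The windowed search is what gives SLIC its low computational cost: each pixel is compared against a constant number of nearby centers rather than all k, so the cost per iteration is linear in the number of pixels. Raising `m` yields more compact, grid-like superpixels; lowering it improves boundary adherence.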
“…Although this method can more directly obtain superpixel borders that fit the object boundary, it limits the pairwise links to vertical and horizontal junctions to preserve superpixel regularity, which results in lower object boundary adherence. Hence, Munoz et al. propose the iterative spanning forest (ISF) framework to extract superpixels that maintain object boundary adherence [25]. It introduces a mixture seed sampling strategy that adaptively places seeds according to region content, addressing the superpixel regularity issue via graph-tree theory.…”
Section: Related Work
confidence: 99%
“…For each image, let us suppose that a fine partition is produced by an initial segmentation (for instance, a set of superpixels [1] [24] [43], the basins produced by a classical watershed algorithm [27], or a segmentation into individual pixels/flat zones) and that it contains all contours that make sense in the image. We define a dissimilarity measure between adjacent tiles of this fine partition.…”
Section: Hierarchies and Partitions, 2.1 Graph-Based Hierarchical Segmentation
confidence: 99%
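The construction quoted above (a fine partition plus a dissimilarity measure between adjacent tiles) is the standard starting point for graph-based hierarchical segmentation: build a region adjacency graph over the tiles and merge the least dissimilar adjacent pair repeatedly, recording the merge order as a hierarchy. A minimal sketch, assuming mean intensity per tile as the feature and absolute difference of means as the dissimilarity (the function name `hierarchy_from_partition` is hypothetical):

```python
import heapq

def hierarchy_from_partition(labels, values):
    """Greedy agglomeration over a region adjacency graph (sketch).
    labels: 2-D grid of tile ids (the fine partition);
    values: mean feature per tile id.
    Returns the merge sequence [(a, b, dissimilarity), ...], i.e. the
    hierarchy: early merges correspond to fine scales, late ones to coarse."""
    h, w = len(labels), len(labels[0])
    # collect pairs of 4-adjacent tiles
    edges = set()
    for r in range(h):
        for c in range(w):
            for nr, nc in ((r + 1, c), (r, c + 1)):
                if nr < h and nc < w and labels[r][c] != labels[nr][nc]:
                    edges.add(tuple(sorted((labels[r][c], labels[nr][nc]))))
    # union-find to track which tiles are already in the same region
    parent = {k: k for k in values}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    pq = [(abs(values[a] - values[b]), a, b) for a, b in edges]
    heapq.heapify(pq)
    merges = []
    while pq:
        d, a, b = heapq.heappop(pq)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra
            merges.append((a, b, d))
    return merges
```

Cutting the merge sequence at any dissimilarity threshold yields one partition of the image, which is exactly the multi-scale behavior the quoted passage describes. Real methods typically also update the dissimilarities as regions grow; this sketch keeps the initial edge weights fixed (i.e., it is single-linkage/minimum-spanning-tree agglomeration).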
“…Image segmentation has been shown to be inherently a multi-scale problem [17]. That is why hierarchical segmentation has become a major trend in image segmentation, and most top-performing segmentation techniques [4] [33] [36] [26] [46] [43] fall into this category: hierarchical segmentation does not output a single partition of the image pixels into sets but instead a single multi-scale structure that aims at capturing relevant objects at all scales. Research on this topic remains active, covering differential area profiles [31] and robust segmentation of high-dimensional data [16], as well as theoretical aspects regarding the concept of partition lattices [39] [37] and optimal partitions in a hierarchy [19] [20] [47].…”
Section: Introduction
confidence: 99%