2012
DOI: 10.1007/978-3-642-33715-4_59

N-tuple Color Segmentation for Multi-view Silhouette Extraction

Abstract: We present a new method to extract multiple segmentations of an object viewed by multiple cameras, given only the camera calibration. We introduce the n-tuple color model to express inter-view consistency when inferring, in each view, the foreground and background color models that permit the final segmentation. A color n-tuple is a set of pixel colors associated with the n projections of a 3D point. The first goal is set as finding the MAP estimate of background/foreground color models based on an arbitrary sample …
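The core object in the abstract, the color n-tuple, can be illustrated with a minimal sketch: project one 3D point into each calibrated view and collect the pixel color at each projection. The camera matrices, images, and function names below are synthetic stand-ins for illustration, not the authors' data or code.

```python
import numpy as np

def project(P, X):
    """Project homogeneous 3D point X (4,) with a 3x4 camera matrix P."""
    x = P @ X
    return x[:2] / x[2]  # pixel coordinates (u, v)

def color_ntuple(point3d, cameras, images):
    """Collect the colors at the projections of one 3D point:
    one color per view, giving an n-tuple for n cameras."""
    X = np.append(point3d, 1.0)  # homogeneous coordinates
    colors = []
    for P, img in zip(cameras, images):
        u, v = project(P, X)
        h, w = img.shape[:2]
        if 0 <= int(v) < h and 0 <= int(u) < w:
            colors.append(img[int(v), int(u)])
        else:
            colors.append(None)  # point falls outside this view
    return colors

# Two toy cameras: identity rotation, the second shifted along x.
K = np.array([[100., 0., 32.], [0., 100., 32.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.], [0.]])])
imgs = [np.full((64, 64, 3), c, dtype=np.uint8) for c in (200, 90)]

ntuple = color_ntuple(np.array([0., 0., 2.]), [P1, P2], imgs)
print([c.tolist() for c in ntuple])  # → [[200, 200, 200], [90, 90, 90]]
```

In the paper's setting such tuples are gathered for many sampled 3D points, and the per-view foreground/background color models are then inferred jointly from them; the sketch only shows how one tuple is formed.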

Cited by 10 publications (19 citation statements)
References 14 publications
“…Additionally, we show in Fig. 2 a comparison with an automatic state-of-the-art multi-view image segmentation method, i.e., the approach presented in [12], on a dataset where the object of interest, the car, is not fully visible in all of the images. It can be seen that, although both algorithms produce acceptable results, our approach is able to correctly classify the car's pixels even behind vegetation.…”
Section: Segmentation on Calibrated Images
confidence: 99%
“…It can be seen that, although both algorithms produce acceptable results, our approach is able to correctly classify the car's pixels even behind vegetation. Moreover, the technique of [12] applied to the Museum dataset (the one used in Fig. 1) cannot produce any usable result because the object of interest (the tree) is not in the center of the images.…”
Section: Segmentation on Calibrated Images
confidence: 99%
“…The MRF is solved once per frame to extract the binary mattes for all views simultaneously. Our solution of all views in a single step contrasts with existing methods that either propagate labels between pairs of independently solved views [5], or build foreground appearance models across all views but then independently solve the segmentation for each view [6]. Additionally, we contribute a propagation strategy in which soft labelling constraints are carried forward in time to influence the MRFs defined over subsequent frames, so enhancing the coherence of the resulting matte sequence.…”
Section: Introduction
confidence: 98%
“…Weaker geometric information is incorporated by Sarim et al., who propagate trimaps between views by matching along the epipolar line [5]. Recently, Djelouah et al. [6] proposed a geometry-free approach, building foreground and background appearance models simultaneously across views using expectation maximisation. However, to perform the segmentation itself, a set of independent graph-cuts (via GrabCut [19]) are used to extract a matte from each view in isolation using those models.…”
Section: Related Work
confidence: 99%
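The contrast drawn in this last statement — shared appearance models built across all views, followed by per-view segmentation against those models — can be sketched very simply. The single-Gaussian color models and synthetic data below are hypothetical simplifications of the richer models in [6], chosen only to make the two-stage structure visible.

```python
import numpy as np

np.random.seed(0)

def fit_gaussian(samples):
    """Fit a single Gaussian color model: mean, inverse covariance, log-det."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples.T) + 1e-6 * np.eye(samples.shape[1])
    return mu, np.linalg.inv(cov), np.log(np.linalg.det(cov))

def log_lik(pixels, model):
    """Per-pixel Gaussian log-likelihood (up to a shared constant)."""
    mu, icov, logdet = model
    d = pixels - mu
    return -0.5 * (np.einsum('ij,jk,ik->i', d, icov, d) + logdet)

# Stage 1: pool seed colors ACROSS views to build shared appearance models.
fg_seeds = np.vstack([np.random.normal(200, 5, (50, 3)),   # view 1 samples
                      np.random.normal(195, 5, (50, 3))])  # view 2 samples
bg_seeds = np.vstack([np.random.normal(60, 5, (50, 3)),
                      np.random.normal(65, 5, (50, 3))])
fg, bg = fit_gaussian(fg_seeds), fit_gaussian(bg_seeds)

# Stage 2: segment each view IN ISOLATION against the shared models.
view_pixels = np.array([[198., 201., 199.], [62., 58., 61.]])
labels = log_lik(view_pixels, fg) > log_lik(view_pixels, bg)
print(labels.tolist())
```

The per-view decision here is a plain maximum-likelihood test; in [6] it is a graph-cut (GrabCut) that adds spatial smoothness, but the key point of the quoted criticism survives in the sketch: stage 2 never looks at the other views again.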