2012
DOI: 10.1007/978-3-642-35740-4_17

iModel: Interactive Co-segmentation for Object of Interest 3D Modeling

Abstract: We present an interactive system to create 3D models of objects of interest in their natural cluttered environments. A typical setting for 3D modeling of an object of interest involves capturing images from multiple views in a multi-camera studio with a mono-color screen or structured lighting. This is a tedious process and cannot be applied to a variety of objects. Moreover, general scene reconstruction algorithms fail to focus on the object of interest to the user. In this paper, we use successful …

Cited by 18 publications (15 citation statements)
References 48 publications (51 reference statements)
“…We also use two standard datasets that have been used in prior automatic works [85]. We make all the datasets used in our works [53,55,56] publicly available 8 .…”
Section: Datasets (mentioning)
confidence: 99%
“…At this stage, the algorithm knows which surface indicated by the user corresponds to the nonplanar object. We treat the scribbles corresponding to the nonplanar object as foreground scribbles and all other scribbles as background scribbles and use ideas from prior work by Kowdle et al [12] to obtain a 3D visual hull of the non-planar object via a 2-class co-segmentation, which is rendered using an independent mesh. The scene co-segmentation also allows us to create a composite texture map for the scene covering up holes due to occlusions as shown in Fig.3(a).…”
Section: Rendering Non-planar Objects (mentioning)
confidence: 99%
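As a minimal illustration of the relabeling step described in the statement above, the sketch below maps a multi-class scribble map to the binary foreground/background scribbles that a 2-class co-segmentation in the spirit of Kowdle et al. [12] would consume. The function name, label conventions, and array layout are assumptions for illustration, not the cited authors' code.

```python
# Illustrative sketch (not the authors' implementation): scribbles on the
# user-indicated nonplanar object become foreground, every other scribble
# becomes background, and the result feeds a standard 2-class co-segmentation.
import numpy as np

UNLABELED, BACKGROUND, FOREGROUND = -1, 0, 1

def to_binary_scribbles(scribble_map, object_label):
    """scribble_map: (H, W) int array, -1 for unscribbled pixels, >=0 for class ids.
    object_label: class id of the nonplanar object indicated by the user."""
    binary = np.full_like(scribble_map, UNLABELED)
    binary[scribble_map == object_label] = FOREGROUND
    binary[(scribble_map != object_label) & (scribble_map != UNLABELED)] = BACKGROUND
    return binary
```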
“…, GMM_p}. Specifically, we use colour features extracted from superpixels [12] on the labeled sites and fit GMMs for the corresponding classes. The data terms for all sites are then defined as the negative log-likelihood of the features given the class model.…”
Section: Scribbles To Scene Segmentation (mentioning)
confidence: 99%
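The data-term construction quoted above lends itself to a short sketch: fit one colour GMM per scribbled class, then score every superpixel site by the negative log-likelihood of its colour feature under each class model. This is an illustrative reconstruction assuming scikit-learn's GaussianMixture and hypothetical variable names (superpixel_features, scribble_labels), not the authors' implementation.

```python
# Sketch of GMM-based data terms for a scribble-driven segmentation energy.
import numpy as np
from sklearn.mixture import GaussianMixture

def compute_data_terms(superpixel_features, scribble_labels, n_classes, n_components=5):
    """superpixel_features: (N, D) colour features, one row per superpixel.
    scribble_labels: (N,) ints in {-1, 0..n_classes-1}; -1 marks unlabeled sites.
    Returns an (N, n_classes) array of data terms (negative log-likelihoods).
    Assumes every class has at least a few scribbled superpixels."""
    data_terms = np.zeros((len(superpixel_features), n_classes))
    for c in range(n_classes):
        labeled = superpixel_features[scribble_labels == c]
        gmm = GaussianMixture(n_components=min(n_components, len(labeled))).fit(labeled)
        # score_samples returns the per-sample log-likelihood under the fitted GMM
        data_terms[:, c] = -gmm.score_samples(superpixel_features)
    return data_terms
```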