2018
DOI: 10.1145/3203197

3D Sketching using Multi-View Deep Volumetric Prediction

Abstract: Figure 1: Our sketch-based modeling system can process as little as a single perspective drawing (a) to predict a volumetric object (b). Users can refine this prediction and complete it with novel parts by providing additional drawings from other viewpoints (c). This iterative sketching workflow allows quick 3D concept exploration and rapid prototyping (d).
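The abstract describes an iterative workflow: a CNN predicts a voxel volume from a single sketch, and drawings from additional viewpoints refine that prediction. The following is a minimal, illustrative PyTorch sketch of this idea, not the authors' released system; the layer sizes, the 32^3 grid resolution, and the averaging-based fusion of views are assumptions made for brevity.

```python
# Illustrative sketch (not the authors' code): a 2D sketch encoder feeding a 3D
# voxel decoder, plus a naive fusion step for predictions from extra viewpoints.
import torch
import torch.nn as nn

class SketchToVoxels(nn.Module):
    """Encode a 1-channel sketch into a latent code, decode to a 32^3 occupancy grid."""
    def __init__(self, latent=256):
        super().__init__()
        self.encoder = nn.Sequential(                                  # 1 x 256 x 256 sketch
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),       # -> 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),      # -> 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),     # -> 32
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),    # -> 16
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, latent), nn.ReLU(),
        )
        self.decoder_fc = nn.Linear(latent, 256 * 4 * 4 * 4)
        self.decoder = nn.Sequential(                                  # 4^3 -> 32^3 voxels
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 8
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 16
            nn.ConvTranspose3d(64, 1, 4, stride=2, padding=1),                # -> 32
        )

    def forward(self, sketch):
        z = self.encoder(sketch)
        vol = self.decoder_fc(z).view(-1, 256, 4, 4, 4)
        return torch.sigmoid(self.decoder(vol))            # per-voxel occupancy in [0, 1]

def fuse(current, new_view_pred):
    """Naive fusion of a new-view prediction into the running volume (simple average;
    an updater network, as in the paper's workflow, would be more involved)."""
    return 0.5 * (current + new_view_pred)

if __name__ == "__main__":
    model = SketchToVoxels()
    first_sketch = torch.rand(1, 1, 256, 256)    # placeholder for a user drawing
    volume = model(first_sketch)                 # initial volumetric prediction
    second_sketch = torch.rand(1, 1, 256, 256)   # drawing from another viewpoint
    volume = fuse(volume, model(second_sketch))  # refine with the extra view
    print(volume.shape)                          # torch.Size([1, 1, 32, 32, 32])
```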

Citations: cited by 116 publications (124 citation statements)
References: 76 publications
“…16: Ablation study on V2VNet. The method by Delanoy et al [37] tends to generate vector fields whose values are inconsistent with those in the previous view and are thus often chaotic (middle column). For each method, we show the corresponding results in the previous and current views.…”
Section: Methods (mentioning)
confidence: 99%
“…Huang et al [36] introduce a deep convolutional neural network (CNN) for mapping 2D sketches to procedural model parameters. Delanoy et al [37] propose an end-to-end CNN trained to generate 3D models from 2D multi-view sketches. In [30], Li et al learn intermediate 2D maps to guide robust 3D modeling from 2D sketches.…”
Section: Related Work (mentioning)
confidence: 99%
“…The CNN is used to predict depth/normal maps using flow field regression and a confidence map, which gives the ambiguity at each point of the input sketch. As with Delanoy et al (2018), the user first draws a single viewpoint of the object, which is reconstructed as a 3D object. The user can then either further modify the surface, by drawing curves over it or by providing depth values at sparse sample points, or reuse the frontal sketch to draw the back view of the object.…”
Section: Interactive Sketches (mentioning)
confidence: 99%
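The statement above summarizes per-pixel depth/normal prediction with a confidence map that captures sketch ambiguity. Below is a hedged, self-contained sketch of how such prediction heads and a confidence-weighted loss could look; the two-head layout, channel counts, and the log-regularized L1 loss are illustrative assumptions rather than the cited paper's exact formulation.

```python
# Hedged illustration: shared sketch features -> depth, normals, and a confidence map,
# with an assumed confidence-weighted L1 loss that down-weights ambiguous regions.
import torch
import torch.nn as nn

class DepthNormalHead(nn.Module):
    """Shared features -> depth (1 ch), normals (3 ch), and a confidence map (1 ch)."""
    def __init__(self, in_ch=64):
        super().__init__()
        self.depth = nn.Conv2d(in_ch, 1, 3, padding=1)
        self.normal = nn.Conv2d(in_ch, 3, 3, padding=1)
        self.conf = nn.Sequential(nn.Conv2d(in_ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, feats):
        return self.depth(feats), self.normal(feats), self.conf(feats)

def confidence_weighted_l1(pred_depth, gt_depth, conf, eps=1e-6):
    """Weight the L1 error by confidence; the -log(conf) term keeps confidence
    from collapsing to zero everywhere (an assumed regularizer)."""
    return (conf * (pred_depth - gt_depth).abs() - 0.1 * torch.log(conf + eps)).mean()

if __name__ == "__main__":
    feats = torch.rand(1, 64, 128, 128)          # features from some sketch encoder
    depth, normal, conf = DepthNormalHead()(feats)
    loss = confidence_weighted_l1(depth, torch.rand_like(depth), conf)
    print(depth.shape, normal.shape, conf.shape, float(loss))
```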
“…The authors provide a visual comparison between the models generated by their system and those generated by similar user interfaces. Delanoy et al (2018) use a CNN to provide an initial 3D reconstruction, which is updated when the user draws another viewpoint. The system can model man-made objects with flat, orthogonal faces, but through the addition and subtraction of primitives it also supports holes and convex protrusions. It was evaluated by two expert users and six other participants with limited drawing/3D modeling skills.…”
Section: Interactive Sketches (mentioning)
confidence: 99%
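The summary above mentions modeling through the addition and subtraction of primitives, e.g. to cut holes. A minimal illustration of such boolean updates on binary voxel occupancy grids follows; the NumPy representation and the 0.5 occupancy threshold are assumptions, not the paper's data structures.

```python
# Hedged illustration of "addition and subtraction" of parts on voxel occupancy grids.
import numpy as np

def add_part(volume, part, threshold=0.5):
    """Union of two occupancy grids: a voxel is filled if either input fills it."""
    return np.maximum(volume, part) >= threshold

def subtract_part(volume, part, threshold=0.5):
    """Carve a part out of the volume, e.g. to create holes or recesses."""
    return (volume >= threshold) & ~(part >= threshold)

if __name__ == "__main__":
    base = np.zeros((32, 32, 32)); base[8:24, 8:24, 8:24] = 1.0   # a solid block
    hole = np.zeros((32, 32, 32)); hole[12:20, 12:20, :] = 1.0    # a through-tunnel
    carved = subtract_part(base, hole)
    print(carved.sum(), "voxels remain after subtracting the tunnel")
```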