2021
DOI: 10.1109/tvcg.2020.2968433

DeepSketchHair: Deep Sketch-Based 3D Hair Modeling

Abstract: We present DeepSketchHair, a deep learning based tool for interactive modeling of 3D hair from 2D sketches. Given a 3D bust model as reference, our sketching system takes as input a user-drawn sketch (consisting of hair contour and a few strokes indicating the hair growing direction within a hair region), and automatically generates a 3D hair model, which matches the input sketch both globally and locally. The key enablers of our system are two carefully designed neural networks, namely, S2ONet, which converts…
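The abstract's first stage turns user strokes into a dense 2D orientation map. The following toy rasterizer is not S2ONet (a learned network); it is a hand-rolled illustration, with hypothetical names throughout, of the kind of stroke-to-orientation-map conversion the network is trained to perform: each stroke segment stamps its unit direction into the grid, and untouched cells are filled from the nearest stamped cell.

```python
import numpy as np

def strokes_to_orientation_map(strokes, size=32):
    """Rasterize stroke polylines ((x, y) points in [0, 1]) into a
    dense 2D orientation map of shape (size, size, 2)."""
    omap = np.zeros((size, size, 2), dtype=np.float32)
    filled = np.zeros((size, size), dtype=bool)
    for pts in strokes:
        pts = np.asarray(pts, dtype=np.float32)
        for p, q in zip(pts[:-1], pts[1:]):
            d = q - p
            n = np.linalg.norm(d)
            if n == 0:
                continue
            d = d / n
            # sample the segment densely so thin strokes still rasterize
            for t in np.linspace(0.0, 1.0, max(int(n * size * 2), 2)):
                x, y = p + t * (q - p)
                i = min(int(y * size), size - 1)
                j = min(int(x * size), size - 1)
                omap[i, j] = d
                filled[i, j] = True
    # brute-force nearest-neighbour fill for cells no stroke touched
    ii, jj = np.nonzero(filled)
    if len(ii):
        src = np.stack([ii, jj], axis=1)
        gi, gj = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
        grid = np.stack([gi.ravel(), gj.ravel()], axis=1)
        dist = ((grid[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        near = dist.argmin(1)
        omap = omap[ii[near], jj[near]].reshape(size, size, 2)
    return omap
```

A single horizontal stroke yields a map that points rightward everywhere once the fill propagates the stroke direction to empty cells.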

Cited by 22 publications (13 citation statements)
References 54 publications
“…Data. Similar to previous research [6,31,36], we collected 653 3D hair-strand models and aligned them to the same bust model within a bounding box. In addition, we also augment the data by horizontal flipping, scaling, and rotation.…”
Section: IRHairNet
Mentioning confidence: 99%
“…Second, the existing methods for inferring 3D orientation fields are either time-consuming, due to a complex searching-and-matching process over a large hair dataset [6,11], or prone to over-smoothing, due to using deep networks to directly perform image-to-voxel inference [36]. Third, the conventional hair growth algorithms [31,39] that extract strands from the estimated 3D orientation field are inefficient and not conducive to one-shot hair modeling. Although Zhou et al [40] have attempted to skip the hair growth procedure by directly regressing 3D hair strands, their reconstruction results are generally unsatisfactory (see Sec.…”
Section: Introduction
Mentioning confidence: 99%
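The "conventional hair growth" step the passage contrasts with one-shot modeling traces strands through the estimated 3D orientation field. A minimal Euler-integration version is sketched below; the names are illustrative, and the actual algorithms in [31,39] are more elaborate:

```python
import numpy as np

def grow_strand(root, field, step=0.05, n_steps=50):
    """Grow one strand from a root point by stepping through a
    D x D x D x 3 orientation field defined over the unit cube."""
    D = field.shape[0]
    p = np.asarray(root, dtype=np.float32)
    strand = [p.copy()]
    for _ in range(n_steps):
        idx = np.clip((p * D).astype(int), 0, D - 1)
        d = field[idx[0], idx[1], idx[2]]
        n = np.linalg.norm(d)
        if n < 1e-6:
            break  # zero orientation: left the hair volume
        p = p + step * d / n
        if np.any(p < 0) or np.any(p > 1):
            break  # stepped outside the unit cube
        strand.append(p.copy())
    return np.stack(strand)
```

In a constant field pointing along +z, five steps of length 0.05 from z = 0.1 produce six points ending near z = 0.35.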
“…Hair modeling has been extensively explored in the computer graphics community. Most previous works have focused on reconstructing 3D hair models from either real images [Jakob et al. 2009; Wei et al. 2005; Zhang et al. 2018] or user-specified sketches [Fu et al. 2007; Hu et al. 2015; Mao et al. 2004; Shen et al. 2020]. For example, given a single-view hair image for hair modeling, Chai et al. [2013] utilize a few strokes to guide hair directions, reducing the ambiguity of growing hair strands from the hair image.…”
Section: Hair Modeling and Rendering
Mentioning confidence: 99%
“…Due to the requirement of 3D inputs, it is difficult to apply their technique to our problem. Shen et al [2020] introduce a deep learning based framework for strand-level hair modeling based on 2D sketches to produce plausible 3D hairstyles. Although their method can generate high-quality results with realistic appearance and layering effects to some extent, mainly due to the use of 2D and 3D orientation fields as intermediate hair representations, their method is not capable of modeling hairstyles with complex structures like braided hairstyles.…”
Section: Hair Modeling and Rendering
Mentioning confidence: 99%
“…Yang et al. further extended Hair-GANs to capture dynamic hair geometries from monocular videos by designing a HairSpatNet and a HairTempNet. DeepSketchHair [16] interactively reconstructs hair geometries by mapping the strokes and contours to a dense 2D orientation map and then generating a hair VVF. Unfortunately, these VVF-based methods become memory-expensive as volumetric resolution increases, and time-consuming because they involve 3D convolutions.…”
Section: Related Work
Mentioning confidence: 99%
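The memory objection to VVF-based methods follows from cubic scaling: a dense D x D x D field of 3-component float32 vectors costs D^3 * 3 * 4 bytes, so doubling the resolution multiplies memory by eight. A quick check (the helper name is illustrative):

```python
def vvf_memory_mb(resolution, channels=3, bytes_per_float=4):
    """Memory in MiB for a dense cubic volumetric vector field;
    cost grows cubically with resolution."""
    return resolution ** 3 * channels * bytes_per_float / (1024 ** 2)
```

A 128-cube field needs 24 MiB, and a 256-cube field already needs 192 MiB, before counting the activations of any 3D convolutional network operating on it.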