2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.98

A Generative Model of People in Clothing

Figure 1 (caption): Random examples of people generated with our model. For each row, sampling is conditioned on the silhouette displayed on the left. Our proposed framework also supports unconditioned sampling as well as conditioning on local appearance cues, such as color.

Abstract: We present the first image-based generative model of people in clothing for the full body. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generati…
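
The abstract describes sampling that is conditioned on a body silhouette. As a rough illustration of that general idea only (not the paper's actual architecture), the sketch below concatenates a one-channel silhouette mask with a spatially broadcast latent code and decodes the result with a small convolutional generator; the class name, layer sizes, and image resolution are assumptions made for the example.

import torch
import torch.nn as nn

class SilhouetteConditionedGenerator(nn.Module):
    # Toy generator: silhouette mask + latent code -> RGB image.
    # Layer sizes and names are illustrative and do not follow the paper.
    def __init__(self, z_dim=64):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            nn.Conv2d(1 + z_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB output scaled to [-1, 1]
        )

    def forward(self, silhouette, z):
        # silhouette: (B, 1, H, W) binary mask; z: (B, z_dim) latent code
        b, _, h, w = silhouette.shape
        z_map = z.view(b, self.z_dim, 1, 1).expand(b, self.z_dim, h, w)
        return self.net(torch.cat([silhouette, z_map], dim=1))

# Example: sample one 128x128 image conditioned on a random mask.
gen = SilhouetteConditionedGenerator()
mask = (torch.rand(1, 1, 128, 128) > 0.5).float()
image = gen(mask, torch.randn(1, 64))
print(image.shape)  # torch.Size([1, 3, 128, 128])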


Cited by 234 publications (207 citation statements)
References 51 publications

“…Consequently, it is not clear - despite excellent performance on standard benchmarks - how methods [59,25,37,26] generalize to in-the-wild images. To add variation, some methods resort to generating synthetic images [46,64,23], but it is difficult to approximate fully realistic images with sufficient variance. Similar to model-based methods, learning approaches have benefited from the advent of robust 2D pose methods: by matching 2D detections to a 3D pose database [8,66], by regressing pose from 2D joint distance matrices [35], by exploiting pose and geometric priors for lifting [69,1,51,19,32,70,47], or simply by training a feed-forward network to directly predict 3D pose from 2D joints [30].…”
Section: Related Work (mentioning)
confidence: 99%
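
The last approach cited above, a feed-forward network that regresses 3D pose directly from detected 2D joints [30], can be summarized with a minimal sketch. The simple MLP below assumes a 17-joint skeleton; the class name, layer widths, and joint count are illustrative and not taken from the cited work.

import torch
import torch.nn as nn

NUM_JOINTS = 17  # assumption; depends on the skeleton definition

class LiftingMLP(nn.Module):
    # Minimal 2D-to-3D pose lifting network (illustrative, not the cited architecture).
    def __init__(self, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_JOINTS * 2, hidden),  # input: flattened (x, y) joint detections
            nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, NUM_JOINTS * 3),  # output: flattened (x, y, z) joint positions
        )

    def forward(self, joints_2d):
        # joints_2d: (B, NUM_JOINTS, 2) image-plane detections
        b = joints_2d.shape[0]
        return self.net(joints_2d.reshape(b, -1)).reshape(b, NUM_JOINTS, 3)

# Example: lift a batch of detected 2D skeletons to 3D.
model = LiftingMLP()
pred_3d = model(torch.randn(8, NUM_JOINTS, 2))
print(pred_3d.shape)  # torch.Size([8, 17, 3])
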
“…Generating human-centric images is an important sub-area of image synthesis. Example tasks range from generating the full human body in clothing [18] to generating human action sequences [3]. Ma et al. [23] were the first to approach the task of human pose transfer, which aims to generate an image of a person in a target pose given a reference image of that person.…”
Section: Related Work (mentioning)
confidence: 99%
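
To make the pose-transfer setup framed in the preceding statement concrete, a common way to condition a generator on a target pose is to stack per-joint heatmaps with the reference image. The sketch below is a generic illustration of that input encoding, assuming 17 joints and 128x128 images; it is not the pipeline of the cited method.

import torch

def joint_heatmaps(joints_xy, height, width, sigma=4.0):
    # Render one Gaussian heatmap per 2D joint (illustrative pose encoding).
    ys = torch.arange(height).view(1, height, 1).float()
    xs = torch.arange(width).view(1, 1, width).float()
    jx = joints_xy[:, 0].view(-1, 1, 1)
    jy = joints_xy[:, 1].view(-1, 1, 1)
    d2 = (xs - jx) ** 2 + (ys - jy) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))  # (num_joints, H, W)

# Example: build a generator input from one reference image and a target pose.
ref_image = torch.rand(3, 128, 128)                    # reference appearance image
target_joints = torch.rand(17, 2) * 128                # target 2D pose (assumed 17 joints)
pose_maps = joint_heatmaps(target_joints, 128, 128)
gen_input = torch.cat([ref_image, pose_maps], dim=0)   # (3 + 17, 128, 128)
print(gen_input.shape)  # torch.Size([20, 128, 128])
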
“…Since current statistical models cannot represent clothing, most works [7,26,40,68,48,32,22,67,31,50,8,33,44,46] are restricted to inferring body shape alone. Model fits have been used to virtually dress people and to manipulate their shape and clothing in images [50,67,62,36]. None of these approaches recovers 3D clothing.…”
Section: Related Work (mentioning)
confidence: 99%