2021
DOI: 10.1007/978-3-030-92659-5_23
AttrLostGAN: Attribute Controlled Image Synthesis from Reconfigurable Layout and Style

Cited by 13 publications (9 citation statements)
References 29 publications
“…Recent developments focused on better instance representations [38], context-awareness [9], and improving the mask prediction of overlapping and nearby objects [24]. Recently, [27,6] enabled more explicit appearance control of individual objects by conditioning on attributes. However, the set of attributes is limited and lacks the ability to model complex interactions between objects.…”
Section: Related Work
confidence: 99%
“…Our code is based on the official repositories of [36,37,6], and we use the pre-trained BERT model from [41] as our text encoder. For training, we use the Adam [17] optimizer with β₁ = 0.0 and β₂ = 0.999.…”
Section: Training Objectives and Implementation Details
confidence: 99%
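The optimizer configuration quoted above (Adam with β₁ = 0.0 and β₂ = 0.999) can be sketched in PyTorch as follows. The model and learning rate here are placeholder assumptions for illustration, not values taken from the cited paper:

```python
import torch

# Placeholder network standing in for a GAN generator/discriminator.
model = torch.nn.Linear(8, 8)

# Adam with beta1 = 0.0 and beta2 = 0.999, as stated in the citation
# above; beta1 = 0 disables first-moment momentum, a common choice in
# GAN training. The learning rate is a hypothetical example value.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.0, 0.999))
```

Setting β₁ to zero makes Adam behave like RMSProp with bias correction, which is often reported to stabilize adversarial training.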
“…In the image domain, both coarse-(e.g., sentence [68]) and fine-level (e.g., layout [70] and instance attribute [17]) control signals have been explored. The progress on the video side, on the other hand, has generally been more modest, in part due to an added challenge of synthesizing temporally coherent content.…”
Section: Introduction
confidence: 99%