2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00394

Generative Interventions for Causal Learning

Cited by 40 publications (28 citation statements). References 14 publications.
“…In most cases, we find that GAN transformations that modify style tend to be more beneficial than those that modify poses. This is in line with previous works that note the benefits of style-based training augmentations for image classification [41,14] and related positional sensitivities of classifiers [71,3,16]. In the more difficult Imagenet classification problem, we found performance degrades substantially during image projection, and therefore GAN perturbations offer limited benefits.…”
Section: Discussion (supporting)
confidence: 91%
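
As a rough, hypothetical illustration of the style-versus-pose distinction drawn in the statement above (not the cited papers' actual code): mixing StyleGAN-style W+ latents only at the later layers perturbs texture and colour ("style") while leaving the earlier, pose-controlling layers untouched. The 18x512 latent shape, the layer split at index 8, and the commented-out generator.synthesis call are assumptions for illustration.

import torch

def style_mix(w_content, w_style, style_layers):
    # Keep w_content at the early (structural) layers and take w_style
    # at the given later (style) layers of a W+ latent.
    w_aug = w_content.clone()
    w_aug[style_layers] = w_style[style_layers]
    return w_aug

# Toy StyleGAN2-like dimensions: 18 layers x 512-dim W+ latent.
num_layers, dim = 18, 512
w_content = torch.randn(num_layers, dim)
w_style = torch.randn(num_layers, dim)

# Swapping only the later layers changes texture/colour ("style") while the
# earlier layers, which control pose and layout, stay fixed.
w_aug = style_mix(w_content, w_style, style_layers=list(range(8, num_layers)))
# x_aug = generator.synthesis(w_aug.unsqueeze(0))  # hypothetical pre-trained generator call
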
“…Here, we choose the StyleGAN2 generator [31], which focus on modeling a single object category (such as faces, cars, and cats), rather than class-conditional models like BigGAN [7] on 1000 ImageNet categories [53]. Due to the higher variation in multi-class data, image projection (a critical part of generating deep augmentations) is still not sufficiently fast and reliable across the full dataset [26,43,41]. Even within a single-class StyleGAN2 generator, some aspects of the original image cannot be accurately recovered, such as fine textures, ornate backgrounds, or non-canonical poses (shown in supplementary material), impacting downstream classification.…”
Section: Discussion (mentioning)
confidence: 99%
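
A minimal sketch of what "image projection" refers to in the statement above: optimising a latent code so that the generator's output reconstructs a given image. The toy linear generator, MSE objective, and hyper-parameters below are placeholders; an actual pipeline would project into a pre-trained StyleGAN2 synthesis network with perceptual losses, which is the step the quote reports as slow and unreliable at ImageNet scale.

import torch

# Stand-in generator; a real setup would load a pre-trained StyleGAN2 synthesis network.
class ToyGenerator(torch.nn.Module):
    def __init__(self, latent_dim=512, out_dim=3 * 32 * 32):
        super().__init__()
        self.net = torch.nn.Linear(latent_dim, out_dim)

    def forward(self, w):
        return self.net(w)

def project(generator, target, latent_dim=512, steps=200, lr=0.05):
    # Optimise a latent code w so that generator(w) reconstructs `target`.
    w = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(w), target)
        loss.backward()
        opt.step()
    return w.detach()

G = ToyGenerator()
target = torch.randn(1, 3 * 32 * 32)   # placeholder for a real image
w_hat = project(G, target)             # latent whose reconstruction G(w_hat) approximates target
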
“…Pre-Trained GANs for Vision. Prior work has explored the use of GANs [27,65] in vision tasks such as classification [10,12,54,71,81], segmentation [55,76,79,87] and representation learning [7,20,21,23,35]. Likewise, we share the goal of leveraging the power of pre-trained deep generative models for vision tasks.…”
Section: Related Work (mentioning)
confidence: 99%
“…causes). When combined with statistical modeling (i.e., regression), one can estimate the functional relationship between interventions on causes and effects [Pearl, 2009a, Shalizi, 2013, Shi et al., 2019, Mao et al., 2020].…”
Section: Introduction (mentioning)
confidence: 99%
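
A toy numerical illustration of that point (not the cited works' models): in a simple structural equation model with an unobserved confounder, regressing the effect on observed values of the cause is biased, whereas regressing it on intervened values recovers the causal coefficient. All coefficients and the data-generating process below are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

def simulate(n, do_x=None):
    # Toy structural model: u -> x, (x, u) -> y.
    # Passing do_x severs the u -> x edge, i.e. an intervention on x.
    u = rng.normal(size=n)                        # unobserved confounder
    x = do_x if do_x is not None else 0.8 * u + rng.normal(size=n)
    y = 2.0 * x + 1.5 * u + rng.normal(size=n)    # true causal effect of x on y is 2.0
    return x, y

# Observational regression is biased by the confounder u.
x_obs, y_obs = simulate(10_000)
slope_obs = np.polyfit(x_obs, y_obs, 1)[0]

# Regressing y on intervened values of x recovers the causal effect.
x_int = rng.normal(size=10_000)
_, y_int = simulate(10_000, do_x=x_int)
slope_int = np.polyfit(x_int, y_int, 1)[0]

print(f"observational slope ~ {slope_obs:.2f}, interventional slope ~ {slope_int:.2f} (true effect 2.0)")

Run as written, the observational slope lands near 2.7 while the interventional slope stays close to the true effect of 2.0, which is the gap between fitting correlations and fitting responses to interventions that the quoted passage points to.
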