2021
DOI: 10.48550/arxiv.2110.00473
Preprint

Score-Based Generative Classifiers

Abstract: The tremendous success of generative models in recent years raises the question whether they can also be used to perform classification. Generative models have been used as adversarially robust classifiers on simple datasets such as MNIST, but this robustness has not been observed on more complex datasets like CIFAR-10. Additionally, on natural image datasets, previous results have suggested a trade-off between the likelihood of the data and classification accuracy. In this work, we investigate score-based gen…

Cited by 6 publications
(8 citation statements)
References 11 publications
“…This means that the local behavior of the discriminative classifier is determined by the entire set of classes, while our approach is purely local. In contrast, in score-based generative classification [74,95], each class has a generator that provides the closest member for a given input. The class is then determined by the best match.…”
Section: Discussion
confidence: 99%
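The best-match rule quoted above (each class's generator scores the input, and the class whose model best explains it wins) can be sketched as follows. This is a minimal illustration, not the cited method: the per-class Gaussian log-likelihoods and the `generative_classify` helper are hypothetical stand-ins for trained class-conditional generative models.

```python
import numpy as np

def generative_classify(x, class_log_likelihoods):
    """Return the index of the class whose generative model
    assigns x the highest log-likelihood (the 'best match')."""
    scores = [ll(x) for ll in class_log_likelihoods]
    return int(np.argmax(scores))

# Toy stand-ins for per-class generators: isotropic Gaussians
# centred at 0 (class 0) and at 3 (class 1).
lls = [
    lambda x: -0.5 * np.sum((x - 0.0) ** 2),
    lambda x: -0.5 * np.sum((x - 3.0) ** 2),
]

print(generative_classify(np.array([2.9, 3.1]), lls))  # → 1
```

In a real score-based generative classifier, each `ll` would be a likelihood (or score-derived density estimate) from a model trained on that class's data; the argmax decision rule is unchanged.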
“…Since each generator is described by a single manifold, the non-locality is avoided. So far, generative classifiers are not more robust [95,20] than discriminative models. Finally, studying a relatively small number of objects from different angles is more consistent with human development than learning on thousands of individual images.…”
Section: Discussion
confidence: 99%
“…Diffusion models have become a powerful family of deep generative models [58], [59], with record-breaking performance in many areas [60], including image generation [61], [30], [62], [63], image inpainting [64], image super-resolution [65], [66], [67], and image-to-image translation [68], [69], [70]. In addition, the feature representations learned by diffusion models have also been found to be very useful in discriminative tasks, including image classification [71], image segmentation [72], [73], and object detection [74]. A diffusion model is a deep generative model with two processes, namely the forward process and the reverse process [75].…”
Section: B Diffusion Models
confidence: 99%
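The forward process mentioned in the quote above gradually noises the data. A minimal sketch, assuming the standard closed-form DDPM-style noising with an illustrative linear beta schedule (the schedule values and function names are assumptions, not taken from the cited works):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t from the forward (noising) process in one shot,
    using x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta)."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)  # illustrative linear schedule
x0 = np.ones(4)                        # toy "clean" sample
xT = forward_diffuse(x0, t=999, betas=betas, rng=rng)
```

At small `t` the output stays close to `x0`; at the final step it is nearly pure Gaussian noise. The reverse process, which a trained network approximates, runs this chain backwards to generate data.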
“…Pacchiardi and Dutta [7] used score matching to train a neural conditional exponential family to approximate the ABC likelihood, applying it to MCMC sampling for intractable distributions and to large-dimensional time-series models. Generative models have been used as adversarially robust classifiers for complex datasets, particularly in the image classification domain [29]. Zimmermann et al [29] investigated score-based generative classification of natural images and found only a marginal advantage over discriminative classifiers in terms of adversarial robustness, though it provides a different approach to classification.…”
Section: Future Work
confidence: 99%