2016
DOI: 10.48550/arxiv.1610.03483
Preprint

Learning in Implicit Generative Models

Abstract: Generative adversarial networks (GANs) provide an algorithmic framework for constructing generative models with several appealing properties: they do not require a likelihood function to be specified, only a generating procedure; they provide samples that are sharp and compelling; and they allow us to harness our knowledge of building highly accurate neural network classifiers. Here, we develop our understanding of GANs with the aim of forming a rich view of this growing area of machine learning - to build conne…

Cited by 139 publications (191 citation statements)
References 37 publications
“…As we will show, many generic implicit generative modelling algorithms - i.e. those which are not designed to exploit a particular structure in the target distribution class - are of this type, including those for training quantum circuit Born machines [LW18a, CMDK20, ML17]. As such, hardness results in the SQ model apply to many implicit generative modelling algorithms of practical interest, and in particular to those which are often used for the concept class of interest in this work.…”
Section: Overview of This Work
confidence: 99%
“…Suppose, in addition, that one has access to an exact binary classifier d*(x), which outputs the probability that the sample x originated from q_θ(x). Then, assuming uniform prior probabilities for the two classes, it is straightforward to show via Bayes' theorem that (see Section II B in [50])…”
Section: Adversarial Generative Modelling with f-Divergences
confidence: 99%
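The identity this excerpt points to is standard and can be reconstructed as follows. Writing p(x) for the data distribution and q_θ(x) for the model distribution, with equal priors on the two classes (label y = 1 for samples from q_θ, y = 0 for samples from p, the uniform-prior assumption stated in the excerpt), Bayes' theorem gives

  d^*(x) = P(y = 1 \mid x)
         = \frac{q_\theta(x)\,P(y = 1)}{q_\theta(x)\,P(y = 1) + p(x)\,P(y = 0)}
         = \frac{q_\theta(x)}{q_\theta(x) + p(x)},

so the density ratio is recovered from the exact classifier as

  \frac{q_\theta(x)}{p(x)} = \frac{d^*(x)}{1 - d^*(x)}.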
“…However, under the assumption that we can efficiently sample from both distributions, we can train a classifier d_φ(x), parameterised by φ, to distinguish between the two distributions. One can use any proper scoring rule to train the classifier [50]. A typical choice is the negative cross entropy, given by…”
Section: Adversarial Generative Modelling with f-Divergences
confidence: 99%
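As an illustrative sketch only (not code from the cited papers), the classifier-based ratio estimation described in these two excerpts can be written in a few lines of PyTorch. The samplers sample_p and sample_q below are hypothetical stand-ins for the data distribution p(x) and the model distribution q_θ(x), and binary cross entropy plays the role of the proper scoring rule:

# Minimal sketch: train a classifier d_phi to distinguish model samples from
# data samples, then read off the density ratio q_theta(x)/p(x) as d/(1-d).
# Both samplers are hypothetical stand-ins chosen for illustration.
import torch
import torch.nn as nn

def sample_p(n):  # "data" distribution p(x): standard normal (assumption)
    return torch.randn(n, 1)

def sample_q(n):  # "model" distribution q_theta(x): shifted normal (assumption)
    return torch.randn(n, 1) + 1.0

# d_phi: small MLP emitting P(y = 1 | x), the probability x came from q_theta
d_phi = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt = torch.optim.Adam(d_phi.parameters(), lr=1e-3)
bce = nn.BCELoss()  # negative cross entropy, the scoring rule named in the excerpt

for step in range(2000):
    xp, xq = sample_p(128), sample_q(128)
    x = torch.cat([xp, xq])
    # labels: 0 for data samples, 1 for model samples
    y = torch.cat([torch.zeros(128, 1), torch.ones(128, 1)])
    loss = bce(d_phi(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Density-ratio estimate at a test point: q_theta(x)/p(x) ≈ d/(1 - d)
with torch.no_grad():
    d = d_phi(torch.zeros(1, 1))
    print((d / (1 - d)).item())

For the two Gaussians assumed above, the printed value should approach the true ratio q_θ(0)/p(0) = exp(-1/2) ≈ 0.61 as the classifier converges.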
“…Compared with VI that uses explicit density models, PIS uses an implicit model and has the advantage of free-form network design. The explicit density models have weaker expressive power and flexibility compared with implicit models, both theoretically and empirically (Cornish et al., 2020; Chen et al., 2019; Kingma & Welling, 2013; Mohamed & Lakshminarayanan, 2016). Compared with MCMC, PIS is more efficient and is able to generate high-quality samples with fewer steps.…”
Section: Introduction
confidence: 99%