2022
DOI: 10.48550/arxiv.2205.13098
Preprint

Optimal Neural Network Approximation of Wasserstein Gradient Direction via Convex Optimization

Abstract: The computation of the Wasserstein gradient direction is essential for posterior sampling problems and scientific computing. The approximation of the Wasserstein gradient with finite samples requires solving a variational problem. We study the variational problem in the family of two-layer networks with squared-ReLU activations, for which we derive a semi-definite programming (SDP) relaxation. This SDP can be viewed as an approximation of the Wasserstein gradient in a broader function family including two-layer…
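The abstract pins down two concrete ingredients: a two-layer squared-ReLU network family and a finite-sample variational problem for the Wasserstein gradient direction, which for KL-type objectives amounts to estimating a score from samples. The sketch below is only a minimal stand-in for that variational step, a generic implicit score-matching objective trained by gradient descent; it is not the paper's SDP relaxation, and the class name `ScoreNet`, the width, and the training loop are illustrative assumptions.

```python
# Minimal, hypothetical sketch (not the paper's SDP relaxation): estimate the score
# grad log rho from samples with a two-layer squared-ReLU network, trained on the
# standard implicit (Hyvarinen) score-matching objective by plain gradient descent.
import torch

class ScoreNet(torch.nn.Module):
    """Two-layer network s(x) = W2 @ relu(W1 x + b1)^2 with squared-ReLU activation."""
    def __init__(self, dim, width=64):
        super().__init__()
        self.lin1 = torch.nn.Linear(dim, width)
        self.lin2 = torch.nn.Linear(width, dim, bias=False)

    def forward(self, x):
        return self.lin2(torch.relu(self.lin1(x)) ** 2)

def ism_loss(model, x):
    """Implicit score matching: E[ 0.5 * ||s(x)||^2 + div s(x) ] over the sample batch."""
    x = x.requires_grad_(True)
    s = model(x)
    div = 0.0
    for i in range(x.shape[1]):  # exact divergence via autograd; fine in low dimension
        div = div + torch.autograd.grad(s[:, i].sum(), x, create_graph=True)[0][:, i]
    return (0.5 * s.pow(2).sum(dim=1) + div).mean()

# Toy usage: samples from a standard 2-D Gaussian, whose true score is -x.
torch.manual_seed(0)
samples = torch.randn(512, 2)
model = ScoreNet(dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = ism_loss(model, samples.clone())
    loss.backward()
    opt.step()
```

On this toy data the learned s(x) should approach −x; for posterior sampling, the quantity of interest would be the difference between such an estimated score and ∇ log π, i.e. the Wasserstein gradient direction of the KL divergence.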

Cited by 1 publication (1 citation statement)
References 24 publications
“…When h = √2 I we recover (3.33). When this condition does not hold, so that D(θ, ρ_t) ≠ I, the equation requires knowledge of the score function ∇_θ log ρ_t(θ_t), and particle methods to approximate (3.38) will require estimates of the score; see [96,132,119,17]. See also [120] and references therein for a discussion of score estimation.…”
Section: Mean-field Dynamics (mentioning)
confidence: 99%
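The quoted passage makes a general point: once the mean-field drift involves ∇_θ log ρ_t, a particle discretization has to plug in a score estimate built from the current particles. Below is a minimal sketch of that pattern under stated assumptions; the quadratic target potential, the crude Gaussian moment-matching score estimator, and the Euler step are illustrative stand-ins and do not reproduce the survey's equations (3.33) or (3.38).

```python
# Hypothetical sketch of the pattern the quoted passage describes: a deterministic
# particle method whose drift contains grad log rho_t, so every step needs a score
# estimate built from the current particles. The quadratic potential and the crude
# Gaussian moment-matching estimator are illustrative stand-ins, not equation (3.38).
import torch

def grad_V(x):
    # Target potential V(x) = ||x||^2 / 2, i.e. a standard Gaussian target pi = N(0, I).
    return x

def gaussian_score_estimate(particles):
    """Crude score estimate: fit a Gaussian to the particles, return x -> -Sigma^{-1}(x - mu)."""
    mu = particles.mean(dim=0)
    centered = particles - mu
    cov = centered.T @ centered / (particles.shape[0] - 1)
    prec = torch.linalg.inv(cov + 1e-6 * torch.eye(particles.shape[1]))
    return lambda x: -(x - mu) @ prec

# Explicit Euler for d(theta)/dt = -grad V(theta) - grad log rho_t(theta), the
# deterministic (probability-flow) counterpart of overdamped Langevin dynamics.
particles = torch.randn(512, 2) + 3.0   # initialize away from the target
dt = 1e-2
for _ in range(200):
    score = gaussian_score_estimate(particles)
    particles = particles - dt * (grad_V(particles) + score(particles))
```

In practice the moment-matching estimator would be replaced by a learned one, e.g. a network trained on a score-matching objective as in the sketch after the abstract; that estimation step is what the quoted passage points to via [96,132,119,17].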