2021
DOI: 10.48550/arxiv.2106.00774
Preprint

Optimizing Functionals on the Space of Probabilities with Input Convex Neural Networks

Abstract: Gradient flows are a powerful tool for optimizing functionals in general metric spaces, including the space of probabilities endowed with the Wasserstein metric. A typical approach to solving this optimization problem relies on its connection to the dynamic formulation of optimal transport and the celebrated Jordan-Kinderlehrer-Otto (JKO) scheme. However, this formulation involves optimization over convex functions, which is challenging, especially in high dimensions. In this work, we propose an approach that …
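To make the scheme concrete, below is a minimal sketch of a single JKO step in which the transport map is parameterized as the gradient of an input convex potential. The `ICNN` module, the sampler for the current measure, the sample-based functional estimate `F_hat`, and all hyperparameters are hypothetical illustrations, not the paper's actual implementation.

```python
# A minimal sketch of one JKO step, assuming a hypothetical ICNN potential
# `psi`, a sampler for the current measure rho_k, and a Monte Carlo
# estimate `F_hat` of the target functional -- not the paper's exact code.
import torch

def jko_step(psi, sampler, F_hat, tau=0.1, n_iters=500, lr=1e-3):
    """Approximately solve: min_T  F(T#rho_k) + E||T(x) - x||^2 / (2*tau),
    with T = grad(psi) for an input convex potential psi."""
    opt = torch.optim.Adam(psi.parameters(), lr=lr)
    for _ in range(n_iters):
        x = sampler()                  # batch of samples from rho_k
        x.requires_grad_(True)
        # Brenier-style transport map: gradient of the convex potential
        y = torch.autograd.grad(psi(x).sum(), x, create_graph=True)[0]
        w2_proxy = ((y - x) ** 2).sum(dim=1).mean()   # squared-distance term
        loss = F_hat(y) + w2_proxy / (2 * tau)        # JKO objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return psi                         # grad(psi) pushes rho_k to rho_{k+1}
```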

Cited by 6 publications (14 citation statements) · References 26 publications
“…The problem is a generalization of strong OT (2), weak OT (3), and regularized OT (5); we call problem (6) a general OT problem. Surprisingly, regularized OT (5) can express the same problem: it suffices to set c(x, y) ≡ 0, γ = 1, and R(π) = F(π) to obtain general OT (6) from regularized OT (5). That is, regularized OT (5) and general OT (6) can be viewed as equivalent formulations.…”
Section: Background On Optimal Transport
confidence: 99%
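To spell out the substitution described in this quote (the precise statements of (5) and (6) live in the citing paper, so the forms below are an assumed reconstruction):

```latex
% Assumed forms: regularized OT (5) on the left, general OT (6) on the right.
\inf_{\pi \in \Pi(\mu,\nu)} \Big[ \int c(x,y)\, \mathrm{d}\pi(x,y) + \gamma\, R(\pi) \Big]
\;\xrightarrow{\; c \equiv 0,\ \gamma = 1,\ R = \mathcal{F} \;}\;
\inf_{\pi \in \Pi(\mu,\nu)} \mathcal{F}(\pi)
```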
“…10] consider formulations analogous to (10) that are restricted to convex potentials; they use Input Convex Neural Networks [6] to approximate them. These nets are popular in OT [41,56,19,11,5], but OT algorithms based on them are outperformed [40] by the unrestricted formulations mentioned above. In [26,66,17,20], the authors propose methods for f-divergence regularized functionals (5).…”
Section: Related Work
confidence: 99%
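For context, here is a minimal sketch of an Input Convex Neural Network in the spirit of Amos et al. [6]; the layer widths, softplus activations, and runtime weight clamping are illustrative choices rather than the construction used in any of the cited works.

```python
# A minimal ICNN sketch: convexity in x follows from non-negative weights
# on the hidden z-path and convex, non-decreasing activations (softplus).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    def __init__(self, dim, hidden=64, n_layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(n_layers)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                 for _ in range(n_layers - 1)])
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for lin_x, lin_z in zip(self.Wx[1:], self.Wz):
            # clamping keeps the z-path weights non-negative, preserving convexity
            z = F.softplus(lin_x(x) + F.linear(z, lin_z.weight.clamp(min=0)))
        # non-negative output weights keep psi(x) convex in x
        return F.linear(z, self.out.weight.clamp(min=0), self.out.bias)
```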
“…Optimal Transport: Makkuva et al [10] explored using Input Convex Neural Networks for learning transportation maps, while Alvarez-Melis et al [25] and Mokrov et al [26] used ICNNs for the Kantorovich dual, specifically in the setting of Wasserstein gradient flows [27]. Fan et al [28] also attempted to solve the Wasserstein barycenter [29] problem using ICNNs.…”
Section: Related Work
confidence: 99%
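The convex-potential reformulation of the Kantorovich dual for the quadratic cost, which these ICNN approaches parameterize, is the standard identity

```latex
\tfrac{1}{2} W_2^2(\mu,\nu)
= \tfrac{1}{2}\,\mathbb{E}_{\mu}\|x\|^2 + \tfrac{1}{2}\,\mathbb{E}_{\nu}\|y\|^2
- \inf_{f\ \mathrm{convex}} \big[ \mathbb{E}_{\mu} f(x) + \mathbb{E}_{\nu} f^{*}(y) \big]
```

where f* is the convex conjugate of f; restricting f to an ICNN turns the infimum into a tractable optimization over network weights.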
“…The time and spatial discretizations of Wasserstein gradient flows are extensively studied in the literature (Jordan et al., 1998; Junge et al., 2017; Carrillo et al., 2021a,b; Bonet et al., 2021; Liutkus et al., 2019; Frogner & Poggio, 2020). Recently, neural networks have been applied to solving or approximating Wasserstein gradient flows (Mokrov et al., 2021; Lin et al., 2021b,a; Alvarez-Melis et al., 2021; Bunne et al., 2021; Hwang et al., 2021; Fan et al., 2021). For sampling algorithms, di Langosco et al. (2021) learn the transportation function by solving an unregularized variational problem in a family of vector-output deep neural networks.…”
Section: Introduction
confidence: 99%