Proximal Optimal Transport Modeling of Population Dynamics
2021, preprint. DOI: 10.48550/arxiv.2106.06345

Cited by 6 publications (9 citation statements, all classified as mentioning); references 0 publications. The citation statements below are ordered by relevance.
“…Pushing forward a probability distribution by the proximal operator corresponds to one step of the JKO scheme for Wasserstein gradient flow of a linear functional in the space of distributions [Jordan et al., 1998, Benamou et al., 2016]. Compared to recent works on neural Wasserstein gradient flow [Mokrov et al., 2021, Hwang et al., 2021, Bunne et al., 2021], where a separate network is needed to parameterize the pushforward map for every JKO step, our linear functional yields a pushforward map that is identical for each step; this property allows us to use a single neural network as a parameterization.…”
Section: Related Work (mentioning, confidence: 99%)
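For readers unfamiliar with the JKO scheme invoked in this statement, here is a minimal restatement (following Jordan et al., 1998; the step size τ is generic notation, not taken from the cited papers): one step advances ρ_k by solving a Wasserstein proximal problem, and for a linear energy the minimizer is an explicit pushforward, which is the property the statement relies on.

```latex
% One JKO step with step size \tau for an energy F over probability measures:
\rho_{k+1} = \operatorname*{arg\,min}_{\rho \in \mathcal{P}_2(\mathbb{R}^d)}
  \; F(\rho) + \frac{1}{2\tau}\, W_2^2(\rho, \rho_k).
% For a linear energy F(\rho) = \int V \, d\rho with V convex, the minimizer
% is the pushforward of \rho_k by the (Euclidean) proximal operator of \tau V:
\rho_{k+1} = \bigl(\operatorname{prox}_{\tau V}\bigr)_{\#}\, \rho_k,
\qquad
\operatorname{prox}_{\tau V}(x) = \operatorname*{arg\,min}_{y}\; V(y) + \tfrac{1}{2\tau}\lVert y - x \rVert^2 .
```

Because the pushforward map prox_{τV} does not depend on the step index k, a single parameterization can be reused across all JKO steps, as the statement notes.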
“…Their work does not, however, consider the invertible mapping as a multivariate quantile function and, like other work on normalizing flows trained using maximum likelihood, uses a parametrization in the "reverse" direction (from the target to the Gaussian reference distribution). The same idea of using the gradient of an input-convex neural network model to define functions that are solutions to optimal transport problems under quadratic cost has also been proposed in Bunne et al. (2021), albeit embedded in a larger architecture for modeling population dynamics.…”
Section: Related Work (mentioning, confidence: 99%)
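The quadratic-cost fact this statement leans on is Brenier's theorem, which is why the gradient of a convex potential (e.g., an input-convex network) is a valid parameterization of the transport map. Stated briefly, in generic notation not drawn from the cited papers:

```latex
% Brenier's theorem: for quadratic cost and \mu absolutely continuous,
% the optimal transport map from \mu to \nu is the gradient of a convex potential:
T^\star = \operatorname*{arg\,min}_{T_{\#}\mu = \nu} \int \tfrac{1}{2}\lVert x - T(x)\rVert^2 \, d\mu(x)
\quad\Longrightarrow\quad
T^\star = \nabla \varphi \ \text{ for some convex } \varphi .
```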
“…An input convex neural network (ICNN) (Amos et al., 2017) is a neural network whose architecture is constrained so that its output is convex with respect to (a part of) its input. ICNNs have been applied successfully in various optimal transport and optimal control problems (Bunne et al., 2021; Chen et al., 2019; Huang et al., 2021; Makkuva et al., 2020). Moreover, it has been proven that, under mild assumptions, an ICNN and its gradient can universally approximate convex functions (Chen et al., 2019) and their gradients (Huang et al., 2021), respectively.…”
Section: Partially Input Convex Neural Network (mentioning, confidence: 99%)
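To make the architectural constraints concrete, here is a minimal PyTorch sketch of an ICNN, a simplified reading of Amos et al. (2017) rather than the exact architecture of any cited paper; the layer sizes and the clamp-based projection are illustrative choices. Hidden-to-hidden weights are kept non-negative and the activation is convex and non-decreasing, which together make the output convex in the input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Minimal input convex neural network (after Amos et al., 2017).

    f(x) is convex in x: each hidden-to-hidden weight matrix is constrained
    to be non-negative, and softplus is convex and non-decreasing, so every
    layer is a convexity-preserving composition.
    """

    def __init__(self, dim, hidden=64, depth=3):
        super().__init__()
        # "Passthrough" layers from the input x are unconstrained.
        self.W_x = nn.ModuleList(
            [nn.Linear(dim, hidden)]
            + [nn.Linear(dim, hidden, bias=False) for _ in range(depth - 1)]
        )
        # Hidden-to-hidden layers whose weights must stay non-negative.
        self.W_z = nn.ModuleList(
            [nn.Linear(hidden, hidden, bias=False) for _ in range(depth - 1)]
        )
        self.out = nn.Linear(hidden, 1, bias=False)  # also kept non-negative

    def clamp_weights(self):
        """Project constrained weights onto the non-negative orthant
        (call after each optimizer step)."""
        for layer in list(self.W_z) + [self.out]:
            layer.weight.data.clamp_(min=0.0)

    def forward(self, x):
        z = F.softplus(self.W_x[0](x))
        for W_x, W_z in zip(self.W_x[1:], self.W_z):
            z = F.softplus(W_x(x) + W_z(z))
        return self.out(z)  # shape (batch, 1), convex in x

# Usage: the gradient of a (trained) ICNN is a candidate quadratic-cost
# transport map, per Brenier's theorem.
f = ICNN(dim=2)
x = torch.randn(8, 2, requires_grad=True)
T_x = torch.autograd.grad(f(x).sum(), x)[0]  # T(x) = grad f(x), shape (8, 2)
```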
“…The time and spatial discretizations of Wasserstein gradient flows are extensively studied in the literature (Jordan et al., 1998; Junge et al., 2017; Carrillo et al., 2021a,b; Bonet et al., 2021; Liutkus et al., 2019; Frogner & Poggio, 2020). Recently, neural networks have been applied to solve or approximate Wasserstein gradient flows (Mokrov et al., 2021; Lin et al., 2021b,a; Alvarez-Melis et al., 2021; Bunne et al., 2021; Hwang et al., 2021; Fan et al., 2021). For sampling algorithms, di Langosco et al. (2021) learn the transportation function by solving an unregularized variational problem over a family of vector-output deep neural networks.…”
Section: Introduction (mentioning, confidence: 99%)
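As a concrete instance of the time and space discretizations this statement refers to, below is a minimal sketch under stated assumptions: a linear potential energy F(ρ) = ∫V dρ with a hypothetical quadratic V, explicit-Euler time stepping, and a particle spatial discretization; none of these choices are taken from the cited papers. The gradient flow of such an energy moves each particle along −∇V.

```python
import numpy as np

def grad_V(x):
    """Gradient of the hypothetical potential V(x) = ||x||^2 / 2."""
    return x

def particle_gradient_flow(x0, tau=0.1, steps=100):
    """Explicit-Euler particle discretization of the Wasserstein gradient
    flow of the linear energy F(rho) = integral of V d(rho): each particle
    follows the characteristic ODE dX/dt = -grad V(X)."""
    x = x0.copy()
    for _ in range(steps):
        x = x - tau * grad_V(x)
    return x

# 500 particles sampled from N(0, I) in 2-D contract toward the minimizer of V.
particles = particle_gradient_flow(np.random.randn(500, 2))
```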