2020
DOI: 10.48550/arxiv.2006.06280
Preprint

NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity

Sang-gil Lee,
Sungwon Kim,
Sungroh Yoon

Abstract: Normalizing flows (NFs) have become a prominent method for deep generative models that allow for analytic probability density estimation and efficient synthesis. However, a flow-based network is considered inefficient in parameter complexity because of the reduced expressiveness of bijective mapping, which renders the models prohibitively expensive in terms of parameters. We present an alternative parameterization scheme, called NanoFlow, which uses a single neural density estimator to model multiple t…
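The truncated abstract describes the core idea: a single neural density estimator is reused across multiple flow transformations, with a flow indication embedding telling the shared network which flow step it is serving. Below is a minimal illustrative sketch of that parameter-sharing scheme in PyTorch; it is not the authors' implementation, and names such as SharedEstimator, flow_embed, and the affine-coupling layout are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SharedEstimator(nn.Module):
    """Hypothetical sketch: one estimator network shared across all flow steps."""
    def __init__(self, dim, hidden, n_steps):
        super().__init__()
        # A single set of estimator parameters reused by every flow step.
        self.net = nn.Sequential(
            nn.Linear(dim // 2 + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),  # shift and log-scale for the transformed half
        )
        # One learned vector per flow step: the "flow indication embedding".
        self.flow_embed = nn.Embedding(n_steps, hidden)

    def forward(self, x_cond, step):
        # x_cond: the half of the input that conditions the affine coupling.
        emb = self.flow_embed(torch.tensor(step)).expand(x_cond.size(0), -1)
        out = self.net(torch.cat([x_cond, emb], dim=-1))
        shift, log_scale = out.chunk(2, dim=-1)
        return shift, log_scale

# Usage: K = 4 affine coupling steps, all driven by the same shared estimator,
# so the estimator's parameter count does not grow with the number of steps.
est = SharedEstimator(dim=8, hidden=16, n_steps=4)
x = torch.randn(32, 8)
for k in range(4):
    xa, xb = x.chunk(2, dim=-1)
    shift, log_scale = est(xa, k)
    xb = xb * torch.exp(log_scale) + shift
    x = torch.cat([xb, xa], dim=-1)  # swap halves between successive steps
```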

Cited by 1 publication (1 citation statement)
References 11 publications
“…As shown in Figure 2, we divide all flow steps (f_1, f_2, ..., f_K) into several groups and share the model parameters of NN (a WaveNet-like network, see Appendix A.1) in the coupling layers among flow steps in a group. Our grouped parameter sharing mechanism is similar to the shared neural density estimator proposed in [12], with some differences: 1) we simplify the model by removing the flow indication embedding, since the unshared conditional projection layer in different flow steps can help the model indicate the position of the step; 2) instead of sharing the parameters among all flow steps, we generalize the sharing mechanism by sharing the parameters among flow steps in a group, making it easier to adjust the number of trainable model parameters without changing the model architecture.…”
mentioning
confidence: 99%
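The quoted passage describes grouped parameter sharing: the K flow steps are split into groups, each group shares one coupling network, and an unshared per-step conditional projection layer replaces NanoFlow's flow indication embedding. A hedged sketch of that mechanism is given below; GroupedCoupling, cond_proj, group_size, and the affine-coupling details are hypothetical names for illustration, not taken from either paper.

```python
import torch
import torch.nn as nn

class GroupedCoupling(nn.Module):
    """Hypothetical sketch: coupling networks shared within groups of flow steps."""
    def __init__(self, dim, cond_dim, hidden, n_steps, group_size):
        super().__init__()
        self.group_size = group_size
        n_groups = n_steps // group_size
        # One coupling network shared by all steps inside a group (not one per step).
        self.shared_nets = nn.ModuleList([
            nn.Sequential(
                nn.Linear(dim // 2 + hidden, hidden),
                nn.ReLU(),
                nn.Linear(hidden, dim),
            )
            for _ in range(n_groups)
        ])
        # Unshared per-step projection of the conditioning input; because it is
        # not shared, it also lets the network tell the steps apart, which is
        # why the quoted model can drop the flow indication embedding.
        self.cond_proj = nn.ModuleList([
            nn.Linear(cond_dim, hidden) for _ in range(n_steps)
        ])

    def forward(self, x_cond, cond, step):
        net = self.shared_nets[step // self.group_size]
        h = torch.cat([x_cond, self.cond_proj[step](cond)], dim=-1)
        shift, log_scale = net(h).chunk(2, dim=-1)
        return shift, log_scale

# Usage: 8 flow steps in groups of 4 -> only 2 shared coupling networks,
# while each of the 8 steps keeps its own small conditional projection.
layer = GroupedCoupling(dim=8, cond_dim=10, hidden=16, n_steps=8, group_size=4)
xa = torch.randn(32, 4)        # conditioning half of the input
cond = torch.randn(32, 10)     # external conditioning features
shift, log_scale = layer(xa, cond, step=5)
```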