2023
DOI: 10.48550/arxiv.2302.09347
Preprint

Closed-Loop Transcription via Convolutional Sparse Coding

Abstract: Autoencoding has achieved great empirical success as a framework for learning generative models for natural images. Autoencoders often use generic deep networks as the encoder or decoder, which are difficult to interpret, and the learned representations lack clear structure. In this work, we make the explicit assumption that the image distribution is generated from a multi-stage sparse deconvolution. The corresponding inverse map, which we use as an encoder, is a multi-stage convolutional sparse coding (CSC), wi…
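
To make the abstract's architecture concrete, below is a minimal sketch of one CSC encoding stage, written as unrolled ISTA iterations against a fixed convolutional dictionary. The function names (`csc_encode`, `soft_threshold`), the step size `alpha`, the sparsity weight `lam`, and the iteration count are illustrative assumptions, not the paper's exact unrolling; the paper's multi-stage construction would stack several such stages.

```python
# Sketch of one convolutional sparse coding (CSC) stage via unrolled ISTA.
# Assumptions (not from the paper): single stage, fixed dictionary D,
# hand-picked step size and sparsity weight.
import torch
import torch.nn.functional as F

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1 (elementwise shrinkage).
    return torch.sign(z) * torch.clamp(z.abs() - tau, min=0.0)

def csc_encode(x, D, n_iters=10, alpha=0.1, lam=0.05):
    # Approximate argmin_z 0.5*||x - D(z)||^2 + lam*||z||_1 by unrolled
    # ISTA, where D(z) is a transposed convolution (sparse deconvolution).
    pad = D.shape[-1] // 2
    z = torch.zeros_like(F.conv2d(x, D, padding=pad))      # start from z = 0
    for _ in range(n_iters):
        x_hat = F.conv_transpose2d(z, D, padding=pad)      # decode: D(z)
        grad = F.conv2d(x_hat - x, D, padding=pad)         # D^T (D(z) - x)
        z = soft_threshold(z - alpha * grad, alpha * lam)  # gradient + prox step
    return z

# Usage: 8 filters of size 3x3 on a batch of grayscale images.
D = 0.1 * torch.randn(8, 1, 3, 3)            # (num_filters, channels, k, k)
x = torch.randn(4, 1, 32, 32)
z = csc_encode(x, D)                         # sparse codes, shape (4, 8, 32, 32)
x_hat = F.conv_transpose2d(z, D, padding=1)  # reconstruction, (4, 1, 32, 32)
```

Here the decoder is the transposed convolution `D(z)`, matching the abstract's generative assumption that images arise from sparse deconvolution, and the encoder inverts it by iterative sparse coding.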

Cited by 0 publications
References 40 publications