2018
DOI: 10.1007/978-3-030-01267-0_50
Deep Component Analysis via Alternating Direction Neural Networks

Abstract: Despite a lack of theoretical understanding, deep neural networks have achieved unparalleled performance in a wide range of applications. On the other hand, shallow representation learning with component analysis is associated with rich intuition and theory, but smaller capacity often limits its usefulness. To bridge this gap, we introduce Deep Component Analysis (DeepCA), an expressive multilayer model formulation that enforces hierarchical structure through constraints on latent variables in each layer. For …
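To make the flavor of constrained latent-variable inference concrete, below is a minimal NumPy sketch of ADMM applied to a single nonnegative sparse-coding layer. This is an illustration under my own assumptions, not the paper's API: the function name deepca_admm_sketch and all hyperparameters are hypothetical, and the paper's full model stacks several such layers with learned dictionaries. The point of interest is the w-update, which applies the same shifted-ReLU nonlinearity used by feed-forward networks.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def deepca_admm_sketch(x, D, lam=0.1, rho=1.0, iters=10):
    """Unrolled ADMM for a single nonnegative sparse-coding layer:
        min_w 0.5*||x - D w||^2 + lam*||w||_1  subject to  w >= 0.
    Splitting w = z gives three updates per iteration; the w-update is
    a ReLU with a bias, which is the bridge between alternating-direction
    inference and ordinary feed-forward activations."""
    k = D.shape[1]
    z, w, u = np.zeros(k), np.zeros(k), np.zeros(k)
    A = D.T @ D + rho * np.eye(k)      # reused by every z-update
    Dtx = D.T @ x
    for _ in range(iters):
        z = np.linalg.solve(A, Dtx + rho * (w - u))  # data-fidelity step
        w = relu(z + u - lam / rho)                  # prox of l1 + nonnegativity
        u = u + z - w                                # dual update for z = w
    return w
```

Running more iterations refines the estimate, while a single iteration behaves like one biased-ReLU layer; this is the intuition behind reading feed-forward passes as truncated inference.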

Cited by 15 publications (25 citation statements)
References 38 publications

Citation statements:
“…], showing that cascading basis pursuit problems can lead to competitive deep learning constructions. However, the Layered Basis Pursuit formulation, or other similar variations that attempt to unfold neural network architectures [33], [44], do not minimize (P) and thus their solutions only represent sub-optimal and heuristic approximations to the minimizer of the multi-layer BP. More clearly, such a series of steps never provides estimates γ̂_i that can generate a signal according to the multi-layer sparse model.…”
Section: Algorithms
confidence: 99%
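To make the criticism concrete, here is a minimal NumPy sketch of the layered scheme (the names ista and layered_bp are mine, not from either paper): each layer runs its own pursuit against the previous layer's estimate, so no step ever optimizes the joint multi-layer objective referred to as (P) above.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(y, D, lam, iters=200):
    """Basis pursuit denoising via ISTA: min_g 0.5*||y - D g||^2 + lam*||g||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    g = np.zeros(D.shape[1])
    for _ in range(iters):
        g = soft(g + step * (D.T @ (y - D @ g)), lam * step)
    return g

def layered_bp(x, dicts, lams):
    """Layered Basis Pursuit: one pursuit per layer, each fed the previous
    layer's estimate. Every g_i explains only g_{i-1}, so the cascade never
    minimizes the joint multi-layer objective."""
    g = x
    for D, lam in zip(dicts, lams):
        g = ista(g, D, lam)
    return g
```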
“…While other works have indeed explored the unrolling of iterative algorithms in terms of CNNs (e.g. [33], [44]), we are not aware of any work that has attempted or studied the unrolling of a global pursuit with convergence guarantees. Lastly, we demonstrate the performance of these networks in practice by training our models for image classification, consistently improving on the classical feed-forward architectures without introducing filters or any other extra parameters in the model.…”
Section: Introduction
confidence: 99%
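As background for the unrolling referenced here, a LISTA-style construction (Gregor and LeCun, 2010) fixes the number of ISTA iterations and treats the resulting matrices as learnable parameters. A minimal sketch, assuming initialization from a known dictionary D; the class and attribute names are illustrative:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

class ListaNet:
    """LISTA-style unrolling: T ISTA steps with the matrices treated as
    learnable parameters. The initialization below reproduces plain ISTA;
    training would adapt We, Ws, and theta."""
    def __init__(self, D, lam, T=5):
        L = np.linalg.norm(D, 2) ** 2
        self.We = D.T / L                              # "encoder" weights
        self.Ws = np.eye(D.shape[1]) - (D.T @ D) / L   # "lateral" weights
        self.theta = lam / L                           # per-step threshold
        self.T = T

    def forward(self, y):
        g = soft(self.We @ y, self.theta)
        for _ in range(self.T - 1):
            g = soft(self.We @ y + self.Ws @ g, self.theta)
        return g
```

Initialized this way, the forward pass reproduces T steps of plain ISTA; training the parameters (e.g. by backpropagation through the loop) is what turns the solver into a network.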
“…[29] and [45] attempt to unfold neural networks with iterative thresholding, minimizing the ML-BP. As a result, each representation estimate is required to explain the immediately preceding layer only, and generating a signal according to the global multi-layer sparse model is not possible.…”
Section: E. Layered Basis Pursuit
confidence: 99%
“…Building upon recent connections between deep learning and sparse approximation [7], [8], [9], we introduce…”
Section: Introduction
confidence: 99%