Preprint, 2022
DOI: 10.48550/arxiv.2204.03145
DeepTensor: Low-Rank Tensor Decomposition with Deep Network Priors

Abstract: DeepTensor is a computationally efficient framework for low-rank decomposition of matrices and tensors using deep generative networks. We decompose a tensor as the product of low-rank tensor factors (e.g., a matrix as the outer product of two vectors), where each low-rank tensor is generated by a deep network (DN) that is trained in a self-supervised manner to minimize the mean-square approximation error. Our key observation is that the implicit regularization inherent in DNs enables them to capture nonlinear …
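The core idea in the abstract (each low-rank factor generated by a small network trained self-supervised on the mean-square approximation error) can be sketched in a toy form. This is a minimal illustration, not the authors' implementation: the network sizes, initialization scales, learning rate, and the rank-1 setting are all arbitrary assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: fit a noisy rank-1 matrix X ≈ u v^T, where each factor is
# *generated* by a small one-hidden-layer network with a fixed random
# input, trained by gradient descent on the mean-square error.
m, n, hid, d = 20, 15, 32, 8
X = rng.normal(size=(m, 1)) @ rng.normal(size=(1, n))
X += 0.05 * rng.normal(size=(m, n))          # noisy rank-1 target

def make_generator(out_dim):
    """Fixed input z and weights of a generator out = W2 @ relu(W1 @ z)."""
    z = rng.normal(size=(d, 1))
    W1 = 0.1 * rng.normal(size=(hid, d))
    W2 = 0.1 * rng.normal(size=(out_dim, hid))
    return z, W1, W2

zu, W1u, W2u = make_generator(m)             # generator for factor u
zv, W1v, W2v = make_generator(n)             # generator for factor v
lr = 5e-4

for _ in range(10000):
    hu = np.maximum(W1u @ zu, 0.0)           # hidden activations
    hv = np.maximum(W1v @ zv, 0.0)
    u, v = W2u @ hu, W2v @ hv                # generated factors
    R = X - u @ v.T                          # residual
    gu, gv = -2 * R @ v, -2 * R.T @ u        # dL/du, dL/dv
    # Manual backprop through each generator, then a gradient step.
    g2u, g1u = gu @ hu.T, ((W2u.T @ gu) * (hu > 0)) @ zu.T
    g2v, g1v = gv @ hv.T, ((W2v.T @ gv) * (hv > 0)) @ zv.T
    W2u -= lr * g2u; W1u -= lr * g1u
    W2v -= lr * g2v; W1v -= lr * g1v

rel_err = np.linalg.norm(X - u @ v.T) / np.linalg.norm(X)
print(f"relative error: {rel_err:.3f}")
```

The point of the sketch is that the factors are never optimized directly: only the generator weights are, so whatever implicit regularization the network architecture imposes shapes the recovered factors.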

Cited by 2 publications (3 citation statements). References 55 publications (88 reference statements).
“…Besides that, the most similar works to TDNet are [47], [48], [49], but these works are mainly designed for hyperspectral image recovery tasks. In [47], the authors used deep generative networks for low-rank decomposition, and the method is robust to a wide range of distributions. Luo et al. [48] utilized a multilayer neural network to learn the nonlinear nature of multidimensional images, which is more effective than the linear transforms.…”
Section: Autoencoder-Based Methods
Mentioning confidence: 99%
“…However, DIPs exhibit good performance only when over-parameterized and are tied to a grid-like discretized representation of the signal, implying DIPs do not scale to high dimensional signals such as point clouds with a large number of points. The issue of computational cost has been addressed to a certain extent by the deep decoder [21] and the DeepTensor [39], but they still need the signal to be defined as a regular data grid such as a 2D matrix or 3D tensor.…”
Section: Prior Work
Mentioning confidence: 99%