2022
DOI: 10.1109/tit.2022.3142846
Sparse Nonnegative Tensor Factorization and Completion With Noisy Observations

Abstract: Tensor decomposition is a powerful tool for extracting physically meaningful latent factors from multi-dimensional nonnegative data, and has attracted increasing interest in a variety of fields such as image processing, machine learning, and computer vision. In this paper, we propose a sparse nonnegative Tucker decomposition and completion method for the recovery of underlying nonnegative data under noisy observations. Here the underlying nonnegative data tensor is decomposed into a core tensor and several factor matrices…
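The abstract describes decomposing a nonnegative data tensor into a nonnegative core tensor and factor matrices (Tucker form). As a rough illustration of that decomposition structure only, the sketch below uses TensorLy's generic non_negative_tucker routine on synthetic data; it does not implement the paper's sparsity penalty, completion from partial observations, or noise model, and the shapes and ranks are made up.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

tl.set_backend("numpy")

# Synthetic nonnegative data tensor (dimensions chosen only for illustration).
rng = np.random.default_rng(0)
X = tl.tensor(rng.random((20, 30, 40)))

# Decompose X into a nonnegative core tensor and nonnegative factor matrices.
core, factors = non_negative_tucker(X, rank=[5, 6, 7], n_iter_max=200)

# Reassemble the Tucker factors and check the relative reconstruction error.
X_hat = tl.tucker_to_tensor((core, factors))
print("relative error:", float(tl.norm(X - X_hat) / tl.norm(X)))
```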

Cited by 14 publications (3 citation statements). References 79 publications (135 reference statements).
“…Convolution implementations (Bagherinezhad, Rastegari & Farhadi, 2017; Kim, Bae & Sunwoo, 2019) and quantization (Gong et al., 2014) can also accelerate deep neural networks. Tensor factorization has also been utilized to decompose weights into lightweight pieces (Masana et al., 2017; Yuan & Dong, 2021; Zhang & Ng, 2022). As the field of artificial intelligence continues to grow and develop, the neural network models used for various tasks have become increasingly larger and more complex.…”
Section: Related Work (mentioning)
Confidence: 99%
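The "lightweight pieces" idea in the quoted passage is that a large weight array is replaced by a product of much smaller factors. The sketch below is a hypothetical, minimal illustration using a plain truncated SVD on a dense layer's weight matrix; it is not the scheme of any of the cited works (which use tensor factorizations), and the dimensions and rank are invented.

```python
import numpy as np

def factorize_dense_weight(W: np.ndarray, rank: int):
    """Split W (out_dim x in_dim) into thin factors A (out_dim x rank) and
    B (rank x in_dim) via truncated SVD, so W @ x is approximated by A @ (B @ x)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb the singular values into A
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 1024))      # made-up layer size
A, B = factorize_dense_weight(W, rank=64)

print("parameters:", W.size, "->", A.size + B.size)
print("relative error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```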
“…The objective functional is $f(x_i) = \frac{1}{2}u_i^2$. Then we can easily derive the solution of problem (7), denoted as the hard-thresholding operator [29], [39], [40]:…”
Section: Optimization Procedures (mentioning)
Confidence: 99%
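Problem (7) from the citing paper is not reproduced here, so the snippet below only shows the standard entrywise hard-thresholding operator that arises as the closed-form solution of an $\ell_0$-penalized proximal step; the penalty weight `lam` is an assumed, illustrative parameter.

```python
import numpy as np

def hard_threshold(u: np.ndarray, lam: float) -> np.ndarray:
    """Entrywise solution of min_x 0.5*(x - u_i)**2 + lam * [x != 0]:
    keep u_i when 0.5*u_i**2 > lam, otherwise set it to zero."""
    x = u.copy()
    x[0.5 * u**2 <= lam] = 0.0
    return x

u = np.array([-2.0, -0.3, 0.0, 0.5, 1.5])
print(hard_threshold(u, lam=0.5))  # small entries are zeroed: [-2. 0. 0. 0. 1.5]
```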
“…Recent efforts have extended NTD to boost computational efficiency and meet different demands in practical applications by incorporating suitable constraint conditions into NTD, including smoothness [16,17], graph Laplacian [18][19][20][21][22][23][24], sparsity [25], and supervision [26][27][28], just to name a few. For example, Liu et al. proposed a graph-regularized $L_p$-smooth NTD method by adding graph regularization and an $L_p$-smooth constraint to NTD to obtain a smoother and more accurate solution of the objective function [17].…”
Section: Introduction (mentioning)
Confidence: 99%
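The constrained NTD variants listed in this quote generally share one template: a nonnegative Tucker fit term plus a weighted regularizer. The formulation below is a generic illustration of that template, not the exact objective of any cited method; $R$ stands for whichever regularizer is used (graph Laplacian, sparsity, smoothness) and $\lambda \ge 0$ is its weight.

```latex
\min_{\mathcal{G}\ge 0,\ U^{(1)},\dots,U^{(N)}\ge 0}
\ \frac{1}{2}\left\| \mathcal{X} - \mathcal{G}\times_1 U^{(1)}\times_2 U^{(2)}\cdots\times_N U^{(N)} \right\|_F^2
+ \lambda\, R\!\left(\mathcal{G}, U^{(1)},\dots,U^{(N)}\right)
```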