2023
DOI: 10.3390/math11122674

Matrix Factorization Techniques in Machine Learning, Signal Processing, and Statistics

Abstract: Compressed sensing is an alternative to Shannon/Nyquist sampling for acquiring sparse or compressible signals. Sparse coding represents a signal as a sparse linear combination of atoms, which are elementary signals derived from a predefined dictionary. Compressed sensing, sparse approximation, and dictionary learning are topics similar to sparse coding. Matrix completion is the process of recovering a data matrix from a subset of its entries, and it extends the principles of compressed sensing and sparse appro…
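As a concrete illustration of the sparse-coding problem described in the abstract, the following is a minimal sketch (not taken from the paper) of orthogonal matching pursuit, one standard greedy routine for approximating a signal y as a combination of at most k atoms of a dictionary D; the function name and the unit-norm-columns assumption are ours.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: express y as a sparse linear
    combination of at most k columns (atoms) of the dictionary D.
    Assumes D has (approximately) unit-norm columns."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # refit the selected atoms to y by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x  # sparse coefficient vector: y ≈ D @ x
```

On synthetic data (a random Gaussian dictionary with normalized columns and a k-sparse ground truth), this kind of greedy pursuit typically recovers the correct support when the dictionary is sufficiently incoherent, which is the regime characterized by compressed sensing theory.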

Cited by 12 publications (4 citation statements) | References 372 publications
“…In this regard, the factor structure serves as a filter to gain regularization in estimation. Using factor models this way is common in many signal-processing, genetics, and machine-learning domains (Bhattacharya & Dunson, 2011; Du et al., 2023; Sanyal & Ferreira, 2012). The limitation, however, is that the current development is inappropriate for inference on the latent factor structure.…”
Section: Discussion (mentioning)
confidence: 99%
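A minimal sketch of the "factor structure as a regularizing filter" idea, using a simple principal-components factor estimate of a covariance matrix rather than the Bayesian constructions cited above; the function name and the low-rank-plus-diagonal form are illustrative assumptions.

```python
import numpy as np

def factor_regularized_cov(X, k):
    """Regularize the sample covariance of X (n samples x p variables)
    by imposing a k-factor structure: the top-k eigen-directions act as
    common factors, and the remainder is absorbed into a diagonal term."""
    S = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(S)                              # ascending eigenvalues
    top = np.argsort(w)[::-1][:k]
    L = V[:, top] * np.sqrt(np.maximum(w[top], 0.0))      # p x k loading matrix
    psi = np.clip(np.diag(S - L @ L.T), 1e-8, None)       # idiosyncratic variances
    return L @ L.T + np.diag(psi)                         # low-rank + diagonal estimate
```

Estimating only k·p loadings plus p variances, instead of all p(p+1)/2 covariance entries, is what produces the filtering and regularization effect the quote refers to.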
“…The NABBH algorithm improves convergence speed by simplifying the costly inversion of the Hessian matrix [43]; that is, only the inverse of the function's first-derivative matrix is calculated, and the second-derivative matrix is omitted. A step-size selection strategy is designed to speed up the convergence of the algorithm.…”
Section: Performance Analysis of Adaptive Parameter Selection Methods (mentioning)
confidence: 99%
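The statement above concerns avoiding explicit Hessian inversion in favor of a cheap step-size rule. Below is a generic sketch of that idea using a Barzilai-Borwein step size; it is not the NABBH algorithm of [43], and the function name, iteration count, and tolerance are our own assumptions.

```python
import numpy as np

def bb_gradient_descent(grad, x0, iters=200, alpha0=1e-3):
    """Gradient descent with a Barzilai-Borwein step size: the scalar
    alpha_k = (s^T s) / (s^T y), with s = x_k - x_{k-1} and y = g_k - g_{k-1},
    mimics curvature information without ever forming or inverting a
    second-derivative (Hessian) matrix."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = x - alpha * g            # plain gradient step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if abs(sy) > 1e-12 else alpha0
        x, g = x_new, g_new
    return x
```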
“…By introducing a non-monotone Wolfe-type strategy into the memory gradient method, the global optimal solution is obtained, and convergence speed is improved by adding step-size constraints [43]. In theory, the proposed adaptive parameter selection method has better global convergence and a faster convergence speed.…”
Section: Performance Analysis of Adaptive Parameter Selection Methods (mentioning)
confidence: 99%
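For the non-monotone acceptance idea, here is a minimal sketch of a backtracking search with a Grippo-style non-monotone sufficient-decrease test (the Armijo half of a Wolfe-type rule); the helper name, constants, and history length are illustrative assumptions, not the method of [43].

```python
import numpy as np

def nonmonotone_step(f, g_x, x, d, f_hist, c1=1e-4, shrink=0.5, alpha=1.0):
    """Backtracking line search with a non-monotone acceptance rule:
    the trial point is compared with the maximum of the last few
    objective values (f_hist), not with f(x) alone, so occasional
    increases are allowed, which often speeds up convergence.
    Assumes d is a descent direction (g_x @ d < 0)."""
    f_ref = max(f_hist)          # non-monotone reference value
    slope = g_x @ d              # directional derivative at x along d
    while alpha > 1e-12 and f(x + alpha * d) > f_ref + c1 * alpha * slope:
        alpha *= shrink          # shrink the step until the test passes
    return alpha
```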
“…This decomposition exposes underlying structures and patterns within the data. The primary aim is to approximate the original matrix using these simplified components, which enhances the manageability of data analysis [1].…”
Section: Emotion Detection (mentioning)
confidence: 99%
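As a minimal sketch of the low-rank idea described in that excerpt (approximating a data matrix with simpler components), a rank-r truncated SVD in NumPy; the function name and the choice of SVD, rather than whatever factorization is used in [1], are our assumptions.

```python
import numpy as np

def low_rank_approx(X, r):
    """Best rank-r approximation of X in the Frobenius norm (Eckart-Young),
    obtained by truncating the SVD: the factors U_r, s_r, Vt_r are the
    'simplified components' that expose the dominant structure in the data."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
```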