2020
DOI: 10.1109/tpami.2019.2900306
Structured Low-Rank Matrix Factorization: Global Optimality, Algorithms, and Applications

Abstract: Recently, convex formulations of low-rank matrix factorization problems have received considerable attention in machine learning. However, such formulations often require solving for a matrix of the size of the data matrix, making it challenging to apply them to large-scale datasets. Moreover, in many applications the data can display structures beyond simply being low-rank; e.g., images and videos present complex spatio-temporal structures that are largely ignored by standard low-rank methods. In this paper w…
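For orientation, the factorized objective studied in this line of work takes roughly the following form (a sketch of the general Haeffele–Vidal formulation; the paper's precise loss, regularizer, and conditions may differ):

```latex
\min_{r,\; U \in \mathbb{R}^{m \times r},\; V \in \mathbb{R}^{n \times r}}
  \ \ell\!\left(Y, U V^{\top}\right)
  \;+\; \lambda \sum_{i=1}^{r} \theta\!\left(U_i, V_i\right)
```

Here \ell measures fit to the data matrix Y, the columns U_i and V_i are individual factors, the rank r is not fixed in advance, and \theta is a positively homogeneous regularizer on factor pairs: choosing \theta(u, v) = \lVert u \rVert_2 \lVert v \rVert_2 recovers the nuclear norm of UV^\top, while structured choices of \theta encode priors such as sparsity or spatial smoothness.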


Cited by 87 publications (112 citation statements) · References 29 publications
“…Assisted by a structured sparse coding algorithm [63], we then performed a semiautomatic segmentation of our movies acquired from a two-photon imaging session to acquire regions-of-interest (ROIs) corresponding to individual neurons. From each ROI, we generated brightness-over-times (BOTs).…”
Section: Two-photon Imaging Analysis
confidence: 99%
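The structured sparse coding algorithm cited above is not reproduced in the snippet, but the general recipe of factorizing a movie into sparse spatial maps plus temporal traces can be sketched with a generic sparse NMF. This is a stand-in, not the authors' method; `extract_rois`, the quantile threshold, and all parameter values are hypothetical:

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_rois(movie, n_components=20, sparsity=0.1, mask_quantile=0.98):
    """Factor a non-negative (T, H, W) fluorescence stack into ROIs and BOTs."""
    T, H, W = movie.shape
    X = movie.reshape(T, H * W)
    # Sparse NMF: X ~= A @ S, with temporal traces A (T, k) and
    # L1-penalized spatial maps S (k, H*W).
    model = NMF(n_components=n_components, init="nndsvd",
                alpha_H=sparsity, alpha_W=0.0, l1_ratio=1.0, max_iter=500)
    A = model.fit_transform(X)      # temporal trace per component
    S = model.components_           # sparse spatial map per component
    rois, bots = [], []
    for k in range(n_components):
        # threshold each spatial map into a binary ROI mask
        mask = S[k] > np.quantile(S[k], mask_quantile)
        if mask.any():
            rois.append(mask.reshape(H, W))
            bots.append(X[:, mask].mean(axis=1))  # brightness-over-time
    return rois, np.array(bots)
```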
“…Figure 1 illustrates the effect of trend filtering on a couple of v components. One important difference compared to previous denoising approaches (Haeffele et al, 2014; Pnevmatikakis et al, 2016) is that the TF model is more flexible than the sparse autoregressive model that is typically used to denoise calcium imaging data: the TF model does not require the estimation of any sparsity penalties or autoregressive coefficients, and can handle a mixture of positive and negative fluctuations, while the sparse nonnegative autoregressive model cannot (by construction). This is important in this context since each component in V can include multiple cellular components (potentially with different timescales), mixed with both negative and positive weights.…”
Section: Denoising and Compression
confidence: 99%
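L1 trend filtering itself is easy to state: a quadratic fit term plus an L1 penalty on discrete differences, which yields piecewise-polynomial estimates and, as the passage notes, handles both positive and negative fluctuations without fitting autoregressive coefficients. A minimal sketch with cvxpy (not the cited implementation; `trend_filter` and the penalty weight are illustrative):

```python
import numpy as np
import cvxpy as cp

def trend_filter(y, lam=10.0, order=2):
    # Minimize 0.5 * ||y - x||^2 + lam * ||D x||_1, where D takes
    # order-th discrete differences of x (order=2 gives piecewise-linear fits).
    x = cp.Variable(len(y))
    objective = cp.Minimize(0.5 * cp.sum_squares(y - x)
                            + lam * cp.norm1(cp.diff(x, order)))
    cp.Problem(objective).solve()
    return x.value

# Hypothetical usage on a noisy temporal component v:
rng = np.random.default_rng(0)
v = np.cumsum(rng.standard_normal(200)) + 0.5 * rng.standard_normal(200)
v_hat = trend_filter(v)
```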
“…Other work (Haeffele et al, 2014; Pnevmatikakis et al, 2016; de Pierrefeu et al, 2018) has explored penalized matrix decomposition incorporating sparsity or total variation penalties in related contexts. An important strength of our proposed approach is the focus on highly scalable patch-wise computations (similar to CaImAn); this leads to fast computations and avoids overfitting by (implicitly) imposing strong sparsity constraints on the spatial matrix U.…”
Section: Related Work
confidence: 99%
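Penalized matrix decomposition in this sense alternates least-squares factor updates with a shrinkage step that enforces the penalty. A minimal rank-1 sketch with an L1 (sparsity) penalty on the spatial factor, in the spirit of the works cited above (`rank1_pmd` and `lam` are illustrative, not the cited patch-wise implementation):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rank1_pmd(Y, lam=0.1, n_iter=100, seed=0):
    # Alternate: v <- normalized Y^T u, then u <- soft-thresholded,
    # normalized Y v. Soft-thresholding u promotes a sparse spatial factor.
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(Y.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        v = Y.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = soft_threshold(Y @ v, lam)
        nu = np.linalg.norm(u)
        if nu == 0:
            break
        u /= nu
    return u, v

# Further components can be extracted by deflation:
# Y <- Y - (u @ Y @ v) * np.outer(u, v).
```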
“…In particular, it learns a similarity graph from the data itself without considering other prior information. Consequently, some similarity information that would be helpful for our graph learning might get lost [29,30]. On the other hand, preserving similarity information has been shown to be important for feature selection [31].…”
Section: Introduction
confidence: 99%
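The similarity graph the passage refers to is typically a k-nearest-neighbour affinity matrix built with a Gaussian kernel. A minimal construction (the function name and the `k`, `sigma` defaults are illustrative, not taken from the cited works):

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_similarity_graph(X, k=10, sigma=1.0):
    # Gaussian-kernel affinities, kept only for each sample's k nearest
    # neighbours and symmetrized so the resulting graph is undirected.
    D = cdist(X, X)                      # pairwise Euclidean distances
    S = np.exp(-D**2 / (2.0 * sigma**2))
    np.fill_diagonal(S, 0.0)
    idx = np.argsort(S, axis=1)[:, :-k]  # indices of all but the k largest
    S[np.arange(len(X))[:, None], idx] = 0.0
    return np.maximum(S, S.T)
```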