2019
DOI: 10.1016/j.neucom.2019.08.053

Online multimodal dictionary learning

Abstract: We propose a new online approach for multimodal dictionary learning. The method developed in this work addresses the challenges posed by computational resource constraints in dynamic environments when dealing with large-scale tensor sequences. Given a sequence of tensors, i.e., a set composed of equal-size tensors, the approach proposed in this paper allows one to infer a basis of latent factors that generate these tensors by sequentially processing a small number of data samples instead of using the whole…
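The abstract's core idea — refreshing a dictionary from a few incoming samples at a time instead of storing the whole data set — can be sketched in the simpler matrix case. This is our own illustrative code in the spirit of classical online dictionary learning with running sufficient statistics, not the paper's tensor algorithm; all names and shapes are assumptions:

```python
import numpy as np

def online_dictionary_learning(samples, n_atoms, lam=0.1, seed=0):
    """Illustrative sketch: process samples sequentially, keep the
    running statistics A = sum a a^T and B = sum x a^T, and refresh
    the dictionary D from them (no access to past raw samples)."""
    rng = np.random.default_rng(seed)
    dim = samples.shape[1]
    D = rng.standard_normal((dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    A = lam * np.eye(n_atoms)          # regularised accumulator
    B = np.zeros((dim, n_atoms))
    for x in samples:                  # one small sample at a time
        # ridge-regularised code of x against the current dictionary
        a = np.linalg.solve(D.T @ D + lam * np.eye(n_atoms), D.T @ x)
        A += np.outer(a, a)
        B += np.outer(x, a)
        D = B @ np.linalg.inv(A)       # dictionary refresh from statistics
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # unit columns
    return D

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 8))       # 50 streamed samples of dimension 8
D = online_dictionary_learning(X, n_atoms=4)
print(D.shape)  # (8, 4)
```

The memory footprint is fixed by `A` and `B`, which is what makes the sequential setting tractable under resource constraints.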

Cited by 17 publications (18 citation statements) · References 22 publications
“…Constraints, which impose smoothness in time and nonnegativity or sparsity, may also be included depending on the application context, such as, for example, in nonnegative online CPD for time-evolving topic modeling (with possible applications in the analysis of social media-generated data on the Covid-19 pandemic) [27] or in tensor dictionary learning [41]. Possible solution approaches include alternating optimization with alternating direction method of multipliers (AO-ADMM) [42].…”
Section: Related Work
confidence: 99%
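AO-ADMM, mentioned above, alternates over the factors and solves each constrained subproblem with ADMM. A minimal sketch of one nonnegativity-constrained factor update (our own illustrative code under stated assumptions, not the implementation of [42]):

```python
import numpy as np

def aoadmm_nonneg_factor(Y, H, rho=1.0, n_iter=200):
    """Illustrative AO-ADMM subproblem:
        min_W ||Y - W H^T||_F^2  subject to  W >= 0,
    solved via the ADMM splitting W = Z, where Z carries the constraint."""
    r = H.shape[1]
    G_inv = np.linalg.inv(H.T @ H + rho * np.eye(r))  # cached Gram inverse
    YH = Y @ H
    W = np.zeros((Y.shape[0], r))
    Z = np.zeros_like(W)
    U = np.zeros_like(W)                    # scaled dual variable
    for _ in range(n_iter):
        W = (YH + rho * (Z - U)) @ G_inv    # unconstrained least-squares step
        Z = np.maximum(W + U, 0.0)          # projection onto the nonnegative orthant
        U += W - Z                          # dual ascent
    return Z

rng = np.random.default_rng(0)
W_true, H = rng.random((20, 3)), rng.random((15, 3))
Y = W_true @ H.T
W_est = aoadmm_nonneg_factor(Y, H)
print(round(np.linalg.norm(Y - W_est @ H.T) / np.linalg.norm(Y), 4))
```

Other constraints (sparsity, smoothness in time) slot in by replacing the projection in the `Z`-update with the corresponding proximal operator.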
“…Applications of OTF abound. They include unveiling the topology of evolving networks [70], spatio-temporal prediction or image in-painting [41], multiple-input multiple-output (MIMO) wireless communications [13], [71], brain imaging [72], monitoring heart-related features from wearable sensors for multi-lead electro-cardiography (ECG) [73], anomaly detection in networks and topic modeling [16], structural health monitoring (in an internet of things (IoT) context) [36], online cartography (spectrum map reconstruction in cognitive radio networks) [14], detection of anomalies in the process of 3D printing [74], data traffic monitoring in networks [10], [16], cardiac MRI [10], stream data compression (e.g., in power distribution systems [75] or in video [76]), and online completion [10], [77], [78], among others.…”
Section: Related Work
confidence: 99%
“…Grussler et al. verified that the rank-constrained Frobenius norm can outperform the nuclear norm [23]. Traoré et al. proposed to enforce a sparsity constraint on the core tensor and low-rank (Frobenius-norm) constraints on the factor matrices, and learnt reconstructed patches for image inpainting using the factor matrices and the core tensor [24]. The approach was also flexible enough to incorporate nonnegativity and orthogonality constraints [24].…”
Section: Introduction
confidence: 99%
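The constraint pattern described in [24] — a sparse core combined with factor matrices — rests on the Tucker mode product for reconstruction. A minimal, self-contained sketch (illustrative shapes and names, not the code of [24]):

```python
import numpy as np

def mode_product(T, M, mode):
    """Mode-n product T x_n M, computed via unfolding along `mode`."""
    T = np.moveaxis(T, mode, 0)
    shape = T.shape
    out = M @ T.reshape(shape[0], -1)
    return np.moveaxis(out.reshape((M.shape[0],) + shape[1:]), 0, mode)

def tucker_reconstruct(core, factors):
    """Multiply a (possibly sparse) core by one factor matrix per mode."""
    for mode, M in enumerate(factors):
        core = mode_product(core, M, mode)
    return core

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3, 3))
core = np.sign(G) * np.maximum(np.abs(G) - 0.5, 0)   # soft-threshold: sparse core
A, B, C = (rng.standard_normal((d, 3)) for d in (8, 7, 6))
recon = tucker_reconstruct(core, [A, B, C])          # reconstructed 8x7x6 patch
print(recon.shape)  # (8, 7, 6)
```

The soft-thresholding line is just one way to obtain a sparse core for the demonstration; in a learning loop it would appear as the proximal step of the sparsity penalty.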
“…Traoré et al. proposed to enforce a sparsity constraint on the core tensor and low-rank (Frobenius-norm) constraints on the factor matrices, and learnt reconstructed patches for image inpainting using the factor matrices and the core tensor [24]. The approach was also flexible enough to incorporate nonnegativity and orthogonality constraints [24]. Bahri et al. proposed the robust Kronecker component analysis (RKCA) method for image denoising.…”
Section: Introduction
confidence: 99%
“…In [15] and [16], the authors compute the CPD of successive sub-tensors in an adaptive way, but they assume that the rank is known and does not vary with time, while in [17] the sub-tensors are treated independently, using AutoTen for rank estimation. Finally, in [18] the authors introduced an online Tucker decomposition.…”
Section: Introduction
confidence: 99%
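For context, the building block behind these adaptive schemes is the CPD itself. Below is a plain ALS sketch with optional warm-starting of the factors, which loosely mimics processing successive sub-tensors under the known-fixed-rank assumption; this is our own illustration, not the method of [15]–[17]:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of A (I x R) and B (J x R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(A.shape[0] * B.shape[0], -1)

def cpd_als(T, rank, n_iter=50, factors=None, seed=0):
    """Rank-R CPD of a 3-way tensor via alternating least squares.
    Passing `factors` warm-starts the fit, e.g. with the factors
    estimated on the previous sub-tensor of a sequence."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    if factors is None:
        factors = [rng.standard_normal((d, rank)) for d in (I, J, K)]
    A, B, C = factors
    T0 = T.reshape(I, -1)                      # mode-0 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(J, -1)   # mode-1 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(K, -1)   # mode-2 unfolding
    for _ in range(n_iter):
        A = T0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return [A, B, C]

# fit an exactly rank-2 synthetic tensor
rng = np.random.default_rng(3)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (6, 5, 4))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cpd_als(T, rank=2, n_iter=300)
rel = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T)
```

An adaptive variant would call `cpd_als` on each incoming sub-tensor with `factors` set to the previous estimate, so that only a few refinement iterations are needed per step.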