2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2016.7472867

Sketching for large-scale learning of mixture models

Abstract: Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. We propose a "compressive learning" framework where we estimate model parameters from a sketch of the training data. This sketch is a collection of generalized moments of the underlying probability distribution of the data. It can be computed in a single pass on the training set, and is easily computable on streams or distributed datasets. The proposed framework shares similarities with compressive sensing […]
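The sketch the abstract refers to is, in the paper, a vector of sampled characteristic-function values (random Fourier moments). A minimal sketch of that single-pass computation, with an illustrative merge helper for distributed chunks (NumPy; the function names are ours, not the authors'):

```python
import numpy as np

def compute_sketch(X, W):
    """Empirical sketch: generalized (Fourier) moments of the data.

    X : (n, d) array of training samples.
    W : (m, d) array of random frequency vectors, assumed given.
    Returns the m complex moments z_j = mean_i exp(i <w_j, x_i>).
    """
    # Single pass over the data; X @ W.T can be evaluated block by
    # block on a stream, so the dataset never has to fit in memory.
    return np.exp(1j * (X @ W.T)).mean(axis=0)

def merge_sketches(sketches, counts):
    # Sketches of disjoint chunks average (weighted by chunk size)
    # into the sketch of the union, hence the easy distribution.
    return np.average(sketches, axis=0, weights=np.asarray(counts, float))
```

Because merging is just a weighted average, each worker can sketch its shard independently, and a single m-dimensional vector per worker is all that travels over the network.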

Cited by 14 publications (11 citation statements: 0 supporting, 11 mentioning, 0 contrasting)
References 47 publications
“…Some techniques might further reduce these complexities. As detailed in [23], most operations in CKM can be narrowed down to performing multiplications by W and W^T. Therefore, both computing the sketch and performing CKM could benefit from the replacement of W by a suitably randomized fast transform [6,7].…”
Section: Complexity of the Methods (mentioning)
confidence: 99%
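The complexity point in this statement can be made concrete: the sketch applies W once per sample, so the per-sample cost is that of one matrix-vector product, and a structured operator brings it down from O(m·d) to roughly O(d log d). A rough illustration, with a DCT standing in for the randomized fast transforms of [6,7] (the actual constructions differ):

```python
import numpy as np
from scipy.fft import dct

def apply_dense_W(W, X):
    # Dense frequencies: O(m*d) per sample, the dominant cost in both
    # sketching and the CKM decoding iterations.
    return X @ W.T

def apply_fast_W(signs, X):
    # Structured stand-in for W: random sign flips followed by a fast
    # orthogonal transform, O(d log d) per sample. Illustrative only;
    # the randomized fast transforms of [6,7] are built differently
    # but have the same cost profile.
    return dct(X * signs, axis=1, norm="ortho")
```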
“…We compare our Matlab implementation of CKM, available at [23], with Matlab's kmeans function that implements Lloyd-Max.…”
Section: Setup (mentioning)
confidence: 99%
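For reference, the Lloyd-Max baseline implemented by Matlab's kmeans alternates nearest-centroid assignment with mean updates over the full dataset at every iteration, which is exactly the repeated data access that CKM's single-pass sketch avoids. A minimal version (NumPy, ours for illustration):

```python
import numpy as np

def lloyd_max(X, K, n_iter=100, rng=None):
    """Plain Lloyd-Max k-means on the full data X of shape (n, d)."""
    rng = np.random.default_rng(rng)
    # Initialize centroids as K distinct random samples.
    C = X[rng.choice(len(X), size=K, replace=False)].astype(float)
    for _ in range(n_iter):
        # Assignment step: nearest centroid for every sample.
        labels = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        # Update step: each centroid moves to the mean of its points.
        for k in range(K):
            if (labels == k).any():
                C[k] = X[labels == k].mean(axis=0)
    return C, labels
```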
“…For Compressive k-means and Compressive Gaussian Mixture Modeling (cf. the companion paper [Gribonval et al., 2020]), the resulting optimization problem has been empirically addressed through the CL-OMPR algorithm [Keriven et al., 2015, 2016]. Algorithmic success guarantees are an interesting challenge.…”
Section: Discussion (mentioning)
confidence: 99%
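CL-OMPR itself is a greedy, OMP-like decoder that works on the m-dimensional sketch instead of the data. A heavily simplified single-run sketch for the compressive k-means case (Python; the published algorithm adds replacement over 2K iterations, multiple initializations, and a final joint adjustment of all centroids, all omitted here):

```python
import numpy as np
from scipy.optimize import minimize, nnls

def atom(W, c):
    # Sketch of a single centroid c: its sampled characteristic function.
    a = np.exp(1j * (W @ c))
    return a / np.sqrt(len(a))

def clompr_kmeans(z, W, K, rng=None):
    """Greedy CL-OMPR-style decoding of K centroids from a sketch z."""
    rng = np.random.default_rng(rng)
    m, d = W.shape
    centroids, r = [], z.copy()        # support and residual
    for _ in range(K):
        # (1) New centroid: maximize correlation of its atom with
        #     the residual, from a random starting point.
        neg_corr = lambda c: -np.real(np.vdot(atom(W, c), r))
        centroids.append(minimize(neg_corr, rng.standard_normal(d)).x)
        # (2) Refit non-negative weights on the enlarged support
        #     (solved on stacked real/imaginary parts).
        A = np.stack([atom(W, c) for c in centroids], axis=1)
        w, _ = nnls(np.vstack([A.real, A.imag]),
                    np.concatenate([z.real, z.imag]))
        r = z - A @ w                  # (3) residual update
    return np.array(centroids), w
```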
“…In this section, we present the results of two numerical experiments used to test the performance of the CL-AMP, CL-OMPR, and k-means++ algorithms. For k-means++, we used the implementation provided by MATLAB and, for CL-OMPR, we downloaded the MATLAB implementation from [17] and enabled the "++" initialization method. CL-OMPR and CL-AMP used the same sketch y, whose frequency vectors W were drawn using the method described in [5].…”
Section: Numerical Experiments (mentioning)
confidence: 99%
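This experiment hinges on the frequency-design step, since both decoders share the same sketch y built from the frequency vectors W. The scheme in [5] draws directions uniformly on the sphere and radii from a distribution adapted to the cluster scale; a simplified Gaussian stand-in at an assumed scale sigma (ours, not the method of [5]):

```python
import numpy as np

def draw_frequencies(m, d, sigma, rng=None):
    # Simplified baseline: i.i.d. Gaussian frequencies with standard
    # deviation 1/sigma, sigma being a prior guess of the cluster
    # scale. The adapted radius distribution of [5] refines this.
    rng = np.random.default_rng(rng)
    return rng.normal(scale=1.0 / sigma, size=(m, d))
```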