2004
DOI: 10.1049/ip-vis:20040304

Simple mixture model for sparse overcomplete ICA

Abstract: The use of mixtures of Gaussians (MoGs) for noisy and overcomplete independent component analysis (ICA) when the source distributions are very sparse is explored. The sparsity model can often be justified if an appropriate transform, such as the modified discrete cosine transform, is used. Given the sparsity assumption, a number of simplifying approximations to the observation density are introduced that avoid exponential growth in the number of mixture components. An efficient clustering algorithm is derived whose compl…
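As context for the sparsity assumption the abstract describes, the following is a minimal, hypothetical sketch (not the paper's code; all parameter values are illustrative) of a two-state Gaussian mixture prior for transform coefficients: each coefficient is "inactive" with tiny variance or "active" with large variance, which produces a heavy-tailed, sparse marginal distribution.

```python
import numpy as np

# Illustrative two-state Gaussian mixture sparse prior (values are assumptions,
# not taken from the paper). Most coefficients are near zero; a few are large.
rng = np.random.default_rng(0)
n = 100_000
p_active = 0.1                      # prior probability a coefficient is active
sigma_small, sigma_big = 0.01, 1.0  # "inactive" vs "active" standard deviations

state = rng.random(n) < p_active
s = np.where(state,
             rng.normal(0.0, sigma_big, n),
             rng.normal(0.0, sigma_small, n))

# Excess kurtosis far above 0 confirms the marginal is much heavier-tailed
# (i.e. sparser) than any single Gaussian.
excess_kurtosis = np.mean(s**4) / np.mean(s**2)**2 - 3.0
```

For these illustrative parameters the excess kurtosis is large (on the order of tens), reflecting the strongly peaked, heavy-tailed shape that motivates the sparsity assumption.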

Cited by 80 publications (50 citation statements)
References 17 publications
“…3 Moreover, the parameter , which determines the width of the Cauchy distribution, is assumed to be small enough compared to the distance between the centers of the Cauchy distributions. 4 A typical figure of these two Cauchy distributions is depicted in Fig. 1.…”
Section: System Model and Preliminaries (mentioning)
confidence: 99%
“…(3) is a very convenient property which will help us derive posterior distributions of the parameters in the implementation of the Gibbs sampler. The Student t can be interpreted as an infinite sum of Gaussians, which contrasts with the finite sums of Gaussians used in [3,4]. In the following, we note…”
Section: ) (mentioning)
confidence: 99%
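The scale-mixture identity this citation relies on — that a Student-t arises as an infinite (Gamma-weighted) mixture of Gaussians — can be checked numerically. This is a hedged sketch with illustrative values (nu = 5, evaluation point x = 1), not code from either paper: if lambda ~ Gamma(nu/2, rate = nu/2) and x | lambda ~ N(0, 1/lambda), the marginal of x is Student-t with nu degrees of freedom.

```python
import numpy as np
from math import gamma, sqrt, pi

nu, x = 5.0, 1.0
rng = np.random.default_rng(42)
# Gamma(shape=nu/2, rate=nu/2) precisions; NumPy uses scale = 1/rate = 2/nu.
lam = rng.gamma(shape=nu / 2, scale=2.0 / nu, size=500_000)

# Monte Carlo estimate of the marginal density: average the Gaussian density
# N(x; 0, 1/lambda) over the Gamma-distributed precisions.
mc_density = np.mean(np.sqrt(lam / (2 * pi)) * np.exp(-lam * x * x / 2))

# Closed-form Student-t density at the same point, for comparison.
t_density = (gamma((nu + 1) / 2)
             / (sqrt(nu * pi) * gamma(nu / 2))
             * (1 + x * x / nu) ** (-(nu + 1) / 2))
```

The two densities agree to within Monte Carlo error, which is the "infinite sum of Gaussians" interpretation in action; the finite 2- or 3-state mixtures of [3,4] truncate this continuum to a handful of variance states.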
“…More specifically, in [3,4] the coefficients of the representations of the sources in the dictionary are given a discrete mixture of Gaussian distributions with 2 or 3 states (one Gaussian with very small variance, the other(s) with big variance) and a probabilistic framework is presented for the estimation of the mixing matrix and the sources. In particular, in [4], the authors use EM optimisation and present results with speech signals decomposed on a MDCT orthogonal basis [5].…”
Section: Introduction (mentioning)
confidence: 99%
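The noisy overcomplete setting this citation summarises — more sources than sensors, each source following a 2-state sparse prior — can be sketched as follows. All names and parameter values here are illustrative assumptions, not the paper's implementation. Because at most one source tends to be active at a time, the observations cluster along the columns of the mixing matrix, which is what direction-clustering estimators of the kind the abstract derives exploit.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources, n_samples = 2, 3, 20_000

# Random overcomplete mixing matrix with unit-norm columns (illustrative).
A = rng.normal(size=(n_sensors, n_sources))
A /= np.linalg.norm(A, axis=0)

# Two-state sparse sources plus small sensor noise: x = A s + n.
p_active, sigma_small, sigma_big, sigma_noise = 0.05, 0.01, 1.0, 0.01
active = rng.random((n_sources, n_samples)) < p_active
S = np.where(active,
             rng.normal(0.0, sigma_big, (n_sources, n_samples)),
             rng.normal(0.0, sigma_small, (n_sources, n_samples)))
X = A @ S + rng.normal(0.0, sigma_noise, (n_sensors, n_samples))

# Keep only high-energy observations (likely a single active source) and
# measure how well each aligns with its closest mixing column.
energetic = np.linalg.norm(X, axis=0) > 0.5
U = X[:, energetic] / np.linalg.norm(X[:, energetic], axis=0)
cos_to_best = np.max(np.abs(A.T @ U), axis=0)   # |cosine| to nearest column
```

Under these assumed parameters the vast majority of energetic observations lie almost exactly on a mixing column, so clustering their directions recovers the columns of A up to permutation and sign.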
“…In the audio domain, sparse prior distributions are usually a Laplacian [1], a generalized Gaussian [2], a Student-t [3], or a mixture of two Gaussians [4].…”
Section: Introduction (mentioning)
confidence: 99%