2004
DOI: 10.1007/978-3-540-30110-3_55

Soft-LOST: EM on a Mixture of Oriented Lines

Abstract: Robust clustering of data into overlapping linear subspaces is a common problem. Here we consider one-dimensional subspaces that cross the origin. This problem arises in blind source separation, where the subspaces correspond directly to columns of a mixing matrix. We present an algorithm that identifies these subspaces using an EM procedure, where the E-step calculates posterior probabilities assigning data points to lines and the M-step repositions the lines to match the points assigned to them. This m…
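The EM procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the noise scale `sigma`, the initialization, and the fixed iteration count are all assumptions made for the example.

```python
import numpy as np

def soft_lost_em(X, K, n_iter=50, sigma=0.1, V0=None, seed=0):
    """EM for a mixture of K one-dimensional lines through the origin.

    A sketch of the idea in the abstract; `sigma`, the initialization,
    and the iteration count are illustrative assumptions.
    X: (n, d) data. Returns (K, d) unit direction vectors, one per line.
    """
    rng = np.random.default_rng(seed)
    V = rng.normal(size=(K, X.shape[1])) if V0 is None else np.array(V0, float)
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: squared distance from each point to each line, turned
        # into posterior responsibilities over lines.
        proj = X @ V.T                                  # (n, K) projections
        d2 = (X ** 2).sum(axis=1, keepdims=True) - proj ** 2
        d2 -= d2.min(axis=1, keepdims=True)             # stabilize the exp
        R = np.exp(-d2 / (2 * sigma ** 2))
        R /= R.sum(axis=1, keepdims=True)
        # M-step: reposition each line as the principal eigenvector of the
        # responsibility-weighted scatter matrix of its assigned points.
        for k in range(K):
            S = (X * R[:, [k]]).T @ X
            V[k] = np.linalg.eigh(S)[1][:, -1]          # top eigenvector
    return V
```

Since each line is a one-dimensional subspace through the origin, the M-step reduces to a weighted principal-component computation per line; the recovered directions then estimate the mixing-matrix columns up to order and sign.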

Cited by 48 publications (36 citation statements) | References 6 publications
“…In Table I, the mean of the performance index depending on various parameters is presented. Noting that the mean error when using random (2 × 3)-matrices with coefficients uniformly taken from [−1, 1] is , we observe good performance, especially for a larger kernel radius and a higher approximation parameter , also compared with Soft-LOST's [6]. As an example, in a higher mixture dimension, three speech signals are mixed by a column-normalized (3 × 3)-mixing matrix .…”
Section: Results
confidence: 73%
See 2 more Smart Citations
“…In Table I, the mean of the performance index depending on various parameters is presented. Noting that the mean error when using random (2 3)-matrices with coefficients uniformly taken from [ 1,1] is , we observe good performance, especially for a larger kernel radius and higher approximation parameter , also compared with Soft-LOST's [6]. As an example, in higher mixture dimension, three speech signals are mixed by a column-normalized (3 3)-mixing matrix .…”
Section: Resultsmentioning
confidence: 73%
“…However, how to do this algorithmically is far from obvious, and although quite a few algorithms have been proposed recently [4]–[6], performance is still limited. The most commonly used overcomplete algorithms rely on sparse sources (after possible sparsification by preprocessing), which can be identified by clustering, usually by k-means or some extension [5], [6]. However, apart from the fact that theoretical justifications have not been found, mean-based clustering only identifies the correct mixing matrix if the data density approaches a delta distribution.…”
confidence: 99%
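The clustering strategy this quote refers to — normalizing sparse mixture samples and clustering their directions to recover the mixing-matrix columns — can be sketched as below. The energy threshold, the sign-folding, and the cosine-distance k-means loop are illustrative assumptions, not the cited algorithms' exact details.

```python
import numpy as np

def estimate_mixing_by_clustering(X, K, n_iter=50, C0=None, seed=0):
    """Estimate mixing-matrix columns from sparse mixtures by clustering
    normalized observation directions with a k-means-style loop.

    A hedged sketch: the threshold, sign-folding, and cosine k-means are
    assumptions for illustration. X: (n, d) mixture samples. Returns a
    (d, K) matrix whose columns estimate the mixing columns up to order
    and sign.
    """
    norms = np.linalg.norm(X, axis=1)
    U = X[norms > 1e-6] / norms[norms > 1e-6, None]  # project to unit sphere
    U[U[:, 0] < 0] *= -1                             # fold antipodal points
    if C0 is None:
        rng = np.random.default_rng(seed)
        C = U[rng.choice(len(U), K, replace=False)].copy()
    else:
        C = np.array(C0, float)
        C /= np.linalg.norm(C, axis=1, keepdims=True)
    for _ in range(n_iter):
        labels = np.argmax(U @ C.T, axis=1)          # nearest center by cosine
        for k in range(K):
            pts = U[labels == k]
            if len(pts):                             # keep old center if empty
                c = pts.mean(axis=0)
                C[k] = c / np.linalg.norm(c)
    return C.T
```

The quote's caveat applies directly to this sketch: the cluster means only coincide with the true columns when the source density concentrates tightly around the line directions, i.e. approaches a delta distribution.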
“…Each source is modeled by (7). We also assume that the PDOA for an observed mixture follows a Gaussian mixture model (GMM):…”
Section: Probabilistic Model
confidence: 99%
“…In order to avoid one cluster being modeled by two or more Gaussians, and thus make it possible to estimate the number of sources correctly, we propose utilizing a sparse distribution modeled by the Dirichlet distribution as the prior of the GMM mixture weights. The authors of [6, 7] also derived the EM algorithm; however, they still needed to know the number of sources N_s in advance. Our proposed algorithm, on the other hand, does not require information on the source number, thanks to the weight prior.…”
Section: Introduction
confidence: 99%
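The idea in this quote — a sparse Dirichlet prior on GMM mixture weights that lets superfluous components switch off, so the source count need not be known in advance — can be sketched with a MAP-EM update. This is an illustration of the general technique, not the cited authors' algorithm; `K`, `alpha`, and the initialization are assumptions.

```python
import numpy as np

def map_em_gmm(x, K=6, alpha=0.5, n_iter=100, mu0=None, seed=0):
    """MAP-EM for a 1-D Gaussian mixture with a sparse Dirichlet(alpha < 1)
    prior on the mixture weights.

    A hedged sketch: the prior adds (alpha - 1) to each component's expected
    count, so a component whose count falls below 1 - alpha gets weight
    exactly zero and switches off. Settings are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, K, replace=False) if mu0 is None else np.array(mu0, float)
    var = np.full(K, x.var() + 1e-6)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: log-responsibilities under the current Gaussians
        logp = (np.log(pi + 1e-300) - 0.5 * np.log(2 * np.pi * var)
                - (x[:, None] - mu) ** 2 / (2 * var))
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
        Nk = R.sum(axis=0)
        # M-step with MAP weights: clipping at zero prunes weak components
        pi = np.maximum(Nk + alpha - 1.0, 0.0)
        pi /= pi.sum()
        safe = np.maximum(Nk, 1e-12)
        mu = (R * x[:, None]).sum(axis=0) / safe
        var = (R * (x[:, None] - mu) ** 2).sum(axis=0) / safe + 1e-6
    return pi, mu, var
```

With `alpha < 1` the MAP weight update is sparsity-inducing, whereas the ordinary maximum-likelihood update (`pi = Nk / n`) never sets a weight exactly to zero; this is the mechanism that removes the need to fix the source number in advance.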