2009
DOI: 10.1016/j.sigpro.2008.12.005
K-hyperline clustering learning for sparse component analysis

Cited by 67 publications (56 citation statements)
References 36 publications (53 reference statements)
“…The sparse components of s(t) satisfy the disjoint orthogonality condition, i.e., s_i(t) s_j(t) ≈ 0 (i, j ∈ {1, 2, …, n}). Under this strict assumption, the clustering problem can be converted into solving the following optimization problem [13]:…”
Section: Background of K-Hyperline Clustering
confidence: 99%
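The disjoint orthogonality condition quoted above can be checked numerically: for sufficiently sparse sources, the pointwise products s_i(t) s_j(t) vanish at almost every instant. A minimal sketch (the 5% activity level and Gaussian amplitudes are assumptions chosen only for illustration):

```python
import numpy as np

# Disjoint orthogonality demo: sources that are rarely active at the
# same time instant have pointwise products s_i(t) s_j(t) that are
# (approximately) zero. Sparsity model is an illustrative assumption.
rng = np.random.default_rng(0)
n, T = 3, 10_000
# Each source is active at roughly 5% of the time instants.
S = rng.standard_normal((n, T)) * (rng.random((n, T)) < 0.05)
# Fraction of instants where sources 0 and 1 are simultaneously active,
# i.e., where s_0(t) s_1(t) != 0.
overlap = np.mean(S[0] * S[1] != 0)
```

With independent 5%-sparse supports, the expected overlap is about 0.25% of the samples, which is why the approximation s_i(t) s_j(t) ≈ 0 is reasonable.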
“…Moreover, He et al [13] designed a so-called K-hyperline clustering (K-HLC) learning algorithm to improve the performance of the K-SVD algorithm [14,15], in which the procedure of mixing-matrix identification is composed of two stages: hyperline identification and hyperline number detection. In the first stage, the K-means clustering method [16] combined with the eigenvalue decomposition (EVD) is used to cluster the data and find the hyperline of each cluster set; in the second stage, an eigenvalue-gap-based detection method [17][18][19] is employed to determine the true number of sources.…”
Section: Introduction
confidence: 99%
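The hyperline-identification stage described above can be sketched as a K-means-style alternation: assign each sample to the line (through the origin) with the largest absolute projection, then re-estimate each line's direction as the principal singular vector of its cluster. This is an illustrative sketch of the general K-hyperline idea, not the authors' exact algorithm:

```python
import numpy as np

def k_hyperline(X, K, n_iter=50, seed=0):
    """Cluster the columns of X (m x T) onto K lines through the origin.

    Assignment: each sample goes to the direction w_k maximizing |w_k^T x|.
    Update: each direction becomes the principal left singular vector
    (equivalently, the top eigenvector of the cluster covariance) of its
    assigned samples. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    m, T = X.shape
    # Initialize with random unit directions.
    W = rng.standard_normal((m, K))
    W /= np.linalg.norm(W, axis=0)
    for _ in range(n_iter):
        # Assignment step: nearest hyperline by absolute projection.
        labels = np.argmax(np.abs(W.T @ X), axis=0)
        # Update step: principal direction of each cluster.
        for k in range(K):
            Xk = X[:, labels == k]
            if Xk.shape[1] == 0:
                # Empty cluster: reinitialize from a random sample.
                j = rng.integers(T)
                W[:, k] = X[:, j] / np.linalg.norm(X[:, j])
                continue
            U, _, _ = np.linalg.svd(Xk, full_matrices=False)
            W[:, k] = U[:, 0]
    return W, labels
```

Because the assignment uses |w_k^T x|, all samples lying on one source line always fall into the same cluster regardless of sign or scale, which is what lets the SVD update recover each mixing-matrix column up to sign.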
“…In practical situations, however, an underdetermined separation problem is usually encountered. A widely used method for tackling this problem is based on sparse signal representations [22][23][24][25][26][27][28][29], where the sources are assumed to be sparse in either the time domain or a transform domain, such that the overlap between the sources at each time instant (or time-frequency point) is minimal. Audio signals (such as music and speech) become sparser when transformed into the time-frequency domain; therefore, using such a representation, each source within the mixture can be identified from the probability that each time-frequency point of the mixture is dominated by a particular source, using either sparse coding [26] or time-frequency masking [30][31][33], based on the evaluation of various cues from the mixtures, including e.g.…”
Section: Introduction
confidence: 99%
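The time-frequency masking idea in the excerpt above can be illustrated with a toy oracle binary mask on synthetic sparse spectrograms: each T-F point of the mixture is assigned entirely to whichever source dominates it. The array sizes, sparsity level, and variable names here are arbitrary assumptions for the demo:

```python
import numpy as np

# Toy binary time-frequency masking with an oracle mask. Synthetic
# magnitude spectrograms stand in for real STFTs; when the sources are
# sparse in the T-F domain, their supports rarely overlap, so keeping
# only the bins dominated by source 1 recovers it well.
rng = np.random.default_rng(0)
F, T = 64, 40
# Each source is active at ~10% of the T-F bins.
S1 = rng.random((F, T)) * (rng.random((F, T)) < 0.1)
S2 = rng.random((F, T)) * (rng.random((F, T)) < 0.1)
mix = S1 + S2
mask1 = S1 > S2              # binary mask: bins where source 1 dominates
est1 = mix * mask1           # masked mixture approximates source 1
# Relative error is small because overlapping active bins are rare.
err = np.linalg.norm(est1 - S1) / np.linalg.norm(S1)
```

The estimate is exact except at the few bins where both sources are active, which is precisely the regime the disjoint-sparsity assumption targets.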
“…Several novel algorithms have been developed recently, such as TIFROM (TIme-Frequency Ratio Of Mixtures) [7], DEMIX [8] and uniform clustering [9], that try to overcome some weak points of the basic algorithms. These improvements focus on making clustering more efficient in the attenuation-rate and delay-time spaces.…”
Section: Introduction
confidence: 99%