2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2015.7177971
Relative group sparsity for non-negative matrix factorization with application to on-the-fly audio source separation

Abstract: We consider dictionary-based signal decompositions with group sparsity, a variant of structured sparsity. We point out that the group sparsity-inducing constraint alone may not be sufficient in some cases, when we know that certain bigger groups, or so-called supergroups, cannot vanish completely. To deal with this problem we introduce the notion of relative group sparsity, which prevents the supergroups from vanishing. In this paper we formulate practical criteria and algorithms for relative group sparsity as applied to…
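For orientation, group-sparse NMF in this line of work typically augments a measure of fit with a group-sparsity penalty on the activation matrix. A minimal sketch, assuming the common log/ℓ1 group penalty (the divergence D and the exact penalty form are assumptions here, not quoted from the paper):

\min_{W \ge 0,\, H \ge 0} \; D(V \,\|\, WH) + \lambda \sum_{g=1}^{G} \log\big(\epsilon + \|H_g\|_1\big)

where V is the mixture spectrogram, W the dictionary, H_g the rows of the activation matrix H belonging to group g, \lambda a trade-off weight, and \epsilon > 0 a small constant. The log of the group norm pushes entire groups of activations toward zero, so only a few dictionary groups are retained to explain the mixture.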

Cited by 6 publications (8 citation statements). References 17 publications.
“…This article extends our preliminary work [24], [27] by providing the algorithms along with their mathematical derivations, in addition to new results from a user test. Altogether, the main contributions of our proposed on-the-fly paradigm are four-fold:…”
Section: Introduction
confidence: 97%
“…prevents them from vanishing entirely). In other words, the group sparsity property is now considered relative to the corresponding supergroup H (j) and not within the full set of coefficients in H. It is formulated as [27] …”
Section: Group Sparsity
confidence: 99%
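To illustrate the relative notion quoted above (the equation itself is truncated in the excerpt): a plausible form divides each group norm by the norm of its enclosing supergroup; the exponent \eta and the ℓ1 norms are assumptions for illustration, not the verbatim expression from [27]:

\Phi(H) = \lambda \sum_{j} \sum_{g \in \mathcal{G}_j} \log\Bigg(\epsilon + \frac{\|H_g\|_1}{\|H^{(j)}\|_1^{\eta}}\Bigg)

where \mathcal{G}_j indexes the groups inside supergroup H^{(j)}. With \eta > 1, the penalty diverges if the whole supergroup shrinks to zero, so sparsity is promoted among the groups within each supergroup while the supergroup itself is prevented from vanishing.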
“…The term universal model is also an analogy to the universal background models for speaker verification addressed in [10]. This idea of using a generic spectral model was then exploited in the context of on-the-fly source separation [11,12], where any kind of audio source can be separated with guidance from examples collected via a search engine. Motivated by the above-mentioned works, we propose in this paper to learn two generic spectral models, for speech and background noise independently, in advance.…”
Section: Introduction
confidence: 99%
“…Firstly, compared to [8], where only the universal speech model was pre-learned and the noise model was adapted during the separation process, we consider learning the universal noise model as well, since noise examples can easily be collected in advance and doing so would potentially improve the separation quality. Secondly, compared to [8] and [11,12], where either a block-sparsity-inducing penalty or a component-sparsity-inducing penalty was used, we propose in this paper a combination of these two penalties, which would offer better estimation of the parameters during model fitting.…”
Section: Introduction
confidence: 99%
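The combination described in the quote above can be sketched as a sum of a block (group) penalty and a component penalty on the activations; the weights \lambda_b, \lambda_c and the log/ℓ1 form are illustrative assumptions rather than the exact expression from the cited paper:

\Omega(H) = \lambda_b \sum_{g} \log\big(\epsilon + \|H_g\|_1\big) + \lambda_c \sum_{k} \log\big(\epsilon + \|h_k\|_1\big)

where H_g stacks the rows of H belonging to one source model (a block) and h_k is a single row, i.e., the activations of one spectral component. The block term selects which universal models are active; the component term enforces sparsity within the selected models.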
“…Furthermore, some modified group sparsity constraints have been proposed to improve the performance. For example, Badawy proposed relative group sparsity [13] to prevent the activations corresponding to one universal source model from vanishing altogether. Hurmalainen introduced a quadratic penalty function into group sparsity that permits dynamic relationships between basis vectors or groups, since the basic form of group sparsity assumes the independence of different groups without considering which groups will activate, alone or together [14].…”
Section: Introduction
confidence: 99%