2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2018.8462095

Blind Source Separation Using Mixtures of Alpha-Stable Distributions

Abstract: We propose a new blind source separation algorithm based on mixtures of α-stable distributions. Complex symmetric α-stable distributions have recently been shown to better model audio signals in the time-frequency domain than classical Gaussian distributions, thanks to their larger dynamic range. However, inference with these models is notoriously hard to perform because their probability density functions do not have a closed-form expression in general. Here, we introduce a novel method for estimating mixture…
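The "larger dynamic range" claim can be illustrated with a minimal sketch that is not the paper's algorithm: draw samples from a symmetric α-stable law with SciPy's levy_stable distribution and compare their spread with Gaussian samples. The value α = 1.2 and the percentile used to quantify dynamic range are illustrative assumptions.

```python
# Minimal illustration (not the paper's algorithm) of the heavy tails of symmetric
# alpha-stable distributions compared with a Gaussian of the same scale.
import numpy as np
from scipy.stats import levy_stable, norm

n = 100_000
alpha = 1.2  # 0 < alpha <= 2; alpha = 2 recovers the Gaussian, smaller alpha means heavier tails
sas = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=1.0, size=n, random_state=0)  # beta = 0: symmetric
gauss = norm.rvs(loc=0.0, scale=1.0, size=n, random_state=0)

# Heavy tails translate into a much larger dynamic range for the stable samples.
for name, x in [("symmetric alpha-stable", sas), ("Gaussian", gauss)]:
    print(f"{name:22s} 99.9th percentile of |x|: {np.percentile(np.abs(x), 99.9):8.1f}")
```

The paper models complex time-frequency coefficients, for which an isotropic complex symmetric α-stable variable would be used instead; the scalar case above is only meant to show the heavy-tail behavior.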

Cited by 5 publications (5 citation statements)
References 28 publications

“…To solve clustering and density modeling tasks in a compressive manner, previous works focused on random Fourier features (RFF), which consist in using the complex exponential as the nonlinear function [44] in the feature map. This has been applied to clustering and fitting parametric mixture models, such as Gaussian mixture models [33] or alpha-stable distributions [35].…”
Section: Definition 2 (K-means Clustering Task) Given an Integer > 0, K-means Clustering Consists in Finding Centroids
mentioning
confidence: 99%
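The random-Fourier-feature sketching idea described in the excerpt above can be illustrated with a short, hedged sketch: the dataset is summarized by the empirical average of complex exponentials at random frequencies. The Gaussian frequency distribution, the sketch size m = 256, and the function name rff_sketch are illustrative assumptions, not the cited papers' exact pipeline.

```python
# Hedged sketch of RFF-based "sketching": summarize an n x d dataset X by the
# empirical average of complex exponentials at m random frequencies.
import numpy as np

def rff_sketch(X, m=256, scale=1.0, seed=0):
    """Return a complex sketch z in C^m of the data matrix X (shape n x d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Omega = rng.normal(0.0, scale, size=(d, m))   # random frequencies (assumed Gaussian)
    # z_j = (1/n) * sum_i exp(i * x_i^T omega_j)
    return np.exp(1j * (X @ Omega)).mean(axis=0)

# Usage: the summary has a fixed size m regardless of the number of samples n.
X = np.random.default_rng(1).normal(size=(10_000, 5))
z = rff_sketch(X)
print(z.shape)  # (256,)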
“…The parameters can then be extracted by optimizing a cost function as explained later. This approach has been applied to audio source separation [12] as well as speaker verification [10] (see Box 4), for which it was shown that 1000 hours of speech can be compressed down to a few kilobytes without loss of verification performance.…”
Section: C) Gaussian-Mixture Modeling
mentioning
confidence: 99%
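The "cost function" mentioned in this excerpt is, in compressive Gaussian-mixture fitting, typically a distance between the empirical data sketch and the sketch predicted by the model. Below is a hedged illustration under simplifying assumptions (diagonal covariances, squared Euclidean cost); it is not CL-OMPR or the cited papers' exact objective, and the function names are made up for this sketch.

```python
# Hedged illustration of a sketch-matching cost for GMM fitting: compare the data
# sketch with the model's characteristic function at the same frequencies Omega (d x m).
import numpy as np

def gmm_sketch(weights, means, variances, Omega):
    """Characteristic function of a diagonal-covariance GMM evaluated at the columns of Omega."""
    z = np.zeros(Omega.shape[1], dtype=complex)
    for w, mu, var in zip(weights, means, variances):
        # E[exp(i omega^T x)] for x ~ N(mu, diag(var)) = exp(i omega^T mu - 0.5 * sum_k var_k omega_k^2)
        z += w * np.exp(1j * (mu @ Omega) - 0.5 * (var @ Omega**2))
    return z

def sketch_matching_cost(z_data, weights, means, variances, Omega):
    """Squared distance between the empirical data sketch and the GMM model sketch."""
    return float(np.sum(np.abs(z_data - gmm_sketch(weights, means, variances, Omega))**2))
```

Minimizing such a cost over the mixture parameters, with a greedy or gradient-based optimizer, is what "optimizing a cost function" refers to in this line of work.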
“…In [10] and [12], applications of sketched learning are demonstrated on speaker verification (see Box 4) and source separation.…”
Section: Learning From a Sketch
mentioning
confidence: 99%
“…An algorithm for learning a patch prior from a sketch: LR-COMP (Low-Rank Continuous Orthogonal Matching Pursuit). Problem (4.7) can be solved approximately using the greedy Compressive Learning OMP called CL-OMP and a variation of CL-OMP called CL-OMPR [25,26]. These algorithms are based on Matching Pursuit [34], Orthogonal Matching Pursuit [39] and Orthogonal Matching Pursuit with Replacement [23] for classical compressive sensing, which handle sparse approximation problems.…”
mentioning
confidence: 99%
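For context on the greedy algorithms cited in this excerpt, here is a minimal sketch of classical Orthogonal Matching Pursuit for the sparse approximation problem. CL-OMP and CL-OMPR adapt this select-then-refit loop to continuously parameterized atoms and sketched objectives, which this plain NumPy version does not attempt to reproduce.

```python
# Minimal sketch of classical Orthogonal Matching Pursuit (OMP): greedily build a
# k-sparse approximation of y over the columns (atoms) of a dictionary D.
import numpy as np

def omp(D, y, k):
    """Greedy k-sparse approximation of y over the columns of D (shape n x p)."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # 1) Select the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # 2) Re-fit all selected coefficients by least squares (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        # 3) Update the residual.
        residual = y - D @ x
    return x

# Usage: recover a 3-sparse vector from a random normalized dictionary.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256)); D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256); x_true[[5, 40, 200]] = [1.0, -2.0, 0.5]
print(np.nonzero(omp(D, D @ x_true, 3))[0])  # typically recovers [5, 40, 200]
```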