2014
DOI: 10.1109/tcyb.2013.2281332

Machine Learning Source Separation Using Maximum a Posteriori Nonnegative Matrix Factorization

Abstract: A novel unsupervised machine learning algorithm for single channel source separation is presented. The proposed method is based on nonnegative matrix factorization, which is optimized under the framework of maximum a posteriori probability and Itakura-Saito divergence. The method enables a generalized criterion for variable sparseness to be imposed onto the solution and prior information to be explicitly incorporated through the basis vectors. In addition, the method is scale invariant where both low and high …
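The optimization the abstract describes, NMF under the Itakura-Saito divergence with a sparseness prior in a maximum a posteriori setting, can be sketched with standard multiplicative updates. The sketch below is illustrative only: the penalty weight lam, the initialization, and the update form are common IS-NMF choices with an L1 term on the activations, not the paper's exact algorithm.

```python
import numpy as np

def is_nmf_map(V, K, n_iter=200, lam=0.1, eps=1e-12):
    # Sketch: NMF under the Itakura-Saito divergence with an L1
    # (sparseness) penalty on the activations H, loosely mirroring a MAP
    # objective D_IS(V | WH) + lam * sum(H). Illustrative assumptions,
    # not the paper's exact updates.
    F, N = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, K)) + eps   # basis vectors (spectral templates)
    H = rng.random((K, N)) + eps   # activations
    for _ in range(n_iter):
        Vh = W @ H + eps
        # IS multiplicative update for H; lam in the denominator
        # plays the role of the sparseness (MAP prior) term.
        H *= (W.T @ (V / Vh**2)) / (W.T @ (1.0 / Vh) + lam)
        Vh = W @ H + eps
        W *= ((V / Vh**2) @ H.T) / ((1.0 / Vh) @ H.T)
        # Resolve the scaling ambiguity so the penalty on H is meaningful.
        s = W.sum(axis=0, keepdims=True)
        W /= s
        H *= s.T
    return W, H

# Usage: V is a nonnegative power spectrogram (frequency x time).
V = np.random.default_rng(1).random((64, 100)) ** 2
W, H = is_nmf_map(V, K=4)
```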

Cited by 48 publications (9 citation statements) · References 24 publications

“…However, the method requires statistical independence of the waveform within the duration of signal capture, an assumption that is often violated in practice, and it also demands a further pattern-identification step. Instead, we use a fixed-length segment drawn from the transient response [53,54], such that continuous transient slices of length N can be chopped out of a set of image sequences from t to…”
Section: Thermography Sparse Pattern Extraction, 1) Observation Model
confidence: 99%
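The slicing this excerpt describes, chopping continuous length-N transient slices out of an image sequence along the time axis, can be sketched as below. The array shape (frames, height, width) and the stride-1 overlap are assumptions, since the exact indexing is truncated in the excerpt.

```python
import numpy as np

def transient_slices(seq, N):
    # Extract every continuous slice of length N along the time axis of
    # an image sequence of shape (T, H, W). Stride-1 overlap is assumed.
    T = seq.shape[0]
    return np.stack([seq[t:t + N] for t in range(T - N + 1)])

seq = np.zeros((100, 32, 32))          # e.g. 100 thermography frames of 32x32
slices = transient_slices(seq, N=16)   # shape: (85, 16, 32, 32)
```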
“…Approximate sparsity is an important consideration, as such representations carry the important information. Many sparse solutions have been proposed in the last decade [19][20][21][22][23][24][25]. Nonetheless, the optimal sparse solution remains an open issue.…”
Section: Introduction
confidence: 99%
“…A sparseness constraint can be added to the cost function [26, 27, 28, 29, 30, 31], and this can be achieved by regularization using the L1-norm, leading to Sparse NMF (SNMF). Here, “sparseness” refers to a representational scheme where only a few units (out of a large population) are effectively used to represent typical data vectors.…”
Section: Introduction
confidence: 99%
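As a concrete instance of the L1-regularized cost this excerpt describes, here is a minimal SNMF sketch under the Euclidean objective ||V - WH||_F^2 + lam * sum(H). The function name, penalty weight, and normalization choice are illustrative assumptions, not any cited paper's exact algorithm.

```python
import numpy as np

def snmf(V, K, n_iter=200, lam=0.1, eps=1e-12):
    # Sparse NMF: minimize ||V - WH||_F^2 + lam * sum(H).
    # The L1 term appears as "+ lam" in H's multiplicative denominator,
    # so only a few activations per column stay effectively nonzero.
    F, N = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, K)) + eps
    H = rng.random((K, N)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
        # Normalize W (and rescale H) so sparsity is measured on H alone.
        s = W.sum(axis=0, keepdims=True)
        W /= s
        H *= s.T
    return W, H

# Usage on a nonnegative data matrix:
V = np.random.default_rng(1).random((64, 100))
W, H = snmf(V, K=8)
```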