2007
DOI: 10.1016/j.dsp.2007.02.009
Blind separation of nonlinear mixtures by variational Bayesian learning

Abstract: Blind separation of sources from nonlinear mixtures is a challenging and often ill-posed problem. We present three methods for solving this problem: an improved nonlinear factor analysis (NFA) method using a multilayer perceptron (MLP) network to model the nonlinearity, a hierarchical NFA (HNFA) method suitable for larger problems, and a post-nonlinear NFA (PNFA) method for more restricted post-nonlinear mixtures. The methods are based on variational Bayesian learning, which provides the needed regularisation and …
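Since the abstract only sketches the approach, the following is a minimal, self-contained illustration of the quantity variational Bayesian NFA optimises: a Monte Carlo estimate of the evidence lower bound (the negative of the variational free energy) for a model x = f(s) + noise, with a one-hidden-layer MLP as f and a diagonal-Gaussian approximation q(s). The dimensions, the tanh nonlinearity, and all parameter names are illustrative assumptions, not the authors' exact parameterisation.

```python
# Sketch of the objective behind variational Bayesian NFA: a Monte Carlo
# estimate of the ELBO for x = f(s) + noise, f a one-hidden-layer MLP.
# All shapes, names, and the tanh nonlinearity are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mlp(s, A, a, B, b):
    """One-hidden-layer MLP f(s) = B @ tanh(A @ s + a) + b."""
    return B @ np.tanh(A @ s + a) + b

def elbo(x, mu, log_var, params, sigma_x=0.1, n_samples=32):
    """Monte Carlo ELBO: E_q[log p(x|s)] - KL(q(s) || N(0, I))."""
    A, a, B, b = params
    std = np.exp(0.5 * log_var)
    recon = 0.0
    for _ in range(n_samples):
        eps = rng.standard_normal(mu.shape)
        s = mu + std * eps                     # reparameterised sample from q(s)
        r = x - mlp(s, A, a, B, b)             # residual under isotropic noise
        recon += -0.5 * (r @ r) / sigma_x**2 \
                 - x.size * np.log(sigma_x * np.sqrt(2 * np.pi))
    recon /= n_samples
    # Closed-form KL between N(mu, diag(exp(log_var))) and N(0, I).
    kl = 0.5 * np.sum(mu**2 + np.exp(log_var) - log_var - 1.0)
    return recon - kl

# Toy problem: 2 sources, 5 hidden units, 4 observed dimensions.
A = rng.standard_normal((5, 2)); a = rng.standard_normal(5)
B = rng.standard_normal((4, 5)); b = rng.standard_normal(4)
s_true = rng.standard_normal(2)
x = mlp(s_true, A, a, B, b) + 0.1 * rng.standard_normal(4)

print(elbo(x, mu=np.zeros(2), log_var=np.zeros(2), params=(A, a, B, b)))
```

In the full methods, q would also cover the network weights and noise parameters, and learning would ascend this bound with respect to both the variational and model parameters; the sketch above only evaluates the bound for fixed values.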

Cited by 21 publications (10 citation statements)
References 24 publications
“…Alternatively, maximization after marginalizing s out of the posterior is thought to prevent overfitting by avoiding arbitrary peaks of the joint posterior, leading to a better estimate of the posterior mass concentration. Variational Bayes (VB) approaches, such as ensemble learning [53], [54], carry out marginalization while approximating the joint posterior by a product of factors, each corresponding to a subset of the model (and noise) parameters θ . Marginalization can also simplify to the likelihood function when a “constant” prior is used for θ .…”
Section: A Unified Framework For Subspace Modeling and Development
Citation type: mentioning, confidence: 99%
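For reference, the factorised approximation this statement describes can be written out explicitly. The notation below is generic mean-field variational Bayes, assumed here for illustration rather than taken from any one of the cited methods.

```latex
% Mean-field approximation: the joint posterior over the sources s and the
% model (and noise) parameters theta is approximated by a product of factors.
q(\mathbf{s}, \boldsymbol{\theta}) \;=\; q(\mathbf{s}) \prod_{i} q(\boldsymbol{\theta}_i)

% Ensemble learning fits q by minimising the variational free energy,
% which equals the KL divergence to the true posterior up to the
% (constant) log evidence, so minimising it performs the marginalisation
% the statement refers to:
\mathcal{F}(q) \;=\; \int q(\mathbf{s}, \boldsymbol{\theta})
  \log \frac{q(\mathbf{s}, \boldsymbol{\theta})}{p(\mathbf{x}, \mathbf{s}, \boldsymbol{\theta})}
  \, d\mathbf{s} \, d\boldsymbol{\theta}
  \;=\; \mathrm{KL}\!\left( q \,\middle\|\, p(\mathbf{s}, \boldsymbol{\theta} \mid \mathbf{x}) \right)
  \;-\; \log p(\mathbf{x})
```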
“…Several extensions of the basic ICA model have been proposed to improve its performance; they are commonly referred to as semi-blind independent component analysis (see [5] for a review). The main approaches exploit nonlinear mixtures [20], temporal correlation [4], positivity [9], or sparsity [19,23,34]. An approach to modelling class-conditional densities based on an IFA model was also recently introduced in [25].…”
Section: Introduction
Citation type: mentioning, confidence: 99%
“…Bayesian NLICA methods were proposed in [62] and [63]. Some tests were conducted in [64] to compare the experimental recovery performance of both Bayesian and structurally constrained algorithms (PNL model) for nonlinearly mixed signals.…”
Section: As Described in Equation 26
Citation type: mentioning, confidence: 99%
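For context, the post-nonlinear (PNL) model referred to in this statement is the standard structurally constrained model in the nonlinear ICA literature: a linear mixture followed by componentwise invertible nonlinearities. The symbols below are the conventional ones, assumed here for illustration.

```latex
% Post-nonlinear (PNL) mixing: each observation x_i is a componentwise
% nonlinearity f_i applied to one row of a linear mixture of the sources
% s_j; the additive noise term n_i is included because the paper's PNFA
% method treats the noisy case (the classical PNL model omits it).
x_i(t) \;=\; f_i\!\left( \sum_{j} a_{ij}\, s_j(t) \right) + n_i(t),
\qquad i = 1, \dots, M
```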