2004
DOI: 10.1162/089976604773717595

Minimax Mutual Information Approach for Independent Component Analysis

Abstract: Minimum output mutual information is regarded as a natural criterion for independent component analysis (ICA) and is used as the performance measure in many ICA algorithms. Two common approaches in information theoretic ICA algorithms are minimum mutual information and maximum output entropy approaches. In the former approach, we substitute some form of probability density function (pdf) estimate into the mutual information expression, and in the latter we incorporate the source pdf assumption in the algorithm…
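As background for the abstract, the sketch below evaluates the generic minimum-output-mutual-information cost J(W) = Σ_i H(y_i) − log|det W| for y = Wx (the term H(x) is constant and dropped). The histogram entropy plug-in is an assumption made for the sketch; it is not the paper's minimax algorithm, which instead uses exponential-family maximum-entropy pdf estimates.

```python
import numpy as np

def marginal_entropy(samples, bins=50):
    """Plug-in (histogram) estimate of a 1-D differential entropy.
    The estimator choice is an assumption for this sketch."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    widths = np.diff(edges)
    p, w = hist[hist > 0], widths[hist > 0]
    return -np.sum(p * np.log(p) * w)

def mi_cost(W, X):
    """Minimum-output-mutual-information ICA cost, up to the constant H(x):
       J(W) = sum_i H(y_i) - log|det W|,  with y = W x."""
    Y = W @ X                               # X: (n_sources, n_samples)
    h_marginals = sum(marginal_entropy(y) for y in Y)
    return h_marginals - np.log(abs(np.linalg.det(W)))
```

A candidate demixing matrix W can be compared against another simply by evaluating mi_cost on the same observations; lower values indicate outputs that are closer to independent.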

Cited by 52 publications (29 citation statements)
References 30 publications (41 reference statements)
“…The natural cost in this context that leads to ICA is the mutual information among separated components, which can be shown to be equivalent to maximum likelihood estimation, and to negentropy maximization [40,59,61,64] when we constrain the demixing matrix to be orthogonal. In these approaches, one either estimates a parametric density model [61,63,65,85] along with the demixing matrix, or maximizes the information transferred in a network of non-linear units [58,67], or estimates the entropy using a parametric or nonparametric approach [58,63,68,69]. A recent semi-parametric approach uses the maximum entropy bound to estimate the entropy given the observations, and uses a numerical procedure thus resulting in accurate estimates for the entropy [42].…”
Section: GLM Group Analysis (mentioning)
confidence: 99%
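The statement above mentions estimating the entropy with parametric or nonparametric approaches. As one concrete instance, the sketch below uses Vasicek's m-spacing estimator; the choice of this particular estimator and its window heuristic are assumptions, not details taken from the cited works.

```python
import numpy as np

def spacing_entropy(x, m=None):
    """Vasicek m-spacing estimate of 1-D differential entropy:
       H ~ mean_i log( n/(2m) * (x_(i+m) - x_(i-m)) ),
    with order statistics clipped at the sample boundaries."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if m is None:
        m = int(round(np.sqrt(n)))                  # common window heuristic
    lo = np.clip(np.arange(n) - m, 0, n - 1)
    hi = np.clip(np.arange(n) + m, 0, n - 1)
    spacings = x[hi] - x[lo]
    spacings[spacings == 0] = np.finfo(float).eps   # guard against ties
    return np.mean(np.log(n / (2.0 * m) * spacings))
```

Such an estimator can be substituted for the marginal-entropy term of a mutual-information ICA cost when no parametric density model is assumed.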
“…Due to the independence assumptions, the following identities hold (Eq. (12)). Similar to the reasoning in Fact 1, we can show that (12) becomes zero if and only if g(x, w) = f(x) for all x in the support of p_X(·). Hence, minimizing (12) is necessary and sufficient for exact function matching.…”
Section: Marginal Distribution Based Criteria (supporting)
confidence: 60%
“…The coefficients of the polynomials can be estimated using the maximum likelihood principle or alternative analytical solutions such as Jaynes' maximum entropy principle [11,12]. Furthermore, in the application phase, the second term can be approximately optimized using a stochastic gradient approach where each new unlabeled input sample is utilized for a single-sample update.…”
Section: Algorithmic Possibilities (mentioning)
confidence: 99%
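The statement above describes using each new unlabeled input sample for a single-sample stochastic gradient update. The sketch below illustrates that update pattern only; the per-sample cost and its gradient are hypothetical placeholders, since the excerpt does not specify the second term being optimized.

```python
import numpy as np

def sgd_single_sample_update(w, x, grad_fn, lr=1e-3):
    """One stochastic-gradient step driven by a single unlabeled sample x.
    grad_fn(w, x) stands in for the per-sample gradient of the term being
    optimized; its exact form is not given in this excerpt."""
    return w - lr * grad_fn(w, x)

# Toy usage with a hypothetical per-sample cost 0.5 * (w . x)^2
grad = lambda w, x: (w @ x) * x
w = np.random.randn(3)
for x in np.random.randn(1000, 3):      # stream of unlabeled samples
    w = sgd_single_sample_update(w, x, grad)
```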
“…The minimax mutual information ICA algorithm [35] is an efficient and robust ICA algorithm motivated by the maximum entropy principle. The optimality criterion is the minimum output mutual information, where the estimated pdfs are from the exponential family and are approximate solutions to a constrained entropy maximization problem.…”
Section: Other Methods (mentioning)
confidence: 99%
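The statement above notes that the estimated pdfs come from the exponential family as approximate solutions to a constrained entropy maximization problem. The sketch below illustrates that subproblem only (not the full minimax ICA algorithm): a Jaynes maximum-entropy density whose moments match the sample moments is fit by minimizing the convex dual on a fixed grid. The polynomial measuring functions and the grid are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import minimize

def features(y):
    """Measuring functions f_k(y); low-order polynomials are an assumption."""
    return np.vstack([y, y**2, y**3, y**4])

def max_entropy_density(samples, grid=np.linspace(-6.0, 6.0, 2001)):
    """Exponential-family (maximum-entropy) density p(y) ~ exp(sum_k lam_k f_k(y))
    whose moments E[f_k(y)] match the sample moments, obtained by minimizing
    the convex dual  log Z(lam) - lam . mu_hat  over a fixed grid."""
    mu_hat = features(samples).mean(axis=1)        # empirical moment constraints
    F = features(grid)                             # feature values on the grid
    dy = grid[1] - grid[0]

    def dual(lam):
        z = np.clip(lam @ F, -700.0, 700.0)        # numerical guard for the sketch
        return np.log(np.sum(np.exp(z)) * dy) - lam @ mu_hat

    lam = minimize(dual, x0=np.zeros(F.shape[0]), method="Nelder-Mead").x
    unnorm = np.exp(np.clip(lam @ F, -700.0, 700.0))
    return grid, unnorm / (np.sum(unnorm) * dy)
```

In a mutual-information ICA cost, such a density estimate would supply the marginal entropies of the separated outputs while the demixing matrix is optimized.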