2021
DOI: 10.1162/neco_a_01378
On the Achievability of Blind Source Separation for High-Dimensional Nonlinear Source Mixtures

Abstract: For many years, a combination of principal component analysis (PCA) and independent component analysis (ICA) has been used for blind source separation (BSS). However, it remains unclear why these linear methods work well with real-world data that involve nonlinear source mixtures. This work theoretically validates that a cascade of linear PCA and ICA can solve a nonlinear BSS problem accurately when the sensory inputs are generated from hidden sources via nonlinear mappings with sufficient dimensionality. Our …
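The abstract's claim, that a cascade of linear PCA and ICA can undo a nonlinear high-dimensional mixture, can be illustrated with a toy sketch. This is a hypothetical setup, not the paper's experiments: two non-Gaussian sources are passed through a random 200-dimensional tanh mixture and then separated with scikit-learn's `PCA` and `FastICA`.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
n_samples, n_src, n_obs = 10_000, 2, 200

# Independent, non-Gaussian hidden sources.
s = rng.uniform(-1.0, 1.0, size=(n_samples, n_src))

# Nonlinear, high-dimensional mixture: random linear map followed by tanh.
A = rng.standard_normal((n_src, n_obs))
x = np.tanh(s @ A)

# Cascade of linear methods: PCA down to the source dimension, then ICA.
z = PCA(n_components=n_src).fit_transform(x)
s_hat = FastICA(n_components=n_src, random_state=0).fit_transform(z)

# BSS recovers sources only up to permutation and sign, so compare via
# absolute correlations between true and estimated sources.
corr = np.abs(np.corrcoef(s.T, s_hat.T)[:n_src, n_src:])
print(corr.round(2))
```

When the observation dimension is large, each row of `corr` should contain one entry close to 1, i.e., each hidden source is recovered up to sign and ordering despite the nonlinear mixing.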

Cited by 7 publications (8 citation statements)
References 64 publications
“…Methods C and D prove that PredPCA inherits preferable properties of both the standard PCA and AR models, and outperforms naïve PCA and AR models in the robustness to noise and generalization of prediction. Methods E and F demonstrate that PredPCA identifies the optimal hidden state estimator and the true system parameters with a global convergence guarantee, owing to asymptotic property of linear neural networks with high-dimensional inputs [33]. Methods G provides the simulation protocols.…”
Section: Methods
confidence: 99%
“…In particular, we demonstrated the following two key properties: (1) It is mathematically guaranteed that PredPCA can identify the optimal (explained below) hidden state representation and parameter estimators, up to a linear transformation that does not affect prediction accuracy, for general linear systems and, asymptotically, even for nonlinear systems (Methods E, F). While using a linear neural network for the encoding, the asymptotic linearization theorem [33] ensures that PredPCA will extract true hidden states when the mappings from hidden states to sensory inputs are sufficiently high-dimensional. Briefly, this is because projecting the high-dimensional input onto the directions of its major eigenvectors effectively magnifies the linearly transformed components of the hidden states included in the input, while filtering out the nonlinear components (see Methods E for its mathematical statement and the conditions for application; see [33] for the mathematical proof).…”
Section: Key Analytical Discoveries
confidence: 99%
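The filtering argument quoted above can be checked numerically with a small sketch (an assumed toy setting, not the construction in [33]): project a high-dimensional tanh mixture onto its top principal directions and measure how much of the projection is a linear function of the hidden states.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_src = 4_000, 2
s = rng.uniform(-1.0, 1.0, size=(n_samples, n_src))
s_c = s - s.mean(axis=0)

def linear_r2(n_obs):
    """R^2 of a linear fit from hidden states to the top principal-component
    projections of an n_obs-dimensional nonlinear mixture of those states."""
    A = rng.standard_normal((n_src, n_obs))
    x = np.tanh(s @ A)
    x -= x.mean(axis=0)
    # Major eigenvector directions of the input covariance.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    proj = x @ vt[:n_src].T
    # Least-squares fit: how well do the hidden states linearly explain
    # the projected input?
    _, res, *_ = np.linalg.lstsq(s_c, proj, rcond=None)
    return 1.0 - float(res.sum()) / float((proj ** 2).sum())

for d in (10, 100, 500):
    print(d, round(linear_r2(d), 3))
```

In this toy case the fit is already close to linear at modest dimension because tanh is mild; the quoted theorem concerns the asymptotic regime in which the nonlinear residuals are averaged out across many input dimensions, leaving the projection an essentially linear image of the hidden states.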