2019
DOI: 10.48550/arxiv.1905.09944
Preprint

Unsupervised Discovery of Temporal Structure in Noisy Data with Dynamical Components Analysis

Abstract: Linear dimensionality reduction methods are commonly used to extract low-dimensional structure from high-dimensional data. However, popular methods disregard temporal structure, rendering them prone to extracting noise rather than meaningful dynamics when applied to time series data. At the same time, many successful unsupervised learning methods for temporal, sequential and spatial data extract features which are predictive of their surrounding context. Combining these approaches, we introduce Dynamical Components Analysis…
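The abstract's core idea is to score a linear projection by how much temporal structure it preserves rather than how much variance it captures. As a rough illustration only, not the authors' released implementation, the sketch below estimates a block-Toeplitz spatiotemporal covariance from a time series and scores a projection matrix V by the Gaussian predictive information between consecutive length-T windows of the projected series; the window length T, the covariance estimator, and all variable names are assumptions.

import numpy as np

def spatiotemporal_cov(X, T):
    # Block-Toeplitz covariance of T consecutive steps of X (time x features),
    # assembled from the lagged cross-covariances C_k = E[x_t x_{t+k}^T].
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    C = [Xc[:n - k].T @ Xc[k:] / (n - k) for k in range(T)]
    S = np.empty((T * d, T * d))
    for i in range(T):
        for j in range(T):
            blk = C[j - i] if j >= i else C[i - j].T
            S[i * d:(i + 1) * d, j * d:(j + 1) * d] = blk
    return S

def gaussian_predictive_information(X, V, T):
    # Predictive information between consecutive length-T windows of the
    # projection X @ V, from log-determinants under a stationary Gaussian
    # assumption: PI = log|Sigma_T| - 0.5 * log|Sigma_2T|.
    Y = X @ V
    S_T = spatiotemporal_cov(Y, T)
    S_2T = spatiotemporal_cov(Y, 2 * T)
    return np.linalg.slogdet(S_T)[1] - 0.5 * np.linalg.slogdet(S_2T)[1]

In the method itself the projection would then be optimized against this objective; the sketch only evaluates a given V.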

Cited by 2 publications (5 citation statements)
References: 35 publications
“…The general case remains unsolved, and is obviously even harder than the above-mentioned 1-vector autoencoding problem. The recent works [6, 7] review the state of the art and present Contrastive Predictive Coding and Dynamical Components Analysis, powerful new distillation techniques for time series, following the long tradition of setting f = g even though this is provably not optimal for the Gaussian case, as shown in [8].…”
Section: Random What Is (mentioning)
confidence: 99%
“…While fine-grained binning has no effect on the entropy H(Y) and negligible effect on I(W, Y), it dramatically reduces the entropy of our data. Whereas H(W) = ∞ since W is continuous, the entropy of the binned variable is log N, which is finite, approaching infinity only in the limit of infinitely many infinitesimal bins. Taken together, these scalings of I and H imply that the leftmost part of the Pareto frontier I*(H*), defined by equation (1) and illustrated in Figure 1, asymptotes to a horizontal line of height I* = I(X, Y) as H* → ∞.…”
Section: The Pareto Frontier Is Spanned by Contiguous Binnings (mentioning)
confidence: 99%
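The scaling described in this statement is easy to check numerically. Below is a small toy check, a sketch not taken from the cited paper: with N equal-probability (contiguous) bins the entropy of the binned variable is about log N, while a plug-in estimate of its mutual information with a correlated binary Y quickly saturates. The Gaussian toy data, the binary Y, and the sample size are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
w = rng.normal(size=n)                          # continuous W
y = (w + rng.normal(size=n) > 0).astype(int)    # binary Y correlated with W

def entropy(labels):
    # Plug-in (empirical) entropy, in bits, of an integer-labelled sample.
    p = np.bincount(labels) / labels.size
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

for N in (4, 16, 64, 256):
    # contiguous, equal-probability binning of W
    edges = np.quantile(w, np.linspace(0, 1, N + 1)[1:-1])
    wb = np.digitize(w, edges)
    mi = entropy(wb) + entropy(y) - entropy(wb * 2 + y)   # plug-in I(binned W, Y)
    print(f"N={N:4d}  H(binned W)={entropy(wb):5.2f}  log2 N={np.log2(N):5.2f}  I~{mi:.3f}")

H(binned W) tracks log2 N, while the mutual information estimate levels off, which is the behaviour the quoted passage relies on.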
“…Alternatively, since the goal of GPFA is to maximize the predictability of its DLVs, we adopt the predictive information instead of the evaluation index [33, 37]. Given a temporal series X = {x_t}_{t=1}^{n}, x_t ∈ ℝ^m, with its probability density function (pdf) denoted as P(X), define…”
Section: Dimensionality Reduction of GPFA (mentioning)
confidence: 99%
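The quoted passage is cut off before the definition it introduces. For reference, the predictive information over length-T windows, as defined in Clark et al. [33], is the mutual information between consecutive past and future windows; the display below is a standard statement of that definition and is assumed, not quoted, to be the quantity the passage goes on to define.

\mathrm{PI}_{\mathrm{pred}}^{T}(X) = I\bigl(X_{\mathrm{past}};\, X_{\mathrm{future}}\bigr)
  = H\bigl(X_{\mathrm{future}}\bigr) - H\bigl(X_{\mathrm{future}} \mid X_{\mathrm{past}}\bigr),
\qquad
X_{\mathrm{past}} = \{x_{t-T+1},\dots,x_{t}\}, \quad X_{\mathrm{future}} = \{x_{t+1},\dots,x_{t+T}\}.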
“…In GPFA, the weight matrix W is obtained when the predictive information in the latent scores s_t = Wᵀx_t is maximized, which coincides with the definition of PI_pred^T(S). To obtain the explicit expression of PI_pred^T(S), inspired by the notation in Clark et al. [33], a spatiotemporal covariance matrix Σ_T(X) is defined to encode all second-order statistics of X across T time steps.…”
Section: Dimensionality Reduction of GPFA (mentioning)
confidence: 99%
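As a concrete reading of this construction, here is a minimal sketch, not the cited paper's code, of how Σ_T(X) is turned into the covariance of the latent scores: since s_t = Wᵀx_t, each d×d block of Σ_T(X) is projected by W, which amounts to a block-diagonal (Kronecker) projection of the full matrix. The function name and argument layout are assumptions.

import numpy as np

def project_spatiotemporal_cov(Sigma_T_X, W, T):
    # Spatiotemporal covariance of the latent scores s_t = W.T @ x_t:
    # apply the block-diagonal projection I_T (x) W to the (T*d x T*d)
    # spatiotemporal covariance of X, projecting every d x d block by W.
    P = np.kron(np.eye(T), W)          # shape (T*d, T*k) for W of shape (d, k)
    return P.T @ Sigma_T_X @ P

The Gaussian predictive information of the scores then follows from the log-determinants of the projected T-step and 2T-step covariances, as in the sketch after the abstract above.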