2021
DOI: 10.1137/19m129526x

Compression, Inversion, and Approximate PCA of Dense Kernel Matrices at Near-Linear Computational Complexity

Abstract: Dense kernel matrices Θ ∈ R^{N×N} obtained from point evaluations of a covariance function G at locations {x_i}_{1≤i≤N} arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green's functions of elliptic boundary value problems and approximately equally spaced sampling points, we show how to identify a subset S ⊂ {1, …, N} × {1, …, N}, with #S = O(N log(N) log^d(N/ε)), such that the zero fill-in block-incomplete Cholesky decomposition of Θ_{i,j} 1_{(i,j)∈S} is an ε-approximation of Θ.
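To make the construction in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it uses an exponential kernel on equally spaced 1-D points, a simple near-diagonal band as a stand-in for the paper's hierarchical, maximin-ordering-based construction of S, and an added pivot guard; all of these choices are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the idea in the abstract: restrict a
# dense kernel matrix to a sparsity set S and compute a zero fill-in incomplete
# Cholesky factorization on that pattern. The exponential kernel, the 1-D grid,
# and the banded stand-in for S are illustrative assumptions.
import numpy as np

def incomplete_cholesky(theta, S):
    """Zero fill-in Cholesky: entries outside the pattern S are never filled."""
    n = theta.shape[0]
    L = np.zeros_like(theta)
    for j in range(n):
        for i in range(j, n):
            if (i, j) not in S:
                continue  # zero fill-in: skip entries outside S
            s = theta[i, j] - L[i, :j] @ L[j, :j]
            if i == j:
                # guard: this toy band pattern (unlike the paper's ordering-based
                # construction of S) does not guarantee positive pivots
                L[j, j] = np.sqrt(max(s, 1e-12))
            else:
                L[i, j] = s / L[j, j]
    return L

# Toy data: N roughly equally spaced points in d = 1, exponential covariance
N = 64
x = np.linspace(0.0, 1.0, N)
theta = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.05)

# Illustrative sparsity set: near-diagonal entries only (a crude stand-in for
# the paper's hierarchical construction of S under a maximin ordering)
S = {(i, j) for i in range(N) for j in range(N) if abs(i - j) <= 8}
L = incomplete_cholesky(theta, S)
err = np.linalg.norm(L @ L.T - theta) / np.linalg.norm(theta)
print(f"relative approximation error: {err:.2e}")
```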

Cited by 39 publications (29 citation statements: 1 supporting, 28 mentioning, 0 contrasting)
References 110 publications (183 reference statements)
“…First, an increasing number of data points, O(kJ), is problematic for the standard cubic complexity GP implemented here. In this case, a more sophisticated (non-cubic complexity) emulator or using sparse approximate linear algebra techniques (Schäfer et al., 2021) would be beneficial. Second, trying to emulate a d-dimensional function F − G is difficult if the number of evaluation points is not sufficient.…”
Section: Discussion
confidence: 99%
“…, s_n, and hence the columns of Y, are ordered according to a maximin ordering (Guinness, 2018; Schäfer et al., 2021b), which sequentially selects each location in the ordering to maximize the minimum distance from locations already selected (see Figure 2).…”
Section: Sparse Inverse Cholesky Approximation for Spatial Data
confidence: 99%
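The maximin ordering described in this excerpt is simple to state in code. Below is a hedged sketch under illustrative assumptions: the seed point, the names, and the brute-force O(n²) distance updates are mine, not from Guinness (2018) or Schäfer et al. (2021b).

```python
# Sketch of the maximin ordering quoted above: repeatedly select the location
# whose minimum distance to all previously selected locations is largest.
# Names and the arbitrary seed point are illustrative choices.
import numpy as np

def maximin_ordering(points):
    """Order the rows of `points` (n x d) by the greedy maximin rule."""
    n = points.shape[0]
    order = [0]  # arbitrary seed; variants pick, e.g., the most central point
    # min_dist[i] = distance from point i to its nearest already-selected point
    min_dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n - 1):
        nxt = int(np.argmax(min_dist))  # farthest from everything selected
        order.append(nxt)
        d = np.linalg.norm(points - points[nxt], axis=1)
        min_dist = np.minimum(min_dist, d)
    return np.array(order)

rng = np.random.default_rng(0)
pts = rng.random((200, 2))
print(maximin_ordering(pts)[:10])  # coarse, well-spread locations come first
```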
“…Our model can be viewed as a nonparametric extension of the Vecchia approach, as regularized inference on a sparse Cholesky factor of the precision matrix, or as a series of Bayesian linear regression or spatial prediction problems. We specify prior distributions that are motivated by recent results (Schäfer et al., 2021b,a) on the exponential decay of the entries of the inverse Cholesky factor for Matérn-type covariances under a maximum-minimum-distance ordering of the spatial locations (Guinness, 2018; Schäfer et al., 2021b). Thus, we obtain a highly flexible method that enforces neither stationarity nor parametric covariance structures, but instead regularizes the estimation and accounts for uncertainty via Bayesian priors.…”
Section: Introduction
confidence: 99%
“…For comparison, direct Gaussian conditioning on the information in Lemma 2.1 would incur a higher computational cost of O(n^3 m^3), but would provide the joint distribution over the solution u(t, ·) at all times t ∈ [0, T]. Although we do not pursue it in this paper, in the latter case the grid structure present in t and x could be exploited to mitigate the O(n^3 m^3) cost; for example, a compactly supported covariance model would reduce the cost by a constant factor (Gneiting 2002), or if the preconditions of Schäfer et al. (2021) are satisfied then their approach would reduce the cost to O(nm log(nm) log^{d+1}(nm/ε)) at the expense of introducing an error of O(ε). See also the recent work of de Roos et al. (2021).…”
Section: Remark 2.1 (Bayesian Interpretation)
confidence: 99%
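For a feel of the gap between the two cost estimates quoted in this excerpt, here is a back-of-the-envelope comparison; the chosen n, m, d, and ε are illustrative assumptions, and constant factors are ignored, so only the growth rates are meaningful.

```python
# Illustrative comparison of the two asymptotic costs quoted above; the values
# of n, m, d, and eps are assumptions, and constant factors are ignored.
import math

n, m, d, eps = 100, 1_000, 1, 1e-6
nm = n * m

direct = nm ** 3  # O(n^3 m^3): direct Gaussian conditioning
sparse = nm * math.log(nm) * math.log(nm / eps) ** (d + 1)  # Schäfer et al. (2021)

print(f"direct conditioning  ~ {direct:.1e} operations")
print(f"sparse-Cholesky route ~ {sparse:.1e} operations")
```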