2022 25th International Conference on Information Fusion (FUSION)
DOI: 10.23919/fusion49751.2022.9841257

Efficient Factorisation-based Gaussian Process Approaches for Online Tracking

Abstract: Target tracking often relies on complex models with non-stationary parameters. The Gaussian process (GP) is a model-free method that can achieve accurate performance. However, the inverse of the covariance matrix poses scalability challenges. Since the covariance matrix is typically dense, direct inversion and determinant evaluation suffer from cubic complexity in the data size. This bottleneck limits the GP for long-term or high-speed tracking. We present an efficient factorisation-based GP approach wi…
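The cubic bottleneck the abstract refers to comes from factorising the dense n × n covariance matrix. A minimal NumPy sketch of the exact GP log marginal likelihood (not the paper's method; the kernel choice and noise level are illustrative):

```python
import numpy as np

def rbf_kernel(x, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between all pairs of 1-D inputs."""
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_log_marginal_likelihood(x, y, noise=0.1):
    """Exact GP log marginal likelihood via a dense Cholesky factorisation.

    The Cholesky of the n x n covariance costs O(n^3); this is the
    scalability bottleneck described in the abstract.
    """
    n = len(x)
    K = rbf_kernel(x) + noise ** 2 * np.eye(n)
    L = np.linalg.cholesky(K)                      # O(n^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    log_det = 2.0 * np.sum(np.log(np.diag(L)))     # log|K| from the factor
    return -0.5 * y @ alpha - 0.5 * log_det - 0.5 * n * np.log(2.0 * np.pi)
```

Because both the determinant and the inverse are obtained from one Cholesky factor, this is as cheap as exact inference gets; reducing the cost further requires the kind of factorisation the paper proposes.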

Cited by 5 publications (5 citation statements). References 18 publications.
“…The reduction in computational complexity of DGP can also be justified by looking into (22), where both the computations of determinant and inversion are only based on a much smaller matrix Σ(i). In addition, as compared to (21), the factorized marginal likelihood can potentially be maximized in a decentralized manner like federated learning [57] since it is a summation over local marginal likelihood functions.…”
Section: Hyperparameter Learning (mentioning)
Confidence: 99%
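The factorised likelihood described in this statement can be illustrated with a block-wise sketch. Assuming independent local blocks (a simplification of the cited DGP construction), each term only factorises a small block covariance Σ(i), and the objective is a sum of local log marginal likelihoods that could be maximised separately per node:

```python
import numpy as np

def rbf_kernel(x, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between all pairs of 1-D inputs."""
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def factorised_log_marginal_likelihood(x, y, num_blocks=4, noise=0.1):
    """Factorised approximation: a sum of local log marginal likelihoods.

    Each local term only factorises a small m x m block, so the cost drops
    from O(n^3) to roughly O(num_blocks * (n/num_blocks)^3). Illustrative
    sketch only; the paper's partitioning and correction terms may differ.
    """
    total = 0.0
    for xb, yb in zip(np.array_split(x, num_blocks),
                      np.array_split(y, num_blocks)):
        m = len(xb)
        Sigma = rbf_kernel(xb) + noise ** 2 * np.eye(m)  # small block Sigma_i
        L = np.linalg.cholesky(Sigma)                    # cheap: m x m only
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, yb))
        total += (-0.5 * yb @ alpha
                  - np.sum(np.log(np.diag(L)))           # -0.5 log|Sigma_i|
                  - 0.5 * m * np.log(2.0 * np.pi))
    return total
```

With one block this recovers the exact likelihood; with many blocks each summand depends only on local data, which is what makes a decentralised (federated-style) maximisation plausible.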
“…To cope with the clutter and for learning the GP hyperparameters, a method is designed to assign weights to different measurements based on the marginal likelihood (21). A weighted summation over measurements collected by one sensor is then calculated as the training data for DGP.…”
Section: DGP-based Data Association (mentioning)
Confidence: 99%
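The weighting scheme this statement describes can be sketched as follows, assuming a scalar Gaussian predictive likelihood (the cited likelihood (21) and the clutter model are more involved; the function name is hypothetical):

```python
import numpy as np

def weighted_training_point(measurements, pred_mean, pred_var):
    """Weight candidate measurements by their Gaussian marginal likelihood
    under the current prediction, then form the weighted summation used as
    a single training datum. Illustrative sketch of the idea only.
    """
    z = np.asarray(measurements, dtype=float)
    # Log-likelihood of each candidate under N(pred_mean, pred_var)
    loglik = -0.5 * ((z - pred_mean) ** 2 / pred_var
                     + np.log(2.0 * np.pi * pred_var))
    w = np.exp(loglik - loglik.max())   # stabilised exponentiation
    w /= w.sum()                        # normalised weights
    return float(w @ z)                 # weighted summation over measurements
```

Clutter far from the predicted state receives near-zero weight, so the weighted sum stays close to the plausible measurements.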