2019
DOI: 10.1007/s10514-018-09826-z

Gaussian process decentralized data fusion meets transfer learning in large-scale distributed cooperative perception

Abstract: This paper presents novel Gaussian process decentralized data fusion algorithms exploiting the notion of agent-centric support sets for distributed cooperative perception of large-scale environmental phenomena. To overcome the limitations of scale in existing works, our proposed algorithms allow every mobile sensing agent to choose a different support set and dynamically switch to another during execution for encapsulating its own data into a local summary that, perhaps surprisingly, can still be assimilated wi…

Cited by 16 publications (13 citation statements) · References 28 publications
“…Empirical evaluation on two real-world datasets reveals that the stochastic variant of our VBPIC can significantly outperform existing state-of-the-art GP models, thus demonstrating its robustness to overfitting due to Bayesian model selection while preserving scalability to big data through stochastic optimization. For our future work, we plan to integrate our proposed framework with that of decentralized/distributed data/model fusion [37]–[41] for collective online learning of a massive number of VBSGPR models.…”
Section: Discussion
confidence: 99%
“…Hence, local approximation not only distributes the computations but also captures quick-varying features well. Hybrid strategies have thereafter been presented for taking advantage of both global and local approximations [42]–[44].…”
Section: Introduction
confidence: 99%
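To make the global-versus-local tradeoff in the excerpt above concrete, here is a minimal sketch of a purely local scheme: fit an exact GP per block of data and predict with the nearest block's expert. Everything in it (the RBF kernel, the equal-width partition, and all function names such as rbf and predict) is an illustrative assumption rather than any cited paper's construction. Note how the prediction can jump as a test point crosses a block boundary, which is precisely the discontinuity that motivates the PIC-style hybrids discussed in the next excerpt.

```python
# Minimal sketch of local GP experts: partition data into blocks, fit an
# exact GP per block, and predict with the block whose center is nearest.
# All names and the RBF kernel are illustrative assumptions, not the
# cited papers' exact constructions.
import numpy as np

def rbf(A, B, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel matrix between row-vector inputs A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 4.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)
noise = 0.1 ** 2

# Partition the input domain into blocks (here: equal-width bins).
edges = np.linspace(0.0, 4.0, 5)
masks = [(X[:, 0] >= lo) & (X[:, 0] < hi) for lo, hi in zip(edges, edges[1:])]
blocks = [(X[m], y[m]) for m in masks]
centers = 0.5 * (edges[:-1] + edges[1:])

def predict(x_star):
    """Predict with the single nearest local expert; predictions can be
    discontinuous at block boundaries, motivating global/local hybrids."""
    b = int(np.argmin(np.abs(centers - x_star[0])))
    Xb, yb = blocks[b]
    K = rbf(Xb, Xb) + noise * np.eye(len(yb))
    k = rbf(x_star[None, :], Xb)[0]
    return k @ np.linalg.solve(K, yb)

print(predict(np.array([0.999])), predict(np.array([1.001])))  # may jump
```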
“…The block-dependent PIC, however, may produce discontinuous predictions on block boundaries [43]. Recently, stochastic/distributed variants of FI(T)C and PIC have been developed to further improve scalability [29], [31], [44], [50].…”
Section: Introduction
confidence: 99%
“…This paradigm can potentially allow the richness and expressive power of GP models (Rasmussen and Williams 2006) (Section 2) to be exploited by multiple predictive inference agents for distributed inference of the complex latent behavior underlying all their local data. Such a prospect has inspired the recent development of distributed GP fusion algorithms (Allamraju and Chowdhary 2017; Chen et al 2012; Ouyang and Low 2018): Essentially, the "client" agents encapsulate their own local data into memory-efficient summary statistics based on a common set of fixed/known GP hyperparameter settings and inducing inputs and communicate them to some "server" agent(s) to be fused into globally consistent summary statistics that are sent back to the "client" agents for GP predictive inference. These distributed GP fusion algorithms inherit the advantage of being adjustably lightweight by restricting the number of inducing inputs (and hence the size of the local and global summary statistics) to fit the agents' limited computational and communication capabilities, at the expense of predictive accuracy.…”
Section: Introduction
confidence: 99%
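The "clients summarize, server fuses" pattern described in this excerpt can be sketched compactly in a weight-space view of a DTC/Subset-of-Regressors-style approximation with shared inducing inputs: each client's data collapses to statistics of size O(m²) in the number m of inducing inputs, and the server simply sums them. This is an assumed simplification for illustration (fixed hyperparameters, my own function names such as local_summary), not the exact statistics of the cited fusion algorithms.

```python
# Sketch of client-server GP fusion via summary statistics (DTC/SoR-style
# weight-space view): each client compresses its local data against a
# common set of inducing inputs; the server just sums the statistics.
# Names and the exact parameterization are illustrative assumptions.
import numpy as np

def rbf(A, B, lengthscale=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(1)
Z = np.linspace(0.0, 4.0, 12)[:, None]          # common inducing inputs
L = np.linalg.cholesky(rbf(Z, Z) + 1e-8 * np.eye(len(Z)))
noise = 0.1 ** 2

def features(X):
    # phi(x) = L^{-1} k(Z, x), so that k(x, x') is approximated by
    # phi(x)^T phi(x'); turns the sparse GP into Bayesian linear regression.
    return np.linalg.solve(L, rbf(Z, X)).T

def local_summary(X, y):
    """Client side: O(m^2) summary statistics of the local data."""
    Phi = features(X)
    return Phi.T @ Phi / noise, Phi.T @ y / noise

# Three clients with disjoint local datasets over separate localities.
summaries = []
for lo in (0.0, 1.5, 3.0):
    X = rng.uniform(lo, lo + 1.0, size=(60, 1))
    y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(60)
    summaries.append(local_summary(X, y))

# Server side: fuse by summation, then form the global posterior over the
# weights w, with prior w ~ N(0, I) and model y = phi(x)^T w + eps.
A = np.eye(len(Z)) + sum(s[0] for s in summaries)
b = sum(s[1] for s in summaries)
w_mean = np.linalg.solve(A, b)

x_star = np.array([[2.0]])
print(features(x_star) @ w_mean)                # fused predictive mean
```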
“…However, such algorithms fall short of achieving the truly decentralized GP fusion necessary for scaling up to a massive number of agents grounded in the real world (e.g., traffic sensing, modeling, and prediction by autonomous vehicles cruising in urban road networks (Chen et al 2015; Low et al 2015a; Hoang et al 2014; Min and Wynter 2011; Ouyang et al 2014; Wang and Papageorgiou 2005; Work et al 2010), or distributed inference on a network of IoT devices, surveillance cameras, and mobile devices/robots (Kang and Larkin 2016; Natarajan et al 2014; Hoang et al 2018b; Zhang et al 2016)) due to the following critical issues: (a) an obvious limitation is the single point(s) of failure with the server agent(s), whose computational and communication capabilities must be superior and robust (e.g., against transmission loss); (b) different GP inference agents are likely to gather data of varying behaviors and correlation structure from possibly separate localities of the input domain (e.g., spatiotemporal) and would therefore incur considerable information loss due to summarization based on a common set of fixed/known GP hyperparameter settings and inducing inputs, especially when the inducing inputs are few and far from the data (in the correlation sense); and (c) like distributed GP models, distributed GP fusion algorithms implicitly assume a one-time processing of a fixed set of data and would hence repeat the entire fusion process involving all local data gathered by the agents whenever new batches of streaming data arrive, which is prohibitively expensive. To overcome these limitations, this paper presents a novel Collective Online Learning of GPs (COOL-GP) framework for enabling a massive number of agents to simultaneously perform (a) efficient online updates of their GP models using their local streaming data with varying correlation structures and (b) decentralized fusion of their resulting online GP models with different learned hyperparameter settings and inducing inputs residing in the original input domain.…”
Section: Introduction
confidence: 99%
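As a rough illustration of the online half of this idea, the same kind of summary statistics admits constant-size incremental updates as streaming batches arrive, so no raw data need be revisited at fusion time. The sketch below is my own simplification under fixed hyperparameters and fixed inducing features (the OnlineSummary class is hypothetical); COOL-GP itself additionally learns hyperparameters and inducing inputs online and fuses models with differing settings.

```python
# Sketch of constant-memory online updates of an agent's summary statistics
# as streaming batches arrive; a simplification of the online-fusion idea,
# not the COOL-GP algorithm itself (which also learns hyperparameters and
# inducing inputs online).
import numpy as np

class OnlineSummary:
    def __init__(self, m, noise_var):
        self.A = np.zeros((m, m))   # running Phi^T Phi / sigma^2
        self.b = np.zeros(m)        # running Phi^T y / sigma^2
        self.noise_var = noise_var

    def update(self, Phi, y):
        """Absorb one streaming batch in O(n m^2); raw data can be discarded."""
        self.A += Phi.T @ Phi / self.noise_var
        self.b += Phi.T @ y / self.noise_var

    def posterior_mean(self):
        """Posterior mean over the m inducing-feature weights, prior N(0, I)."""
        return np.linalg.solve(np.eye(len(self.b)) + self.A, self.b)

# Usage: feed batches as they stream in; the summary size never grows, and
# summaries from many agents could be fused later by elementwise summation.
rng = np.random.default_rng(2)
agent = OnlineSummary(m=8, noise_var=0.01)
for _ in range(5):
    Phi = rng.standard_normal((32, 8))       # stand-in for kernel features
    y = Phi @ np.ones(8) + 0.1 * rng.standard_normal(32)
    agent.update(Phi, y)
print(agent.posterior_mean())                # close to the all-ones weights
```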