2019
DOI: 10.1609/aaai.v33i01.33017850
Collective Online Learning of Gaussian Processes in Massive Multi-Agent Systems

Abstract: This paper presents a novel Collective Online Learning of Gaussian Processes (COOL-GP) framework for enabling a massive number of GP inference agents to simultaneously perform (a) efficient online updates of their GP models using their local streaming data with varying correlation structures and (b) decentralized fusion of their resulting online GP models with different learned hyperparameter settings and inducing inputs. To realize this, we exploit the notion of a common encoding structure to encapsulate the …
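The fusion step described in the abstract is easiest to see in sufficient-statistic form. The sketch below is illustrative only: in place of the paper's actual encoding structure, it assumes every agent projects its stream through one shared random feature map and summarizes it with the additive statistics Φᵀ Φ and Φᵀ y, so that fusing agents amounts to summing those statistics. The names `encode`, `Agent`, and `fuse_and_predict`, and the Bayesian linear-regression stand-in for a GP posterior, are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared "encoding structure": one random feature map fixed across all agents.
D, M = 3, 200                      # input dim, number of features
W = rng.normal(size=(D, M))        # shared projection, broadcast once to all agents
b = rng.uniform(0, 2 * np.pi, M)

def encode(X):
    """Map raw inputs into the common feature space shared by every agent."""
    return np.sqrt(2.0 / M) * np.cos(X @ W + b)

class Agent:
    """Maintains a streaming Gaussian posterior via additive sufficient statistics."""
    def __init__(self, noise_var=0.1):
        self.A = np.zeros((M, M))  # accumulates Phi^T Phi
        self.r = np.zeros(M)       # accumulates Phi^T y
        self.noise_var = noise_var

    def update(self, X_batch, y_batch):
        """Online update: fold one mini-batch of the local stream into the statistics."""
        Phi = encode(X_batch)
        self.A += Phi.T @ Phi
        self.r += Phi.T @ y_batch

def fuse_and_predict(agents, X_test, prior_prec=1.0):
    """Decentralized fusion: the statistics are additive across agents."""
    A = sum(a.A / a.noise_var for a in agents) + prior_prec * np.eye(M)
    r = sum(a.r / a.noise_var for a in agents)
    mean_w = np.linalg.solve(A, r)     # posterior mean of the feature weights
    return encode(X_test) @ mean_w

# Toy run: two agents observe disjoint streams of the same latent function.
f = lambda X: np.sin(X).sum(axis=1)
agents = [Agent(), Agent()]
for agent in agents:
    for _ in range(5):                 # five streaming batches each
        X = rng.uniform(-3, 3, size=(20, D))
        agent.update(X, f(X) + 0.1 * rng.normal(size=20))

X_test = rng.uniform(-3, 3, size=(5, D))
print(np.c_[f(X_test), fuse_and_predict(agents, X_test)])
```

Because each agent's summary has fixed size (M × M plus M), the message exchanged during fusion is independent of how much local data the agent has seen, which is what makes a massive number of agents tractable in this style of scheme.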

Cited by 22 publications (13 citation statements)
References 12 publications
“…As future work, we plan to explore techniques to automatically optimize the distribution P_N used by FTS to sample agents by learning the similarity between each agent and the target agent (i.e., the fidelity of each agent). Beyond the RFF approximation used in this work, other GP approximation techniques (such as those based on inducing points [5,6,7,8,18,19,20,21,23,37,38,41,53,57,59,60]) may also be used to derive the parameters to be exchanged between agents, which is worth exploring in future work. Moreover, in our experiments, the hyperparameters of the target agent's GP are learned by maximizing the marginal likelihood; it would be interesting to explore whether the GP hyperparameters can also be shared among the agents, which could facilitate better collaboration.…”
Section: Discussion
confidence: 99%
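For concreteness, here is a minimal sketch of the random Fourier feature (RFF) approximation the quoted statement refers to: frequencies are drawn from the spectral density of an RBF kernel (Bochner's theorem), and the inner product of the resulting features approximates the exact kernel, so agents can exchange fixed-size feature-space parameters instead of raw data. The kernel choice, lengthscale, and variable names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, Z, lengthscale=1.0):
    """Exact RBF kernel: k(x, z) = exp(-||x - z||^2 / (2 * lengthscale^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def rff(X, W, b):
    """Random Fourier features: phi(x)^T phi(z) approximates k(x, z)."""
    M = W.shape[1]
    return np.sqrt(2.0 / M) * np.cos(X @ W + b)

D, M = 2, 2000
lengthscale = 1.0
# For an RBF kernel, the spectral density is Gaussian with scale 1/lengthscale.
W = rng.normal(scale=1.0 / lengthscale, size=(D, M))
b = rng.uniform(0, 2 * np.pi, M)

X = rng.normal(size=(5, D))
K_exact = rbf_kernel(X, X, lengthscale)
Phi = rff(X, W, b)
K_approx = Phi @ Phi.T

print("max abs error:", np.abs(K_exact - K_approx).max())  # shrinks as M grows
```

Under this approximation, each agent's GP reduces to a finite-dimensional Gaussian over the M feature weights, so its posterior mean and covariance form a message whose size is independent of the local data, which is what makes the inter-agent exchange practical.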
“…We will also consider our outsourced setting in the active learning context (Cao et al., 2013; Hoang et al., 2014a; Low et al., 2008; 2009; Ouyang et al., 2014; Zhang et al., 2016). For applications with a huge budget of function evaluations, we would like to couple PO-GP-UCB with the use of distributed/decentralized (Chen et al., 2012; 2013a; Hoang et al., 2016; 2019b; a; Low et al., 2015; Ouyang & Low, 2018) or online/stochastic (Hoang et al., 2015; Low et al., 2014b; Xu et al., 2014; Teng et al., 2020; Yu et al., 2019a; …”
Section: Discussion
confidence: 99%
“…Meta learning and model fusion. Meta learning and model fusion, e.g., [15,23,51], aim to adapt pre-trained models of related tasks to a new task where synchronized training is not possible. [23] applies knowledge transfer from black-box models to new tasks by learning to capture the correspondence between latent representations that represent the embeddings of different models.…”
Section: Related Work
confidence: 99%