2016
DOI: 10.1609/aaai.v30i1.10210
Gaussian Process Planning with Lipschitz Continuous Reward Functions: Towards Unifying Bayesian Optimization, Active Learning, and Beyond

Abstract: This paper presents a novel nonmyopic adaptive Gaussian process planning (GPP) framework endowed with a general class of Lipschitz continuous reward functions that can unify some active learning/sensing and Bayesian optimization criteria and offer practitioners some flexibility to specify their desired choices for defining new tasks/problems. In particular, it utilizes a principled Bayesian sequential decision problem framework for jointly and naturally optimizing the exploration-exploitation trade-off. In gen…
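The abstract's central idea — a single Lipschitz continuous reward defined over the GP posterior that recovers both Bayesian optimization and active learning criteria as special cases — can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: the squared-exponential kernel, the linear reward form, and all parameter values are assumptions, and the paper's nonmyopic lookahead planning is omitted (the sketch below picks points greedily/myopically).

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential kernel matrix between row-vector inputs A and B."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise_var=1e-4):
    """GP posterior mean and standard deviation at test inputs Xs."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf_kernel(Xs, Xs).diagonal() - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

def reward(mu, sigma, a=1.0, b=2.0):
    """Linear (hence Lipschitz continuous) reward over the GP posterior.
    (a, b) = (1, kappa) gives a UCB-style BO criterion;
    (a, b) = (0, 1) gives variance-based active learning.
    The weights (a, b) are illustrative, not from the paper."""
    return a * mu + b * sigma

# Toy 1-D example: three observations of sin(x).
X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X).ravel()
Xs = np.linspace(0, 2, 50)[:, None]
mu, sigma = gp_posterior(X, y, Xs)
x_bo = Xs[np.argmax(reward(mu, sigma, a=1.0, b=2.0))]  # BO-style pick
x_al = Xs[np.argmax(reward(mu, sigma, a=0.0, b=1.0))]  # active-learning pick
```

Because the reward is Lipschitz continuous in the posterior mean and standard deviation, a practitioner can slide between exploitation-heavy BO and pure uncertainty-reduction active learning by adjusting the weights, which is the "flexibility to specify desired choices" the abstract refers to.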

Cited by 20 publications (7 citation statements)
References 23 publications
“…Our proposed acquisition function achieves competitive performance in comparison with existing acquisition functions for BO in optimizing synthetic benchmark functions, an environmental field, and in hyperparameter tuning of a logistic regression model and a CNN. We will consider generalizing our framework to the nonmyopic BO (Kharkovskii, Ling, and Low 2020; Ling, Low, and Jaillet 2016), batch BO (Daxberger and Low 2017), high-dimensional BO (Hoang, Hoang, and Low 2018), and multi-fidelity BO (Zhang, Dai, and Low 2019) settings.…”
Section: Discussion
confidence: 99%
“…Empirical evaluation on both synthetic and real-world experiments shows that our DEC-HBO algorithm performs competitively with the state-of-the-art centralized BO and HBO algorithms while providing a significant computational advantage for high-dimensional optimization problems. For future work, we plan to generalize DEC-HBO to batch mode (Daxberger and Low 2017) and the nonmyopic context by appealing to existing literature on nonmyopic BO (Ling, Low, and Jaillet 2016) and active learning (Cao, Low, and Dolan 2013; Hoang et al. 2014a; 2014b; Low, Dolan, and Khosla 2008; 2009), as well as to be performed by a multi-robot team to find hotspots in environmental sensing/monitoring by seeking inspiration from existing literature on multi-robot active sensing/learning (Chen et al. 2012; Low et al. 2012; Ouyang et al. 2014). For applications with a huge budget of function evaluations, we would like to couple DEC-HBO with the use of parallel/distributed (Hoang, Hoang, and Low 2016; Low et al. 2015) and online/stochastic (Hoang, Hoang, and Low 2015; Xu et al. 2014) sparse GP models to represent the belief of f efficiently.…”
Section: Discussion
confidence: 99%
“…Empirical evaluation on three real-world datasets shows that our approximation algorithm m-Greedy outperforms existing algorithms for active learning of MOGP and single-output GP models, especially when measurements of the target phenomenon are noisier than those of the auxiliary types. For our future work, we plan to extend our approach by generalizing non-myopic active learning (Cao, Low, and Dolan 2013; Hoang et al. 2014; Ling, Low, and Jaillet 2016; Low, Dolan, and Khosla 2009) of single-output GPs to that of MOGPs and improving its scalability to big data through parallelization (Low et al. 2015), online learning (Xu et al. 2014), and stochastic variational inference (Hoang, Hoang, and Low 2015).…”
Section: Discussion
confidence: 99%