Meta-learning Priors for Efficient Online Bayesian Regression
2020
DOI: 10.1007/978-3-030-44051-0_19

Cited by 53 publications (72 citation statements). References 16 publications.
“…O'Connell et al [55] apply this same method to learn neural network features for nonlinear mechanical systems. Harrison et al [28,27] more generally back-propagate through a Bayesian regression solution to train a Bayesian prior dynamics model with nonlinear features. Nagabandi et al [50] use a maximum-likelihood meta-objective and gradient descent on a multi-step likelihood objective as the base-learner…”
Section: Meta-learning
Mentioning confidence: 99%
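
The mechanism in the statement above, back-propagating through a closed-form Bayesian regression solution to meta-train a prior over nonlinear features, can be illustrated with a short sketch. This is a minimal reconstruction under stated assumptions (a small feature network `phi`, a conjugate Gaussian model, and a squared-error meta-objective on held-out query points), not the implementation from Harrison et al [28,27]:

```python
# Minimal sketch: meta-learning features by back-propagating through a
# conjugate Bayesian linear-regression posterior. The network `phi`, the
# prior/noise values, and the squared-error meta-objective are
# illustrative assumptions, not the cited authors' implementation.
import torch

def bayes_linreg_posterior(Phi, y, prior_prec=1.0, noise_var=0.1):
    # Conjugate posterior over last-layer weights given features Phi (n x d)
    # and targets y (n x 1); every step is differentiable, so gradients
    # flow back into the network that produced Phi.
    d = Phi.shape[1]
    prec = prior_prec * torch.eye(d) + Phi.T @ Phi / noise_var
    cov = torch.linalg.inv(prec)
    mean = cov @ Phi.T @ y / noise_var
    return mean, cov

phi = torch.nn.Sequential(  # feature network producing nonlinear features
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 8)
)
opt = torch.optim.Adam(phi.parameters(), lr=1e-3)

def meta_step(x_ctx, y_ctx, x_qry, y_qry):
    # Condition on context points via the closed-form posterior, then
    # score the posterior-mean predictions on held-out query points.
    mean, _ = bayes_linreg_posterior(phi(x_ctx), y_ctx)
    loss = ((phi(x_qry) @ mean - y_qry) ** 2).mean()
    opt.zero_grad()
    loss.backward()   # back-propagates *through* the regression solution
    opt.step()
    return loss.item()
```

Because the posterior mean is a differentiable function of the features, the meta-gradient trains the feature network so that the closed-form base-learner adapts well from a few context points.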
“…Adaptive Control via Meta-Ridge Regression (ACMRR): This baseline is a slightly modified version of the approach taken by O'Connell et al [55], which applies the work on using ridge regression as a base-learner from Bertinetto et al [11], Lee et al [42] and Harrison et al [28] to learn the parametric features y(q, q̇; θ_y). Specifically, for a given trajectory T_j and these features, the last layer A is specified as the best ridge regression fit to some subset of points in T_j.…”
Section: A Baselines
Mentioning confidence: 99%
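
The ridge-regression base-learner that ACMRR inherits from Bertinetto et al [11] and Lee et al [42] reduces to a single linear solve per trajectory. Below is a minimal sketch with hypothetical shapes, where `features` stands in for the learned y(q, q̇; θ_y) evaluated on the chosen subset of points from T_j:

```python
# Minimal sketch of ridge regression as the base-learner; shapes and the
# regularization value `lam` are hypothetical, not from the cited paper.
import numpy as np

def ridge_last_layer(features, targets, lam=1e-2):
    # features: (n, d) learned features on a subset of trajectory points.
    # targets:  (n, m) regression targets for those points.
    # Returns the last layer A of shape (d, m) minimizing
    #   ||features @ A - targets||^2 + lam * ||A||_F^2.
    d = features.shape[1]
    gram = features.T @ features + lam * np.eye(d)
    return np.linalg.solve(gram, features.T @ targets)
```

The closed form makes the fit itself differentiable, which is what allows the feature parameters θ_y to be meta-trained through it.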
“…Integrating uncertainty has typically been achieved by learning probabilistic dynamics models from collected state-transition data in an episodic setting, where the model is updated in between trajectory-length system executions [7,22,35,28]. A variety of modelling representations have been explored, including Gaussian processes [8], neural network ensembles [7], Bayesian regression, and meta-learning [16]. Alternatively, the authors in [28,26] estimate posterior distributions of physical parameters for black-box simulators, given real-world observations.…”
Section: Related Work
Mentioning confidence: 99%
“…A classical way of meta-learning with a stochastic process would be the Bayesian last layer (BLL) method (Weber et al. 2018; Harrison, Sharma, and Pavone 2018), which fits into the former way of defining the task. Using the fact that the last layers of many neural networks are usually (generalized) linear models, a straightforward way of building a flexible stochastic process is to place a Gaussian prior on the weights of the last layer.…”
Section: Bayesian Last Layer (BLL)
Mentioning confidence: 99%
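
The BLL construction described in this statement, a Gaussian prior on the weights of a linear last layer, yields an exact posterior and predictive distribution. A minimal sketch follows, assuming scalar outputs and pre-computed features from a fixed extractor; the cited papers differ in how the features are trained:

```python
# Minimal BLL sketch (assumptions: scalar outputs, features already
# computed by a fixed extractor; not any cited paper's exact setup).
import numpy as np

def bll_predict(phi_train, y_train, phi_test, prior_var=1.0, noise_var=0.1):
    # Gaussian prior N(0, prior_var * I) on last-layer weights w, with
    # Gaussian observation noise, gives an exact conjugate posterior.
    d = phi_train.shape[1]
    prec = np.eye(d) / prior_var + phi_train.T @ phi_train / noise_var
    cov = np.linalg.inv(prec)                 # posterior covariance of w
    mean_w = cov @ phi_train.T @ y_train / noise_var
    mean = phi_test @ mean_w                  # predictive mean per test point
    var = noise_var + np.einsum('nd,dk,nk->n', phi_test, cov, phi_test)
    return mean, var                          # predictive variance per point
```

Every quantity here is available in closed form, which is why BLL serves as the reference behavior that ANPs are designed to mimic efficiently.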
“…In this paper, we delve further into the structural similarity between ANPs and traditional stochastic processes, namely the Bayesian last layer (BLL) (Calandra et al. 2016; Weber et al. 2018; Harrison, Sharma, and Pavone 2018), as ANPs are designed to mimic BLL's behavior efficiently. It turns out that the self-attention layers in an ANP are not expressive enough, and simple cases where the underlying functions lie in the space spanned by a feature extractor, which can be modeled exactly with BLL, might not be learned efficiently.…”
Section: Introduction
Mentioning confidence: 99%