2022
DOI: 10.48550/arxiv.2202.00187
Preprint

Deep Reference Priors: What is the best way to pretrain a model?

Abstract: What is the best way to exploit extra data-be it unlabeled data from the same task, or labeled data from a related task-to learn a given task? This paper formalizes the question using the theory of reference priors. Reference priors are objective, uninformative Bayesian priors that maximize the mutual information between the task and the weights of the model. Such priors enable the task to maximally affect the Bayesian posterior, e.g., reference priors depend upon the number of samples available for learning t…
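The abstract's core idea — a reference prior is the prior that maximizes the mutual information I(θ; D) between the parameter and the data — can be illustrated on a toy discrete problem. The sketch below is not the paper's method; it is a minimal example assuming a coin with bias θ ∈ {0.1, 0.5, 0.9} and a single observed flip, where the reference prior is found by grid search over the probability simplex.

```python
import numpy as np

# Toy reference prior: find the prior over theta that maximizes the
# mutual information I(theta; D) for one Bernoulli observation.
thetas = np.array([0.1, 0.5, 0.9])

def mutual_information(prior):
    """I(theta; D) = sum_theta pi(theta) * KL(p(D|theta) || p(D))."""
    lik = np.stack([thetas, 1 - thetas], axis=1)  # p(D|theta), shape (3, 2)
    marg = prior @ lik                            # marginal p(D), shape (2,)
    ratio = np.where(lik > 0, np.log(lik / marg), 0.0)
    return float(np.sum(prior[:, None] * lik * ratio))

# Grid search over priors (a, 1-a-b, b) on the 2-simplex.
best_I, best_prior = -1.0, None
grid = np.linspace(0, 1, 101)
for a in grid:
    for b in grid:
        if a + b <= 1:
            p = np.array([a, 1 - a - b, b])
            I = mutual_information(p)
            if I > best_I:
                best_I, best_prior = I, p

# The maximizer concentrates mass on the extreme biases (0.1 and 0.9),
# since theta = 0.5 is the least informative parameter value.
print(best_prior, best_I)
```

The uninformative-looking result (zero weight on θ = 0.5) is the point of reference priors: they favor parameter values that the data can most strongly distinguish, which is the sense in which the task "maximally affects" the posterior.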

Cited by 0 publications
References 17 publications (23 reference statements)