2018
DOI: 10.48550/arXiv.1805.10407
Preprint

Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance

Neal Jean,
Sang Michael Xie,
Stefano Ermon

Abstract: Large amounts of labeled data are typically required to train deep learning models. For many real-world problems, however, acquiring additional data can be expensive or even impossible. We present semi-supervised deep kernel learning (SSDKL), a semi-supervised regression model based on minimizing predictive variance in the posterior regularization framework. SSDKL combines the hierarchical representation learning of neural networks with the probabilistic modeling capabilities of Gaussian processes. By leveragi…

Cited by 2 publications (2 citation statements)
References 16 publications
“…Semi-supervised regression was a research area before neural networks became popular in 2012 [3]. However, only very few research articles have since included neural networks as part of their proposed semi-supervised regression pipelines, mostly combined with other machine learning algorithms [4,5]. While neural networks are not always the optimal choice, their performance ceiling is much higher due to the universal approximation theorem [6,7], the incorporation of feature extraction into the learning problem, and the efficient training through gradient descent and backpropagation.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…Semi-supervised regression was a research area before neural networks became popular in 2012 [13]. However, only very few research articles have since included neural networks as part of their proposed semi-supervised regression pipelines, mostly combined with other machine learning algorithms [11,14]. While neural networks are not always the optimal choice, their performance ceiling is much higher due to the universal approximation theorem [8,10], the incorporation of feature extraction into the learning problem, and the efficient training through gradient descent and backpropagation.…”
Section: Introduction (citation type: mentioning; confidence: 99%)