2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00926

Deep Learning Under Privileged Information Using Heteroscedastic Dropout

Abstract: Unlike machines, humans learn through rapid, abstract model-building. The role of a teacher is not simply to hammer home right or wrong answers, but rather to provide intuitive comments, comparisons, and explanations to a pupil. This is what the Learning Under Privileged Information (LUPI) paradigm endeavors to model by utilizing extra knowledge only available during training. We propose a new LUPI algorithm specifically designed for Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). We…
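
The abstract names the paper's key mechanism, heteroscedastic dropout, without detailing it. The following is a minimal PyTorch sketch of one plausible reading: multiplicative Gaussian dropout whose per-unit noise scale is predicted from the privileged input x_star, which is available only at training time. The module name, the Softplus parameterization, and the mean-1 noise form are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class HeteroscedasticDropout(nn.Module):
    """Multiplicative Gaussian noise whose scale depends on privileged input.

    Hypothetical sketch: z ~ N(1, sigma^2(x_star)), applied to features h.
    """
    def __init__(self, priv_dim, feat_dim):
        super().__init__()
        # Small head mapping privileged information to a per-unit noise scale.
        self.scale = nn.Sequential(
            nn.Linear(priv_dim, feat_dim),
            nn.Softplus(),  # keep the predicted standard deviation positive
        )

    def forward(self, h, x_star=None):
        if self.training and x_star is not None:
            std = self.scale(x_star)                  # (batch, feat_dim)
            return h * (1.0 + std * torch.randn_like(h))
        return h  # at test time privileged info is absent; no noise is applied
```

At test time the module is the identity, so the trained network needs no privileged input for inference, which is the defining constraint of the LUPI setting.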


Cited by 68 publications (55 citation statements)
References 37 publications
“…This guidance led to significantly improved results, which is in line with related work using LUPI for image-related tasks (Gu et al. (2020); Lambert et al. (2018); Chen et al.…”
Section: Results
Citation type: mentioning (confidence: 99%)
“…The way in which this is done in deep learning is to develop separate network streams, one for the main task and one based on the privileged information, and then to incorporate some means of guiding the main model with insights from the privileged-information stream (Gu et al. (2020); Lambert et al. (2018); Chen et al.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
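
The two-stream pattern this excerpt describes can be made concrete. Below is a minimal PyTorch sketch in which a privileged stream guides the main stream through an L2 feature-matching term; the module names and the MSE guidance loss are illustrative assumptions, not the exact mechanism of any of the cited papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamLUPI(nn.Module):
    """Hypothetical two-stream LUPI model: main stream plus privileged stream."""
    def __init__(self, in_dim, priv_dim, hidden, n_classes):
        super().__init__()
        self.main_stream = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.priv_stream = nn.Sequential(nn.Linear(priv_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, x_star=None):
        h = self.main_stream(x)
        logits = self.head(h)
        guidance = None
        if self.training and x_star is not None:
            # Encourage main-stream features to match the privileged stream.
            guidance = F.mse_loss(h, self.priv_stream(x_star).detach())
        return logits, guidance

# Training: loss = cross_entropy(logits, y) + lambda_guide * guidance
# Inference: logits, _ = model(x)  # the privileged stream is unused
```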
“…The hyperbolic tangent is employed as the nonlinear activation function in the experiments. To alleviate overfitting, dropout [26] is applied with a rate of 0.5. The number of layers is set to 7, and the number of epochs to 100. For the deep learning algorithms, the CNN, RNN, and LSTM models have 7 layers.…”
Section: Experiments Results
Citation type: mentioning (confidence: 99%)
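
For concreteness, a minimal PyTorch sketch of the quoted configuration (7 layers, tanh activations, dropout 0.5) might look like the following; the layer widths and class count are assumptions, since the excerpt does not state them, and training would run for the quoted 100 epochs.

```python
import torch.nn as nn

def build_tanh_net(in_dim, hidden, n_classes, n_layers=7, p=0.5):
    """7-layer feed-forward net with tanh activations and dropout 0.5."""
    layers, d = [], in_dim
    for _ in range(n_layers - 1):
        layers += [nn.Linear(d, hidden), nn.Tanh(), nn.Dropout(p)]
        d = hidden
    layers.append(nn.Linear(d, n_classes))  # 7th (output) layer
    return nn.Sequential(*layers)

net = build_tanh_net(in_dim=128, hidden=64, n_classes=5)  # widths are assumed
```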
“…It is followed by an RNN implementation with 2 simple RNN layers, each with 32 RNN cells, followed by 2 time-distributed dense layers for 5-class classification. The dropout [26] rate on the fully connected layers is set to 0.5. A tangent activation function is used at the last layer.…”
Section: Experiments Results
Citation type: mentioning (confidence: 99%)
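
The quoted architecture translates fairly directly into code. Below is a hedged PyTorch sketch: two stacked vanilla (tanh) RNN layers of 32 cells, two dense layers applied at every timestep (PyTorch's nn.Linear acts on the last dimension, matching a time-distributed dense layer), dropout 0.5 on the fully connected part, and tanh at the output. The input dimensionality and the intermediate activation are assumptions.

```python
import torch
import torch.nn as nn

class SimpleRNNClassifier(nn.Module):
    """Hypothetical sketch of the RNN described in the excerpt above."""
    def __init__(self, in_dim=40, hidden=32, n_classes=5):
        super().__init__()
        # Two stacked vanilla (tanh) RNN layers with 32 cells each.
        self.rnn = nn.RNN(in_dim, hidden, num_layers=2, batch_first=True)
        self.drop = nn.Dropout(0.5)              # dropout on the FC layers
        self.fc1 = nn.Linear(hidden, hidden)     # applied at every timestep
        self.fc2 = nn.Linear(hidden, n_classes)  # 5-class output per timestep

    def forward(self, x):                        # x: (batch, time, in_dim)
        h, _ = self.rnn(x)
        h = self.drop(torch.tanh(self.fc1(h)))   # intermediate tanh is assumed
        return torch.tanh(self.fc2(h))           # tanh at the last layer
```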