Factors of Transferability for a Generic ConvNet Representation

2016
DOI: 10.1109/tpami.2015.2500224

Abstract: Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation-learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (the source) and the feed-forward activations of the trained network, at a certain layer, are used as a generic representation of an input image for a task with a relatively smaller training set (the target). Recent studies have shown this form of representation transfer to be suitable for…
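The transfer scenario the abstract describes can be sketched with a toy stand-in: run an image through a (pre-trained) network's forward pass, stop at an intermediate layer, and flatten that layer's activations into a fixed-length descriptor for the target task. This is a minimal numpy sketch with a single random conv filter standing in for a trained network such as AlexNet; the architecture, filter values, and sizes here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    # Single-channel "valid" convolution, kept deliberately simple.
    k = w.shape[0]
    out = np.zeros((x.shape[0] - k + 1, x.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def maxpool2(x):
    # Non-overlapping 2x2 max-pooling.
    H, W = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:H, :W].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

image = rng.standard_normal((8, 8))    # stand-in input image
kernel = rng.standard_normal((3, 3))   # stand-in for a *trained* filter

# Forward pass up to a chosen layer: conv -> ReLU -> pool.
activation = maxpool2(np.maximum(conv2d(image, kernel), 0.0))

# The flattened activation is the "generic representation" handed to
# the target task (e.g., fed to a linear classifier or a retrieval index).
feature = activation.ravel()
print(feature.shape)  # -> (9,)
```

In practice one would take the activations from a real pre-trained network (e.g., AlexNet's FC or pooling layers) rather than a random filter; the point is only that the descriptor is read off mid-network, with no target-task fine-tuning required.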

Cited by 280 publications (274 citation statements)
References 46 publications (48 reference statements)
“…Experiments confirm that the top layers may exhibit lower generalization ability than the layers before them. For example, for AlexNet pre-trained on ImageNet, it is shown that FC6, FC7, and FC8 are in descending order of retrieval accuracy [130]. It is also shown in [10], [134] that the pool5 feature of AlexNet and VGGNet is even superior to FC6 when proper encoding techniques are employed.…”
Section: Retrieval Using Pre-trained CNN Models
confidence: 96%
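The layer comparison above is typically measured by nearest-neighbor retrieval: descriptors from each candidate layer are L2-normalized and gallery images are ranked by cosine similarity to a query. The sketch below shows only that mechanics, using synthetic features whose dimensionalities loosely mimic AlexNet's fc6 (4096-d) and pool5 (9216-d); the accuracy ordering reported in [130], [10], [134] comes from real pre-trained networks, not from anything this toy can reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine_rank(query, gallery):
    # L2-normalize, then rank gallery rows by cosine similarity to the query.
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(g @ q))

# Synthetic stand-ins for per-layer descriptors of a 5-image gallery.
fc6 = rng.standard_normal((5, 4096))     # fc6-sized features
pool5 = rng.standard_normal((5, 9216))   # pool5-sized features

# Query with a noisy copy of gallery item 2; it should rank first
# under either layer's descriptors.
for name, feats in [("fc6", fc6), ("pool5", pool5)]:
    query = feats[2] + 0.1 * rng.standard_normal(feats.shape[1])
    print(name, cosine_rank(query, feats)[0])  # -> fc6 2 / pool5 2
```

With real features, the same harness is what lets one report per-layer retrieval accuracy and observe orderings such as FC6 > FC7 > FC8, or pool5 overtaking FC6 once an encoding step is added.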
“…A hybrid dataset combining the Places-205 and ImageNet datasets has also been used for pre-training [129]. The resulting HybridNet is evaluated in [125], [126], [130], [131] for instance retrieval.…”
Section: Retrieval Using Pre-trained CNN Models
confidence: 99%