2018
DOI: 10.48550/arxiv.1802.02568
Preprint

VISER: Visual Self-Regularization

Abstract: In this work, we propose the use of a large set of unlabeled images as a source of regularization data for learning robust visual representations. Given a visual model trained on a labeled dataset in a supervised fashion, we augment our training samples by incorporating a large number of unlabeled images and train a semi-supervised model. We demonstrate that our proposed learning approach leverages an abundance of unlabeled images and boosts visual recognition performance, which alleviates the need to rely on larg…
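The abstract describes augmenting supervised training with unlabeled images that act as a regularizer. Below is a minimal sketch of one common way such a scheme can be wired up, assuming a confidence-thresholded pseudo-labeling formulation; the exact VISER objective is defined in the paper, and every name here (`semi_supervised_step`, `reg_weight`, `threshold`) is illustrative rather than taken from the authors' code.

```python
# Sketch: one training step that mixes a supervised loss with a
# regularization term computed from unlabeled images. This assumes a
# pseudo-labeling formulation; see the paper for the actual VISER loss.
import torch
import torch.nn.functional as F

def semi_supervised_step(model, labeled_batch, unlabeled_images,
                         optimizer, reg_weight=0.5, threshold=0.9):
    images, targets = labeled_batch

    # Standard supervised cross-entropy on the labeled data.
    sup_loss = F.cross_entropy(model(images), targets)

    # Regularization from unlabeled images: keep only confident
    # predictions and reuse them as pseudo-labels.
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_images), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf > threshold  # discard low-confidence samples

    if mask.any():
        reg_loss = F.cross_entropy(model(unlabeled_images[mask]), pseudo[mask])
    else:
        reg_loss = torch.zeros((), device=images.device)

    loss = sup_loss + reg_weight * reg_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```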

Cited by 1 publication (4 citation statements, all published in 2019)
References 28 publications
“…Our Semantic Net is built and trained based on [24]. We pre-train our Semantic Net on the MS-COCO object categories [34] with a weakly supervised object localization setup similar to [24]. We use the penultimate layer of the fully convolutional neural network of [24] to encode visual semantics in the spatial image space.…”
Section: Network Architecture
Confidence: 99%
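The citing work takes the penultimate layer of a fully convolutional network as a spatial encoding of visual semantics. A minimal sketch of that pattern is below; the ResNet-50 backbone and torchvision weights are assumptions for illustration, since the citing work builds on the specific network of its reference [24].

```python
# Sketch: extracting penultimate-layer feature maps from a fully
# convolutional network so spatial structure is preserved. Backbone
# and layer choices here are illustrative assumptions.
import torch
import torchvision

backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
# Drop global pooling and the classifier head; what remains is fully
# convolutional, so the output keeps a spatial grid of features.
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])
feature_extractor.eval()

with torch.no_grad():
    images = torch.randn(1, 3, 224, 224)   # dummy input batch
    semantics = feature_extractor(images)  # shape: (1, 2048, 7, 7)
print(semantics.shape)
```

Each of the 7x7 spatial positions in the output carries a feature vector, which is what makes penultimate-layer activations usable as per-location semantic descriptors rather than a single global embedding.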