2021
DOI: 10.48550/arxiv.2112.03905
Preprint

ViewCLR: Learning Self-supervised Video Representation for Unseen Viewpoints

Abstract: Learning self-supervised video representation predominantly focuses on discriminating instances generated from simple data augmentation schemes. However, the learned representation often fails to generalize over unseen camera viewpoints. To this end, we propose ViewCLR, which learns self-supervised video representation invariant to camera viewpoint changes. We introduce a view-generator that can be considered as a learnable augmentation for any self-supervised pre-text tasks, to generate latent viewpoint represe…
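The truncated abstract describes a view-generator used as a learnable augmentation for self-supervised pretext tasks. Below is a minimal sketch of that general idea under a generic contrastive setup; the module names, feature shapes, and the InfoNCE objective are illustrative assumptions, not the architecture or loss actually used in the ViewCLR paper.

```python
# Sketch only: a learnable "view-generator" producing an augmented latent view
# for a contrastive pretext task. All names, shapes, and the loss form are
# assumptions for illustration, not the paper's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewGenerator(nn.Module):
    """Hypothetical learnable augmentation: maps a clip embedding to a latent viewpoint variant."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z):
        return self.net(z)

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE / NT-Xent loss between two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: a linear layer stands in for any video backbone over pooled clip features.
encoder = nn.Linear(1024, 512)
view_gen = ViewGenerator(512)

clips = torch.randn(8, 1024)                # batch of pooled clip features (assumed shape)
z = encoder(clips)
z_view = view_gen(z)                        # generated latent viewpoint representation
loss = info_nce(z, z_view)
loss.backward()
```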

Cited by 1 publication (2 citation statements)
References 43 publications
“…This pretraining framework is similar to how self-supervision has been benefiting supervised Computer Vision tasks ( [6,8,10,15,31,37,57]): pretrain with self-supervised losses, and then finetune with the downstream task loss. Motivated by them, in this section, we design and benchmark the two-stage pretraining framework, replacing the joint learning framework used in CURL and SAC+AE.…”
Section: Observation On Pretraining Framework
Mentioning confidence: 99%
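The quoted statement contrasts a two-stage framework (self-supervised pretraining followed by downstream fine-tuning) with joint learning. A rough sketch of that two-stage scheme follows; the model, losses, and hyperparameters are placeholders, not the citing paper's actual setup.

```python
# Two-stage scheme described in the quoted statement: stage 1 optimizes only a
# self-supervised loss, stage 2 fine-tunes the same backbone with the downstream
# task loss. Everything below is a schematic placeholder.
import torch
import torch.nn as nn

backbone = nn.Linear(128, 64)
ssl_head = nn.Linear(64, 64)
task_head = nn.Linear(64, 10)

# Stage 1: self-supervised pretraining (placeholder reconstruction-style loss).
opt = torch.optim.Adam(list(backbone.parameters()) + list(ssl_head.parameters()), lr=1e-3)
x = torch.randn(32, 128)
loss_ssl = ((ssl_head(backbone(x)) - backbone(x).detach()) ** 2).mean()
opt.zero_grad(); loss_ssl.backward(); opt.step()

# Stage 2: fine-tune the pretrained backbone with the downstream task loss.
opt = torch.optim.Adam(list(backbone.parameters()) + list(task_head.parameters()), lr=1e-4)
y = torch.randint(0, 10, (32,))
loss_task = nn.functional.cross_entropy(task_head(backbone(x)), y)
opt.zero_grad(); loss_task.backward(); opt.step()
```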
“…Several recent works studied such challenges from various directions, including: (1) Inspired by the great success of self-supervised learning (SSL) with images and videos (e.g., [5,6,8,10,14,15,17,21,31,32,37,40,52,54,55,61,71]), some RL methods [1,42,46,59,63,69,81,88] take advantage of self-supervised learning. This is typically done by applying both self-supervised loss and reinforcement learning loss in one batch.…”
Section: Introduction
Mentioning confidence: 99%
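For contrast with the two-stage sketch above, the joint-learning framework this quote attributes to methods like CURL and SAC+AE applies both losses in the same update. The heads and losses below are schematic stand-ins, not those methods' actual objectives.

```python
# Joint learning: self-supervised loss and RL loss applied in one batch/update.
# Placeholder losses only; not CURL's or SAC+AE's real objectives.
import torch
import torch.nn as nn

backbone = nn.Linear(128, 64)
rl_head, ssl_head = nn.Linear(64, 4), nn.Linear(64, 64)
opt = torch.optim.Adam(
    [*backbone.parameters(), *rl_head.parameters(), *ssl_head.parameters()], lr=1e-3
)

obs = torch.randn(32, 128)
h = backbone(obs)
rl_loss = rl_head(h).pow(2).mean()                    # stand-in for an RL objective
ssl_loss = (ssl_head(h) - h.detach()).pow(2).mean()   # stand-in for a self-supervised objective
(rl_loss + ssl_loss).backward()                       # both losses in the same update
opt.step()
```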