2017
DOI: 10.1109/lra.2016.2634089
Self-Supervised Visual Descriptor Learning for Dense Correspondence

Cited by 152 publications (120 citation statements)
References 24 publications
“…Our work is broadly related to methods that learn pixel embeddings invariant to certain transforms. These approaches leverage tracking to obtain correspondence labels, and learn representations invariant to viewpoint transformation [36,51] or motion [46]. Similar to self-supervised correspondence approaches, these are also limited to training using observations of the same instance, and do not generalize well across instances.…”
Section: Related Work
confidence: 99%
“…Unlike previous work, which trained robotic-supervised correspondence models only for static environments [7], we now would like to train correspondence models in dynamic environments. Other prior work [6] has used dynamic non-rigid reconstruction [35] to address dynamic scenes. The approach we demonstrate here instead is to correspond pixels between two camera views with images that are approximately synchronized in time, similar to the full-image-embedding training in [17], but here for pixel-to-pixel correspondence.…”
Section: Multi-view Time-synchronized Correspondence Training
confidence: 99%
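Pixel-to-pixel correspondence training of the kind this excerpt describes is typically driven by a pixelwise contrastive objective: descriptors at pixels known to correspond across the two views are pulled together, while sampled non-corresponding pixels are pushed apart by a margin. The sketch below illustrates that loss in NumPy; the function name, the margin value, and the use of squared L2 distance are illustrative assumptions, not details taken from the cited papers.

```python
import numpy as np

def pixel_contrastive_loss(desc_a, desc_b, matches, non_matches, margin=0.5):
    """Illustrative pixelwise contrastive loss over dense descriptor maps.

    desc_a, desc_b: (H, W, C) descriptor images from two (time-synchronized) views.
    matches:        (N, 4) int array of pixel pairs (ua, va, ub, vb) that correspond.
    non_matches:    (M, 4) int array of pixel pairs that do NOT correspond.
    """
    # Matched descriptors should be close in descriptor space.
    da = desc_a[matches[:, 1], matches[:, 0]]          # (N, C) descriptors in view a
    db = desc_b[matches[:, 3], matches[:, 2]]          # (N, C) descriptors in view b
    match_loss = np.mean(np.sum((da - db) ** 2, axis=1))

    # Non-matched descriptors should be at least `margin` apart (hinge term).
    na = desc_a[non_matches[:, 1], non_matches[:, 0]]
    nb = desc_b[non_matches[:, 3], non_matches[:, 2]]
    dist = np.linalg.norm(na - nb, axis=1)
    non_match_loss = np.mean(np.maximum(0.0, margin - dist) ** 2)

    return match_loss + non_match_loss
```

In practice the correspondence labels for `matches` come from the supervision signal discussed in the excerpts (tracking, multi-view geometry, or time synchronization), and the loss is minimized over the parameters of the network producing `desc_a` and `desc_b`.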
“…We are, of course, not the first to learn dense representations on visual data. Most prior work on this topic revolves around learning correspondences across views in 2D [6,21] and 3D [31,23,22,3]. Florence et al [12] proposed dense object nets, learning dense descriptors by multi-view reconstruction and applying the descriptors to manipulation tasks.…”
Section: Related Work
confidence: 99%
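Once dense descriptors like those in the excerpts above are learned, finding a correspondence at test time reduces to a nearest-neighbor search in descriptor space: the descriptor at a query pixel in one image is compared against every pixel's descriptor in the other image. A minimal NumPy sketch, with the function name and L2 distance chosen for illustration rather than taken from any cited paper:

```python
import numpy as np

def find_correspondence(query_uv, desc_a, desc_b):
    """Return the pixel in image b whose descriptor is nearest to the
    descriptor at `query_uv` in image a.

    query_uv:       (u, v) pixel coordinates in image a.
    desc_a, desc_b: (H, W, C) dense descriptor maps.
    Returns ((u, v) best match in image b, descriptor distance).
    """
    u, v = query_uv
    q = desc_a[v, u]                          # (C,) query descriptor
    dist = np.linalg.norm(desc_b - q, axis=-1)  # (H, W) distance map via broadcasting
    vy, ux = np.unravel_index(np.argmin(dist), dist.shape)
    return (int(ux), int(vy)), float(dist[vy, ux])
```

A brute-force scan like this is O(H·W) per query; real systems may instead threshold the distance map or use approximate nearest-neighbor structures when many queries are needed.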