2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00773
Objectron: A Large Scale Dataset of Object-Centric Videos in the Wild with Pose Annotations

Cited by 110 publications (74 citation statements)
References 15 publications
“…Our method can also be trained on real data consisting of image pairs of the same object that vary in their viewpoints. To this end, we use the recently proposed Objectron dataset [1] and the Freiburg cars dataset [48]. For Objectron we train on the chair category, as it is present in ShapeNet and contains sufficiently diverse high-quality images, in contrast to the other categories, where images are blurry or there are too few videos.…”
Section: Other Dataset Results
confidence: 99%
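The viewpoint-varying image pairs mentioned in the statement above can be formed by sampling two frames from the same Objectron clip with a minimum temporal gap. The sketch below is a minimal illustration of that idea, not the cited authors' actual sampling code; the `frames_by_video` structure, the gap threshold, and the pairing policy are all assumptions.

```python
import random

def sample_view_pairs(frames_by_video, n_pairs, min_gap=10, seed=0):
    """Sample frame pairs from the same clip, separated by at least
    `min_gap` frames so the two views differ in viewpoint."""
    rng = random.Random(seed)
    videos = [v for v, f in frames_by_video.items() if len(f) > min_gap]
    if not videos:
        raise ValueError("no clip is long enough for the requested gap")
    pairs = []
    while len(pairs) < n_pairs:
        frames = frames_by_video[rng.choice(videos)]
        i = rng.randrange(len(frames) - min_gap)      # first view
        j = rng.randrange(i + min_gap, len(frames))   # later, different view
        pairs.append((frames[i], frames[j]))
    return pairs

# Hypothetical usage: frame paths keyed by clip identifier.
clips = {"chair/batch-1/0": [f"frame_{k:04d}.jpg" for k in range(120)]}
print(sample_view_pairs(clips, n_pairs=3)[0])
```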
“…Alternatively, coarse viewpoint estimation can be obtained without manual annotations using structure from motion algorithms on videos [48,40]. Ground truth pose annotations are challenging to acquire, and recent benchmarks still require human intervention in order to set the coordinate system for each instance and to correct automatic pose errors [1]. 3D-aware representations.…”
Section: Related Work
confidence: 99%
“…Datasets. We performed experiments on two datasets, Objectron [2] and the ImageNetVid and ImageNetVid-Robust datasets described in Sect. 1.1.…”
Section: Methods
confidence: 99%
“…1.1. Objectron [2] contains short video clips of scenes with 9 classes: bikes, books, bottles, cameras, cereal boxes, chairs, cups, laptops, and shoes. We split it into a training and a validation set, with an equal number of videos of each class in each split.…”
Section: Methods
confidence: 99%
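The equal-per-class split described in the statement above can be reproduced with a simple stratified shuffle over video identifiers. The following is a minimal sketch assuming clips are indexed by (category, video_id); the helper name and the example identifiers are hypothetical, and the exact split used in the cited work may differ.

```python
import random

def split_per_class(videos_by_class, n_val_per_class, seed=0):
    """Stratified split: every class contributes the same number of
    validation clips; the remaining clips go to training."""
    rng = random.Random(seed)
    train, val = [], []
    for cls, vids in videos_by_class.items():
        vids = sorted(vids)
        rng.shuffle(vids)
        val.extend((cls, v) for v in vids[:n_val_per_class])
        train.extend((cls, v) for v in vids[n_val_per_class:])
    return train, val

# Hypothetical clip identifiers for two of the nine categories.
videos = {
    "chair": [f"chair/batch-1/{i}" for i in range(40)],
    "shoe":  [f"shoe/batch-3/{i}" for i in range(40)],
}
train_set, val_set = split_per_class(videos, n_val_per_class=10)
print(len(train_set), len(val_set))  # 60 training clips, 20 validation clips
```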
“…Objectron. Recently, Google released the Objectron dataset (Ahmadyan et al., 2021), which is composed of object-centric video clips capturing nine different object categories in indoor and outdoor scenarios. The dataset consists of 14,819 annotated video clips containing over four million annotated images.…”
Section: Indoor Datasets
confidence: 99%
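For readers who want to enumerate those clips, the Objectron repository publishes per-category index files in a public Google Cloud Storage bucket. The sketch below assumes the index URL pattern https://storage.googleapis.com/objectron/v1/index/<category>_annotations_train from the dataset's documentation; treat the exact paths as an assumption and verify them against the repository before relying on them.

```python
import urllib.request

# The nine Objectron categories named in the citation statements above.
CATEGORIES = ["bike", "book", "bottle", "camera", "cereal_box",
              "chair", "cup", "laptop", "shoe"]

# Assumed base URL for the public per-category index files.
INDEX_URL = "https://storage.googleapis.com/objectron/v1/index/{}_annotations_train"

def list_training_videos(category):
    """Return the video IDs listed in a category's training index file."""
    with urllib.request.urlopen(INDEX_URL.format(category)) as resp:
        return resp.read().decode("utf-8").splitlines()

if __name__ == "__main__":
    for cat in CATEGORIES:
        ids = list_training_videos(cat)
        print(f"{cat}: {len(ids)} training videos")
```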