The Toybox Dataset of Egocentric Visual Object Transformations
2018 | Preprint | DOI: 10.48550/arxiv.1806.06034

Cited by 3 publications (4 citation statements); references 15 publications. The citing statements were published in 2021 and 2024, and all 4 are classified as "mentioning" (0 supporting, 0 contrasting).

Citation statements, ordered by relevance:
“…ToyBox. Motivated by recreating the naturalistic patterns of embodied visual experience, Wang et al. [2018] created an egocentric video dataset called Toybox that contains egocentric (i.e., first-person perspective) videos of common household objects and toys being manually manipulated to undergo structured transformations, such as rotation, translation, and zooming.…”
Section: Related Work: Related Benchmarks
Mentioning, confidence: 99%

“…We follow the work [11] to come up with the ordering, training and testing data splits. The Toybox dataset [34] contains videos of toy objects from 12 classes. We used a subset of the full dataset containing 348 toy objects with 10 instances per object, each containing a spatial transformation of that object's pose.…”
Section: Datasets
Mentioning, confidence: 99%
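
The excerpt above describes splits built at the object level. As a rough illustration (not code from the citing paper), the sketch below shows one way such splits might be constructed for a Toybox-like collection; the directory layout videos/<class>/<object_id>/<clip>.mp4 and the helper name build_splits are assumptions, and whole objects are held out for testing so no object instance appears in both splits.

    import random
    from collections import defaultdict
    from pathlib import Path

    def build_splits(root, test_objects_per_class=2, seed=0):
        # Group clips by class and object, then hold out whole objects
        # for testing so no object instance leaks across splits.
        rng = random.Random(seed)
        by_class = defaultdict(lambda: defaultdict(list))
        for clip in Path(root).rglob("*.mp4"):
            # Assumed layout: root/<class>/<object_id>/<clip>.mp4
            cls, obj = clip.parts[-3], clip.parts[-2]
            by_class[cls][obj].append(clip)

        train, test = [], []
        for cls, objects in by_class.items():
            obj_ids = sorted(objects)
            rng.shuffle(obj_ids)
            held_out = set(obj_ids[:test_objects_per_class])
            for obj, clips in objects.items():
                target = test if obj in held_out else train
                target.extend((c, cls) for c in clips)
        return train, test
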
“…We evaluated our method under two typical stream learning protocols (Figure 1), incremental class iid and incremental class instance, across three benchmark datasets, CORe50 [21], Toybox [34] and iLab [3]. The performance of HAMN is comparable to or even better than state-of-the-art (SOTA) methods.…”
Section: Introduction
Mentioning, confidence: 99%
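
For readers unfamiliar with the two protocols named in this excerpt, the following minimal sketch (an illustration under common definitions of these protocols, not the cited papers' code) contrasts them: classes arrive in a fixed sequence, and within a class the "iid" variant shuffles frames freely while the "instance" variant keeps each object's frames contiguous. The function name incremental_class_stream and the tuple format are hypothetical.

    import random

    def incremental_class_stream(samples, mode="iid", seed=0):
        # samples: list of (frame, class_label, object_id) tuples.
        # Yields the stream class by class, ordered per the chosen protocol.
        rng = random.Random(seed)
        for cls in sorted({c for _, c, _ in samples}):
            group = [s for s in samples if s[1] == cls]
            if mode == "iid":
                rng.shuffle(group)          # frames mixed freely within the class
            else:
                group.sort(key=lambda s: s[2])  # "instance": keep each object's clip intact
            yield from group

In both orderings the learner never revisits a finished class, which is what makes the setting class-incremental; the difference is only in how samples are arranged within each class.
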
“…To mimic the patterns of visual experience that infants get when playing with objects, the Toybox dataset (Wang et al. 2018) was curated in our lab. The dataset contains egocentric videos of several toy objects from a small number of categories being manipulated in different ways.…”
Section: Introduction
Mentioning, confidence: 99%