2022
DOI: 10.1101/2022.07.22.501123
Preprint

THINGS-data: A multimodal collection of large-scale datasets for investigating object representations in human brain and behavior

Abstract: Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here we present THINGS-data, a multimodal collection of large-scale datasets comprising functional MRI, magnetoencephalographic recordings, and 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly-annotated objects, allowing for testing …
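The similarity judgments summarized in the abstract come from a triplet odd-one-out task: on each trial, participants see three object concepts and pick the one that fits least, so the two remaining concepts are implicitly judged more similar. As a rough illustration of how such judgments can be tallied into pairwise similarity evidence, here is a minimal Python sketch; the file name triplet_judgments.csv and its column names (concept_a, concept_b, concept_c, odd_one_out) are assumptions for illustration and do not reflect the dataset's actual distribution format.

    # Minimal sketch for tallying triplet odd-one-out judgments into pairwise counts.
    # File name and column layout are hypothetical, chosen only for illustration.
    import pandas as pd
    from collections import Counter

    # Hypothetical CSV: one row per trial, three presented concepts and the chosen odd-one-out.
    trials = pd.read_csv("triplet_judgments.csv")

    pair_counts = Counter()
    for row in trials.itertuples(index=False):
        triplet = {row.concept_a, row.concept_b, row.concept_c}
        # The two concepts NOT chosen as the odd one out are implicitly judged more similar.
        remaining = sorted(triplet - {row.odd_one_out})
        if len(remaining) == 2:
            pair_counts[tuple(remaining)] += 1

    # Frequently co-surviving pairs serve as a crude proxy for perceived similarity.
    for pair, n in pair_counts.most_common(10):
        print(pair, n)

Aggregated over millions of trials, such counts can be normalized into a similarity matrix over the object concepts; the published dataset provides its own processed formats, so this sketch is only meant to convey the structure of the raw judgments.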


Cited by 4 publications (5 citation statements)
References 134 publications (225 reference statements)
“…Recent open, large-scale condition-rich fMRI datasets are now available (e.g. NSD dataset, Allen et al, 2022; THINGS dataset, Hebart et al, 2019, 2022) which can enable the development of cortical topographic metrics beyond these macro- and meso-scale signatures probed for here. Thus, going forward, there is clear work to do towards mapping these computational models more directly to the cortex (c.f.…”
Section: Discussion (mentioning)
confidence: 99%
“…Recent open, large-scale condition-rich fMRI datasets are now available (e.g. NSD dataset (104); THINGS dataset (105, 106)) which can enable the development of cortical topographic metrics beyond these macro- and meso-scale signatures probed for here. Thus, going forward, there is clear work to do towards mapping these computational models more directly to the cortex (49), and assessing how they succeed and fail at capturing the systematic response structure to thousands of natural images across the cortical surface.…”
Section: Discussion (mentioning)
confidence: 99%
“…Increasing the variation within each category serves to dampen the low-level confounds that might otherwise dominate measurements of neural representations. Studying object-specific neural responses with such large stimulus sets has the advantage of more closely mimicking natural vision, as well as allowing more fine-grained analyses of visual features, categories, and semantics (Chang et al 2019, Hebart et al 2023). For example, models of image statistics and object category can be compared with the neural data to assess how much each model accounts for the variance in neural information (Grootswagers et al 2019a, Moerel et al 2022a).…”
Section: Decoding Traps For New Players (mentioning)
confidence: 99%
“…However, while empirical behavioral benchmarks exist for visual tasks involving known object categories (18–20), the field currently lacks a publicly-available set of benchmarks for comparing models to humans as they learn novel object categories, making it difficult to gauge progress in the field.…”
Section: Introduction (mentioning)
confidence: 99%
“…While behavioral benchmarks exist for visual tasks involving known object categories (e.g. Rajalingham et al (2018); Geirhos et al (2021); Hebart et al (2022)), the field currently lacks a publicly-available set of benchmarks for comparing models to humans for learning tasks involving novel objects, making it difficult to gauge progress in the field.…”
Section: Introduction (mentioning)
confidence: 99%