2018
DOI: 10.1101/442194
Preprint

How face perception unfolds over time

Abstract: Within a fraction of a second of viewing a face, we have already determined its gender, age and identity. A full understanding of this remarkable feat will require a characterization of the computational steps it entails, along with the representations extracted at each. Here we used magnetoencephalography to ask which properties of a face are extracted when, and how early in processing these computations are affected by face familiarity. Subjects viewed images of familiar and unfamiliar faces varying orthogona…


Cited by 58 publications (126 citation statements)
References: 68 publications
“…The N170 response (Bentin et al., 1996) is a strongly face-selective univariate response arising around 170 ms after image onset. However, recent decoding studies have shown that many aspects of face information are represented earlier than 170 ms. For example, age, gender and identity are all decodable around 100 ms (Dobs et al., 2018). Even emotion properties like expression (100 ms; Dima et al., 2018) and valence and arousal (150 ms; Grootswagers et al., 2017) have been shown to come online quickly.…”
Section: Discussion
confidence: 99%
“…Second, visual recognition in primates is fast, occurring within 200 ms of image onset, as expected of a largely feedforward process. These fast latencies have been demonstrated for face (Bentin et al., 1996; Dobs et al., 2018), scene (Cichy et al., 2016a; Greene and Hansen, 2018), and object (Carlson et al., 2013a; Isik et al., 2014; Yamins et al., 2014) recognition. In contrast, some visual information cannot be computed from bottom-up visual information alone.…”
Section: Introduction
confidence: 85%
“…The results of our study show that face identity is not solely driven by low-level visual properties, as captured by the HMAX model, or face shape as captured in PCA space. A recent study indicated that gender and age information is also encoded early in the MEG response to faces, especially familiar faces, and overlaps with representations of identity (Dobs et al., 2019).…”
Section: Discussion
confidence: 99%
“…In a socially interconnected world, the ability to perceive and understand complex information about other people is critical. Humans rely heavily on faces to provide social information, and we can make rapid judgments about the age, sex, trustworthiness, and identity of another within a few hundred milliseconds of viewing a face (Dobs et al., 2019; Todorov et al., 2009; Young and Burton, 2018). Recently, it has been shown that information about the social connections and network positions of those we know is represented in a distributed set of brain regions, including inferior parietal, superior temporal, and medial prefrontal cortices (Parkinson et al., 2017; Morelli et al., 2018).…”
Section: Temporal Dynamics of the Neural Representations of Social Re…
confidence: 99%
“…Recently, however, a new wave of face research has emerged aimed at elucidating how the brain extracts information along different face dimensions concurrently (12–15). In contrast to second-order comparisons of face functions (i.e., contrasts across different tasks/face encounters), this approach investigates the different levels of categorisation reflected in the exact same neural response elicited by a given face encounter, often by applying multivariate pattern analysis (MVPA) techniques to high temporal resolution electro-/magneto-encephalographic data (EEG, MEG) (16).…”
Section: Main Text
confidence: 99%
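
The citation statement above refers to time-resolved multivariate pattern analysis (MVPA) of MEG/EEG data. As a rough illustration only, the following Python sketch shows how such an analysis is commonly set up: a classifier is trained and cross-validated separately at each time point, and the latency at which accuracy rises above chance indicates when a face attribute (here, an arbitrary binary label standing in for, e.g., gender) becomes linearly decodable. The array shapes, labels, and simulated data are assumptions for illustration, not the pipeline of Dobs et al. or any of the citing papers.

# Illustrative sketch (assumed data, not the paper's pipeline): time-resolved
# MVPA decoding of a binary face attribute from simulated MEG sensor epochs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Assumed shape: trials x sensors x time points (e.g., spanning -100..600 ms)
n_trials, n_sensors, n_times = 200, 306, 140
X = rng.standard_normal((n_trials, n_sensors, n_times))  # simulated epochs
y = rng.integers(0, 2, size=n_trials)                    # attribute label per trial

# Train and cross-validate a separate classifier at every time point;
# sustained above-chance accuracy at time t means the attribute is
# linearly decodable from the sensor pattern at that latency.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

# Crude onset estimate; published analyses use permutation/cluster statistics.
above_chance = np.flatnonzero(accuracy > 0.55)
print("first above-chance time index:", above_chance[0] if above_chance.size else None)

With real data, the simulated epochs would be replaced by preprocessed MEG/EEG trials, and the per-time-point accuracy curve is what latency claims such as "decodable around 100 ms" are read from.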