2014
DOI: 10.1007/s00221-014-3986-x

The overlap of neural selectivity between faces and words: evidences from the N170 adaptation effect

Abstract: Faces and words both evoke an N170, a strong electrophysiological response that is often used as a marker for the early stages of expert pattern perception. We examine the relationship of neural selectivity between faces and words by using a novel application of cross-category adaptation to the N170. We report a strong asymmetry between N170 adaptation induced by faces and by words. This is the first electrophysiological result showing that neural selectivity to faces encompasses neural selectivity to words an…

Cited by 21 publications (22 citation statements); references 45 publications.

“…A direct tying of these peak frequencies to transient response latencies via the superposition model would predict latencies of 1000 msec for transient ERPs to words and 250 msec for face responses. These predicted latencies are clearly inconsistent with the common 150–170 msec ERP latency for both stimulus categories (Cao et al, 2014; Pegna, Khateb, Michel, & Landis, 2004; Rossion et al, 2003). This finding shows that under a different set of measurement conditions, the temporal aspects of the signal in word and face selective cortex can be substantially different despite previous reports noting similarities between the ERP waveform.…”
Section: Discussion (contrasting)
confidence: 58%
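The latency figures in the statement above follow from a simple period-to-latency reading of the superposition model. As a rough illustration only (the exact derivation is in the citing paper; the 1 Hz and 4 Hz peak-frequency values below are assumptions chosen to reproduce the quoted 1000 ms and 250 ms numbers), a minimal sketch:

def predicted_latency_ms(peak_frequency_hz: float) -> float:
    # Assumed mapping: predicted transient latency ~ one period of the peak temporal frequency.
    return 1000.0 / peak_frequency_hz

# Illustrative peak frequencies (assumptions, not values stated in the quotation):
for label, f_peak_hz in [("words", 1.0), ("faces", 4.0)]:
    print(f"{label}: {f_peak_hz:g} Hz peak -> {predicted_latency_ms(f_peak_hz):.0f} ms predicted latency")

# Both predictions far exceed the ~150-170 ms N170 latency observed for either stimulus category,
# which is the inconsistency the quoted discussion points out.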
“…From our measurements we determined that temporal acuity, peak response frequency and delay, each differ for text and face images. These differences in temporal tuning profiles might be surprising considering: (a) word and face selective ERPs have been described to have a similar time delay (Cao et al, 2014; Pegna, Khateb, Michel, & Landis, 2004; Rossion et al, 2003); (b) word and face selective regions are immediately adjacent on the ventral surface of the cortex (Dehaene et al, 2010; Wandell et al, 2012; Yeatman et al, 2013); (c) word and face selective regions have been hypothesized to share a common neuronal architecture (Dehaene & Cohen, 2007; Dehaene et al, 2010). …”
Section: Discussion (mentioning)
confidence: 99%
“…This effect was replicated in an fMRI study on children [27] and ERP studies on children and adults [28,29]. Moreover, there were some overlaps of neural selectivity between faces and words in early perceptual processing [30]. Additionally, face recognition impairments were more severe following bilateral than unilateral lesions [31] and a left occipital arteriovenous malformation resulted in both pure alexia and prosopagnosia [32].…”
Section: Introduction (mentioning)
confidence: 88%
“…A group of channels over the left occipitotemporal regions (O1, 65, T5; channel 65 in the middle between O1 and T5) and right occipitotemporal regions (O2, 90, T6; channel 90 in the middle between O2 and T6) was analyzed where the N170 components were maximal [30]. In order to reduce the number of levels in the statistical analyses, these peak amplitudes and latencies were then averaged across the three channels chosen for each hemisphere.…”
Section: Electroencephalogram Recording and Data Analysis (mentioning)
confidence: 99%
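To make the per-hemisphere averaging step concrete, the following is a minimal sketch, not the authors' code: the 130-200 ms scoring window, the input layout, and the "E65"/"E90" labels for the intermediate channels are all assumptions made here for illustration.

import numpy as np

LEFT_CLUSTER = ["O1", "E65", "T5"]   # left occipitotemporal channels named in the quotation
RIGHT_CLUSTER = ["O2", "E90", "T6"]  # right occipitotemporal channels named in the quotation

def n170_peak(waveform, times, window=(0.130, 0.200)):
    """Amplitude and latency of the most negative deflection in the scoring window."""
    mask = (times >= window[0]) & (times <= window[1])
    i = int(np.argmin(waveform[mask]))
    return float(waveform[mask][i]), float(times[mask][i])

def hemisphere_mean(evoked, times, channels):
    """Average N170 peak amplitude and latency across the chosen channels."""
    amps, lats = zip(*(n170_peak(evoked[ch], times) for ch in channels))
    return float(np.mean(amps)), float(np.mean(lats))

# Usage (evoked maps channel label -> baseline-corrected ERP in microvolts,
# times is the matching time vector in seconds):
# left_amp, left_lat = hemisphere_mean(evoked, times, LEFT_CLUSTER)
# right_amp, right_lat = hemisphere_mean(evoked, times, RIGHT_CLUSTER)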
“…Moreover, our results are consistent with previous observations in the visual domain. Recently, Cao et al. (2014, 2015) used a cross-category adaptation to examine the relationship between faces and words. They analyzed the electrophysiological responses (N170) to these stimuli separately, as a function of their preceding stimulus, belonging to either the same category or not.…”
Section: Category-specific Adaptation Effects (mentioning)
confidence: 99%
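As a closing illustration of the cross-category adaptation logic described in that last statement, here is a minimal sketch under assumed inputs: per-subject N170 amplitudes keyed by adaptor and test category. The dictionary layout and the toy numbers are made up for illustration and are not taken from Cao et al.

import numpy as np

def adaptation_contrast(n170_amp, test, same_adaptor, cross_adaptor):
    """Per-subject difference between the cross-category and within-category adaptor
    conditions for the N170 evoked by the test category."""
    within = np.asarray(n170_amp[same_adaptor][test])   # e.g. face adaptor -> face test
    cross = np.asarray(n170_amp[cross_adaptor][test])   # e.g. word adaptor -> face test
    return cross - within

# Toy per-subject amplitudes in microvolts (illustration only):
amp = {
    "face": {"face": np.array([-4.1, -3.8]), "word": np.array([-5.2, -4.9])},
    "word": {"face": np.array([-6.0, -5.7]), "word": np.array([-4.5, -4.2])},
}
face_test = adaptation_contrast(amp, test="face", same_adaptor="face", cross_adaptor="word")
word_test = adaptation_contrast(amp, test="word", same_adaptor="word", cross_adaptor="face")
# Comparing the two contrasts is one way to probe the face/word asymmetry described in the abstract.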