2004
DOI: 10.1162/089892904322926692
Processing Objects at Different Levels of Specificity

Abstract: How objects are represented and processed in the brain is a central topic in cognitive neuroscience. Previous studies have shown that knowledge of objects is represented in a feature-based distributed neural system primarily involving occipital and temporal cortical regions. Research with nonhuman primates suggests that these features are structured in a hierarchical system, with posterior neurons in the inferior temporal cortex representing simple features and anterior neurons in the perirhinal cortex …

Cited by 246 publications (220 citation statements)
References 49 publications
“…Some of these features refer to visual features (e.g., has eyes, has tail), whereas others refer to nonvisual properties (e.g., growls, hunts). Recent fMRI studies have demonstrated greater perirhinal cortex activation during the visual processing of living compared with nonliving things, consistent with a role of this structure in complex visual integration processes necessary to discriminate and identify more visually complex living objects (7,8). However, the category effect in the crossmodal Fig.…”
Section: Discussion
confidence: 68%
“…Lerner et al. (6) demonstrated that the sensitivity of ventral occipitotemporal regions to the scrambling of car images increased significantly from posterior sites (V1, V2, V3, V4/V8) to more anteriorly situated sites (lateral occipital sulcus and posterior fusiform gyrus; lateral occipital complex), with scrambled images predicting activity in more posterior sites and intact images predicting activity in the more anterior regions. The hypothesized role of anteromedial structures in complex visual discriminations was confirmed in another series of fMRI experiments and in neuropsychological studies with brain-damaged patients (7,8). In the fMRI studies, tasks that did not require complex feature conjunctions (e.g., distinguishing living from nonliving things, which can be accomplished on the basis of general featural differences, such as curvature) only activated posterior temporal and occipital regions, whereas tasks that required complex conjunctions of features (e.g., the combination of features necessary to distinguish between highly similar objects, such as a lion and a tiger) additionally activated the anteromedial temporal lobe, including the perirhinal cortex.…”
confidence: 71%
“…Previous studies have compared object-naming activation to reading [Bookheimer et al., 1995; Moore and Price, 1999b], color naming [Price et al., 1996], verbal fluency [Etard et al., 2000], semantic categorization [Tyler et al., 2004], or a range of baselines [Murtha et al., 1999]. Other studies have investigated how the activation pattern changes for overt and covert naming [Zelkowicz et al., 1998], for object category [Chao et al., 1999, 2002; Chao and Martin, 2000; Damasio et al., 1996; Grabowski et al., 1998; Kawashima et al., 2001; Martin et al., 1996; Moore and Price, 1999a; Smith et al., 2001], by scanning modality [Votaw et al., 1999], across languages [Vingerhoets et al., 2003], with name agreement [Kan and Thompson-Schill, 2004], with gender of subjects [Grabowski et al., 2003], and during object learning [van Turennout et al., 2000, 2003].…”
Section: Introduction
confidence: 99%