2019
DOI: 10.1167/19.6.6

Skeletal representations of shape in human vision: Evidence for a pruned medial axis model

Abstract: A representation of shape that is low dimensional and stable across minor disruptions is critical for object recognition. Computer vision research suggests that such a representation can be supported by the medial axis—a computational model for extracting a shape's internal skeleton. However, few studies have shown evidence of medial axis processing in humans, and even fewer have examined how the medial axis is extracted in the presence of disruptive contours. Here, we tested whether human skeletal representations…
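As a rough illustration of what a "pruned medial axis" computation can look like, the sketch below extracts a 2D skeleton from a binary shape and then drops low-salience branches by thresholding the inscribed-disk radius at each skeleton point. This is a minimal sketch assuming scikit-image and NumPy; the toy shape, the 0.2 threshold, and the pruning rule are illustrative assumptions, not the pruning procedure used in the paper.

import numpy as np
from skimage.draw import ellipse
from skimage.morphology import medial_axis

# Toy binary shape: an ellipse with a small bump on its border (a "disruptive" contour).
shape_img = np.zeros((200, 300), dtype=bool)
rr, cc = ellipse(100, 150, 60, 120, shape=shape_img.shape)
shape_img[rr, cc] = True
rr, cc = ellipse(45, 150, 12, 12, shape=shape_img.shape)
shape_img[rr, cc] = True

# Medial axis plus the distance transform (radius of the maximal inscribed disk at each point).
skeleton, distance = medial_axis(shape_img, return_distance=True)

# Crude pruning heuristic (an assumption, not the paper's criterion): discard skeleton
# points whose inscribed radius is small relative to the shape's maximum radius; such
# points typically belong to branches induced by minor border perturbations.
radius = distance * skeleton
pruned = skeleton & (radius > 0.2 * radius.max())

print(f"skeleton points: {skeleton.sum()}, after pruning: {pruned.sum()}")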

Cited by 33 publications (22 citation statements)
References 71 publications (147 reference statements)
“…Indeed, incorporating a skeletal model into off-the-shelf convolutional neural networks (CNNs) significantly improves their performance on visual perception tasks (Rezanejad et al., 2019). Similarly, behavioral research with humans has shown that participants extract the skeleton of 2D shapes (Firestone & Scholl, 2015; Kovács, Fehér, & Julesz, 1998; Psotka, 1978), even in the presence of border perturbations and illusory contours (Ayzenberg, Chen, Yousif, & Lourenco, 2019). Other research has shown that skeletal models are predictive of human object recognition (Destler, Singh, & Feldman, 2019; Lowet, Firestone, & Scholl, 2018; Wilder, Feldman, & Singh, 2011), even when controlling for other models of vision.…”
Section: Introduction
confidence: 99%
“…Indeed, shape skeletons may offer a concrete formalization for the oft poorly defined concept of global shape. Skeletal descriptions exhibit many properties ascribed to global shape percepts, such as relative invariance to local contour variations 22,35. Moreover, there exist many methods by which to compare skeletal structures, such as by their hierarchical organization 61 or using distance metrics (as used here) 27, thereby allowing for a quantitative description of shape similarity.…”
Section: Discussion
confidence: 99%
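For a concrete sense of how two skeletal structures could be compared quantitatively, the sketch below computes a simple symmetric mean nearest-neighbor distance between two boolean skeleton images. It is an illustrative point-set metric assuming NumPy and SciPy are available, not necessarily the distance metric used in the cited work.

import numpy as np
from scipy.spatial import cKDTree

def skeleton_distance(skel_a, skel_b):
    # Pixel coordinates of each skeleton (boolean 2D arrays).
    pts_a = np.argwhere(skel_a)
    pts_b = np.argwhere(skel_b)
    # Mean distance from each skeleton point to the nearest point on the other skeleton.
    d_ab = cKDTree(pts_b).query(pts_a)[0].mean()
    d_ba = cKDTree(pts_a).query(pts_b)[0].mean()
    return 0.5 * (d_ab + d_ba)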
“…Accumulating evidence suggests that the skeletal structure of objects is extracted by the primate visual system during shape perception. In particular, behavioral studies have shown that human participants extract the skeletons of different 2D shapes 30–34 and that those skeletal structures remain relatively stable across border disruptions resulting from perturbations or illusory contours 35. Increasingly, studies have shown that skeletal structures may be represented in three dimensions (3D) within an object-centered reference frame.…”
Section: Introduction
confidence: 99%
“…Evidence from human adults demonstrates that participants spontaneously extract the skeletons of 2D shapes, even in the presence of incomplete contours (Ayzenberg, Chen, Yousif, & Lourenco, 2019; Firestone & Scholl, 2014). Moreover, a 3D skeletal model predicted participants' object similarity and category judgments, even when controlling for other models of vision.…”
Section: Introduction
confidence: 99%