2016
DOI: 10.3758/s13414-016-1141-4

Nesting in perception of affordances for stepping and leaping

Abstract: Perception of affordances for a given behavior typically reflects the task-specific action capabilities of the perceiver. However, many experiments have shown a discrepancy between the perceptual and behavioral boundaries for a given behavior. One possibility for such a discrepancy is that the context of many experimental tasks transformed what is typically a dynamic perception-action task into an analytical or reflective judgment. We investigated this hypothesis with respect to perception of maximum stepping …

Cited by 6 publications (11 citation statements) · References 42 publications
“…This finding is interesting because response times were shorter overall for the tree pose than for the other poses. This could suggest that perception of affordances is more accurate when judgements are made without taking too much time to dwell on the task at hand (see Heft, 1993; Wagman, Bai, & Smith, 2016). One could argue that this is due to participants’ underestimation of abilities based on being in an unstable standing position.…”
Section: Discussion
confidence: 99%
“…The abstractness of the responses generated by participants (e.g., “display” and “gift”), especially in the visual condition (e.g., “reliever” and “model”), likely reflects the abstractness of the task. In this sense, the lesson of this study seems to be that perception of affordances of objects, like perception of affordances of surface layout, is less well-constrained in the absence of such goals and task constraints (Doyon et al., 2015; Heft, 1993; Wagman, Bai, et al., 2016). Intention constrains perception, and perception has an intentional character (Turvey, 2019).…”
Section: Discussion
confidence: 92%
“…For example, perception of whether an object can be reached depends on both why and how the reaching task will be performed (Wagman, Cialdella, & Stoffregen, 2019). Moreover, research has shown that perception of affordances for a given behavior more closely reflects the action capabilities for that behavior when that affordance is nested within the context of a superseding goal than when it is not (Doyon et al., 2015; Heft, 1993; Wagman, Bai, et al., 2016). There were no such superseding goals in the present experiment.…”
Section: Discussion
confidence: 99%
“…Since the introduction of the concept, multiple studies have examined whether humans are capable of accurately perceiving affordances. These studies have shown that participants are capable of perceiving the maximum stair-riser height they can climb (e.g., Mark, 1987; Warren, 1984), the maximum distance they can reach (e.g., Carello et al., 1989; Cole et al., 2013; Heft, 1993), the minimum width of the aperture they can pass through (Warren & Whang, 1987), and the maximum distance they can step (Chemero et al., 2003; Cole et al., 2013; Day et al., 2015; Wagman et al., 2016) and jump (Cole et al., 2013; Day et al., 2015; Wagman et al., 2016). Yet sometimes participants underestimated or overestimated their action capabilities in judgment tasks (e.g., Carello et al., 1989; Cole et al., 2013; but see Heft, 1993; Wagman et al., 2016).…”
Section: Introduction
confidence: 99%