Primates interpret conspecific behaviour as goal-directed and expect others to achieve goals by the most efficient means possible. While this teleological stance is prominent in evolutionary and developmental theories of social cognition, little is known about the underlying mechanisms. In predictive models of social cognition, prior knowledge would generate a perceptual prediction of an ideal efficient trajectory, against which the observed action is evaluated, distorting the perception of unexpected inefficient actions. To test this, participants observed an actor reach for an object with a straight or arched trajectory on a touch screen. The actions were made efficient or inefficient by adding or removing an obstructing object. The action disappeared mid-trajectory and participants touched the last seen screen position of the hand. Judgements of inefficient actions were biased towards the efficient prediction (straight trajectories upward to avoid the obstruction, arched trajectories downward towards the target). These corrections increased when the obstruction's presence/absence was explicitly acknowledged, and when the efficient trajectory was explicitly predicted. Supplementary experiments demonstrated that these biases occur during ongoing visual perception and/or immediately after motion offset. The teleological stance is therefore at least partly perceptual, providing an ideal reference trajectory against which actual behaviour is evaluated.
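One way to make the proposed mechanism concrete is as a weighted average of the observed hand position and the predicted efficient trajectory. The Python sketch below is purely illustrative: the bias weight, the coordinates, and the function name are our assumptions, not quantities reported in the study.

import numpy as np

def biased_percept(observed_xy, predicted_xy, bias_weight=0.15):
    """Toy reference-trajectory model: the reported last-seen hand
    position is pulled toward the predicted efficient trajectory by a
    fixed weight. All values are illustrative, not fitted."""
    observed = np.asarray(observed_xy, dtype=float)
    predicted = np.asarray(predicted_xy, dtype=float)
    return (1 - bias_weight) * observed + bias_weight * predicted

# Inefficient straight reach under an obstruction: the efficient
# prediction arcs over the obstacle, so the percept shifts upward.
print(biased_percept((120.0, 50.0), (120.0, 90.0)))  # y rises above 50

# Inefficient arched reach over empty space: the efficient prediction
# runs straight to the target, so the percept shifts downward.
print(biased_percept((120.0, 90.0), (120.0, 50.0)))  # y drops below 90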
Other people's (imagined) visual perspectives are represented perceptually in a similar way to our own, and can drive bottom-up processes in the same way as our own perceptual input (Ward, Ganis & Bach, 2019). Here we test directly whether visual perspective taking is driven by where another person is looking, or whether these perceptual simulations represent their position in space more generally. Across two experiments, we asked participants to identify whether alphanumeric characters, presented at one of eight possible orientations away from upright, were shown normally or in their mirror-inverted form (e.g. "R" vs. "Я"). In some scenes, a person would appear sitting to the left or the right of the participant. We manipulated the gaze direction of the inserted person either between trials (Experiment 1) or between subjects (Experiment 2), such that they either (1) looked towards the to-be-judged item, (2) averted their gaze away from it, or (3) gazed out towards the participant (Exp. 2 only). In the absence of another person, we replicated the well-established mental rotation effect, whereby recognition of items becomes slower the further they are oriented away from upright (e.g. Shepard and Metzler, 1971). Crucially, in both experiments and in all conditions, this response pattern changed when another person was inserted into the scene. People spontaneously took the perspective of the other person and made faster judgements about the presented items in their presence if the characters were oriented towards upright for that person. The gaze direction of this other person did not influence these effects. We propose that visual perspective taking is therefore a general spatial-navigational ability, allowing us to calculate more easily how a scene would (in principle) look from another position in space, and that such calculations reflect the spatial location of another person, but not their gaze.
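The reported pattern can be captured by a toy model in which response time grows linearly with the smallest angular disparity to upright across the available reference frames: the viewer's own, plus that of any other person in the scene. The sketch below is illustrative only; the intercept and slope are assumed values, not estimates from the experiments.

def angular_disparity(item_deg, frame_deg):
    """Smallest rotation (0-180 degrees) between an item's orientation
    and the upright of a given reference frame."""
    d = abs(item_deg - frame_deg) % 360
    return min(d, 360 - d)

def predicted_rt(item_deg, frames_deg, base_ms=500.0, slope_ms=3.0):
    """Toy mental-rotation model: RT rises linearly with the smallest
    disparity to upright across all reference frames. Parameters are
    illustrative assumptions."""
    disparity = min(angular_disparity(item_deg, f) for f in frames_deg)
    return base_ms + slope_ms * disparity

# Alone: a character rotated 90 degrees is judged slowly.
print(predicted_rt(90, frames_deg=[0]))      # 770.0 ms

# With a person seated 90 degrees to the side, the same character is
# upright for them, so the predicted RT drops.
print(predicted_rt(90, frames_deg=[0, 90]))  # 500.0 ms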
Humans interpret others’ behaviour as intentional and expect them to take the most energy-efficient path to achieve their goals. Recent studies show that these expectations of efficient action take the form of a prediction of an ideal “reference” trajectory, against which observed actions are evaluated, distorting their perceptual representation towards this expected path. Here we tested whether these predictions depend upon the implied intentionality of the stimulus. Participants saw videos of an actor reaching either efficiently (straight towards an object or arched over an obstacle) or inefficiently (straight towards an obstacle or arched over empty space). The hand disappeared mid-trajectory and participants reported the last seen position on a touch screen. As in prior research, judgments of inefficient actions were biased toward efficiency expectations (straight trajectories upwards to avoid obstacles, arched trajectories downward towards goals). In two further experimental groups, intentionality cues were removed by replacing the hand with a non-agentive ball (group 2) and by removing the action’s biological motion profile (group 3). Removing these cues substantially reduced the perceptual biases. Our results therefore confirm that the perception of others’ actions is guided by expectations of efficient action, which are triggered by the perception of semantic and motion cues to intentionality.
Humans interpret others’ behaviour as intentional and goal-directed, expecting others to take the most energy-efficient path to achieve their goals. Recent studies have shown that these expectations of efficient action provide a perceptual prediction of an ideal efficient trajectory, against which the observed action is evaluated, resulting in a distorted perceptual representation of unexpected inefficient actions. Here we show that these predictions rely on the inferred intentionality of the stimulus. Participants observed an actor reach for an object with a straight or arched trajectory. The actions were made efficient or inefficient by adding or removing an obstructing object. The action disappeared mid-trajectory and participants reported the last seen position of the hand on a touch screen. Replicating previous research, judgments of inefficient actions were biased toward the efficient prediction (straight trajectories upward to avoid the obstruction, arched trajectories downward towards the target). In two further experiments, we removed intentionality cues by replacing the hand with a non-agentive ball (Exp 2) and by removing the biological motion profile, depicting the motion at a constant speed (Exp 3). Perceptual biases were substantially reduced when these cues were removed. Predictions of efficient action are therefore at least partially perceptually represented and bias perceptual judgments of others’ actions towards these expectations. These predictions emerge from attributions of intentionality to the observed actor, triggered by the perception of agency and of kinematics that follow biological motion profiles.
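On this account, the size of the perceptual bias is gated by cues to intentionality. A hypothetical way to express that gating in Python, with made-up gain values rather than fitted parameters:

def gated_bias_weight(base_weight=0.15, agent_hand=True, biological_motion=True):
    """Toy gating of the efficiency bias by intentionality cues:
    replacing the hand with a ball (Exp 2) or flattening the motion to
    constant speed (Exp 3) shrinks the bias. All gains are assumptions."""
    agency_gain = 1.0 if agent_hand else 0.4         # ball reduces the bias
    motion_gain = 1.0 if biological_motion else 0.5  # constant speed reduces it
    return base_weight * agency_gain * motion_gain

print(gated_bias_weight())                         # full bias (intact action)
print(gated_bias_weight(agent_hand=False))         # reduced: ball stimulus
print(gated_bias_weight(biological_motion=False))  # reduced: constant speed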
Predictive processing accounts of social perception argue that action observation is a predictive process, in which inferences about others’ goals are tested against the perceptual input, inducing a subtle perceptual confirmation bias that distorts observed action kinematics toward the inferred goals. Here we test whether such biases are induced even when goals are not explicitly given but have to be derived from the unfolding action kinematics. In two experiments, participants briefly saw an actor reach ambiguously toward a large object and a small object, with either a whole-hand power grip or an index-finger-and-thumb precision grip. During its course, the hand suddenly disappeared, and participants reported its last seen position on a touch screen. As predicted, judgments were consistently biased toward the apparent action targets, such that power grips were perceived closer to large objects and precision grips closer to small objects, even when the reach kinematics were identical. Strikingly, these biases were independent of participants’ explicit goal judgments. They were of equal size whether action goals had to be explicitly derived in each trial (Experiment 1) or not (Experiment 2), and, across trials and across participants, explicit judgments and perceptual biases were uncorrelated. This provides the first evidence that people make online adjustments of observed actions based on the match between hand grip and object goals, distorting their perceptual representation toward implied goals. These distortions may not reflect high-level goal assumptions, but emerge from relatively low-level processing of kinematic features within the perceptual system.
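A minimal sketch of this idea, assuming the grip type selects the inferred goal and the percept is pulled toward it by a small fixed weight (the object positions, bias weight, and function names are illustrative assumptions, not values from the experiments):

import numpy as np

def infer_goal(grip, targets):
    """Toy goal inference from grip type: a whole-hand power grip
    implies the large object, a precision grip the small one."""
    return targets["large"] if grip == "power" else targets["small"]

def biased_last_seen(observed_xy, grip, targets, bias_weight=0.1):
    """The reported hand position drifts toward the object implied by
    the grip, even though the reach kinematics are identical."""
    goal = np.asarray(infer_goal(grip, targets), dtype=float)
    observed = np.asarray(observed_xy, dtype=float)
    return (1 - bias_weight) * observed + bias_weight * goal

targets = {"large": (200.0, 60.0), "small": (200.0, 120.0)}
midpoint = (150.0, 90.0)  # same last-seen position for both grips
print(biased_last_seen(midpoint, "power", targets))      # pulled toward large object
print(biased_last_seen(midpoint, "precision", targets))  # pulled toward small object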