It has been widely assumed that computing how a scene looks from another perspective (level-2 perspective taking, PT) is an effortful process, as opposed to the automatic capacity of tracking others' visual access to objects (level-1 PT). Recently, however, adults have been found to compute both forms of visual perspective in a quick but context-sensitive way, indicating that the two functions share more features than previously assumed. The developmental literature, by contrast, still shows a dissociation between automatic level-1 and effortful level-2 PT. In the current paper, we report an experiment showing that in a minimally social situation, participating in a number verification task with an adult confederate, 8- to 9.5-year-old children demonstrate online level-2 PT capacities similar to those of adults. Future studies need to address whether online PT shows selectivity in children as well, and to develop paradigms adequate for testing preschoolers' online level-2 PT abilities.
Statement of Contribution
What is already known on this subject?
Adults can access how objects appear to others (level-2 perspective) spontaneously and online.
Online level-1, but not level-2, perspective taking (PT) has been documented in school-aged children.
What does the present study add?
Eight- to 9.5-year-olds performed a number verification task with a confederate who had the same task.
Children showed perspective interference similar to that of adults, indicating spontaneous level-2 PT.
Not only agent-object relations but also object appearances are computed online by 8- to 9.5-year-olds.
The present study investigated 3-year-old children's learning about object functions. We built on children's tendency to commit scale errors with tools to explore whether they would selectively endorse object functions demonstrated by a linguistic in-group model over an out-group model. Participants (n = 37) were presented with different object sets, and a model speaking either their native language or a foreign language demonstrated how to use the presented tools. In the test phase, children received the object sets with two modifications: the original tool was replaced by one that was too big to achieve the goal but was otherwise identical, and another tool was added to the set that looked different but was appropriately scaled for goal attainment. Children in the Native language condition were significantly more likely to commit scale errors (that is, to choose the oversized tool) than children in the Foreign language condition (48% vs. 30%). We propose that these results provide insight into the characteristics of human-specific learning processes by showing that children are more likely to generalize object functions to a category of artifacts following a demonstration from an in-group member.
Task co-representation has been proposed to rely on the motor brain areas' capacity to represent others' action plans similarly to one's own. The joint memory (JM) effect suggests that working in parallel with others influences the depth of incidental encoding: items relevant to the other are encoded better than items relevant to neither actor's task. Using this paradigm, we investigated whether task co-representation can also emerge for non-motor tasks. In Experiment 1, we found enhanced recall of stimuli relevant to the co-actor even when the participants' task required non-motor responses (counting the target words) instead of key-presses, suggesting that the JM effect does not depend on simulating the co-actor's motor responses. In Experiment 2, direct visual access to the co-actor and his actions proved unnecessary to evoke the JM effect in the non-motor task, but not in the motor task: prior knowledge of the co-actor's target category was sufficient to evoke deeper incidental encoding. Overall, these findings indicate that the capacity for task co-representation extends beyond the realm of motor tasks: simulating the other's motor actions is not necessary in this process.
Previous research has shown that human infants and young children are sensitive to the boundaries of certain social groups, supporting the idea that the capacity to represent social categories is a fundamental characteristic of the human cognitive system. However, the function this capacity serves is still debated. We propose that during social categorization the human mind aims to map out social groups defined by a shared set of knowledge. An eye-tracking paradigm was designed to test whether two-year-old children differentially associate conventional versus non-conventional tool use with language use, reflecting an organization of information induced by cues of shared knowledge. Children first watched videos in which a male model performed goal-directed actions either in a conventional or in a non-conventional way. In the test phase, children were presented with photographs of the model and of a similarly aged unfamiliar person while listening to a foreign-language (Experiment 1) or native-language (Experiment 2) text. Upon hearing the foreign utterance, children looked first at the model if he had acted in an unconventional way during familiarization; in contrast, they looked at the other person if the model had performed conventional tool-use actions. No such differences were found for the native language. The results suggest that children take the conventionality of behavior into account when forming representations of a person, and that they generalize to other qualities of the person based on this information.