Both mentalization and empathy allow humans to understand others, through the representation of their mental states or their moods, respectively. The present review aims to explain the characteristics shared between empathy and Theory of Mind. Research in neuroscience, based on naturalistic paradigms, has shown that the abilities to mentalize and to empathize are associated with the activation of different neuro-cognitive circuits. As far as mirror-neuron processes are concerned, some structures (such as the anterior insula, AI, and the anterior cingulate cortex, ACC) play a role both in the representation of one’s own affective states and in the comprehension of the same affective states when experienced by others. As for mentalization, the temporo-parietal junction (TPJ), the temporal poles (TP), the posterior superior temporal sulcus (pSTS), and the medial prefrontal cortex (mPFC) are heavily involved: the latter appears implicated in the attribution of mental states to oneself and to others. Interestingly, the ventral/orbital portion of the PFC (orbitofrontal cortex, OFC) subserves shared affective experience during cognitive mentalizing. This brain region thus represents a point of overlap, from a psycho-biological point of view, where emotional mirroring and affective cognition meet. As for animal models, laboratory rodents can readily be tested for prosocial behavior. Some examples include deliberate actions that give another conspecific the possibility to feed (“giving food”): this willingness can vary across donors, depending on how the recipient is perceived. Other examples include allowing a trapped conspecific to escape (“giving help”). State-of-the-art knowledge on this theme can inform the design of specific clinical interventions based on the reinforcement of empathic and/or mentalizing abilities.
This study provides new longitudinal evidence on two major types of gesture–speech combination that play different roles in children’s early language. We analysed the spontaneous production of 10 Italian children observed monthly from 10–12 to 23–25 months of age. We evaluated the extent to which the developmental trends observed in children’s early gesture–word and word–word productions can predict subsequent verbal abilities. The results indicate that “complementary” and “supplementary” gesture–speech combinations predict subsequent language development in different ways: while the onset of “supplementary” combinations predicts the onset of two-word combinations, the use of “complementary” combinations at 12 and 18 months predicts vocabulary size and the ability to produce multi-word utterances at 2 years of age. Moreover, the results suggest that both “complementary” and “supplementary” crossmodal combinations are good predictive indexes of early verbal skills during the second year of life.
Children born at a very low gestational age, even those without neurosensory damage, are at risk of linguistic disorders. This longitudinal study aimed at analyzing communicative and language abilities in preterm children during their second year of life, through a standardized questionnaire, with particular attention to the communicative and language abilities that predict the first verbal skills. Our results showed that preterm children are slower than full-terms in language acquisition, particularly at earlier stages of development. The differences between the two groups of children were significant only at 16 and 18 months. Preterm children rely on simpler linguistic categories for longer than full-terms, with regard to both lexical composition and syntactic complexity. This different pattern may involve qualitative, rather than merely quantitative, aspects of the developmental processes that characterize language acquisition in preterm and full-term children.
This study explored how working memory resources contributed to reading comprehension using tasks that focused on maintenance of verbal information in the phonological store, the interaction between the central executive and the phonological store (WMI), and the storage of bound semantic content in the episodic buffer (immediate narrative memory). We analysed how performance in these tasks was related to text decoding (reading speed and accuracy), listening and reading comprehension. The participants were 62 monolingual and 36 bilingual children (mean age nine years, SD = 9 months) enrolled in the same Italian primary school. Bilingual children were born to immigrant parents and had a long history of exposure to Italian as a second language. The regression analyses showed that reading accuracy and listening comprehension were associated with reading comprehension for monolingual and bilingual children. Two working memory components—WMI and immediate narrative memory—exhibited indirect effects on reading comprehension through reading accuracy and listening comprehension, respectively. Such effects occurred only for monolingual children. We discuss the implications of such findings for text reading and comprehension in monolinguals and bilinguals.
This paper examines the nature and properties of gestural and vocal deixis in verbal languages (VL) and signed languages (SL). We focus on two classes of pointing gestures which we argue need to be distinguished: (1) prototypical ostensive pointings directing an interlocutor’s visual attention towards extralinguistic objects; (2) pointings to self and to one’s own addressee expressing person-reference distinctions similar to those expressed by spoken pronouns. Drawing on previous work on SL and VL, and on new evidence on the development of deictic gestures and words for demonstrative vs. person reference in hearing children, we show how the two classes of pointings we explore convey indexical relationships of different complexity, and thus need to be distinguished in order to achieve a more appropriate understanding of gestural deixis, and of its relationship with vocal and, more generally, linguistic deixis.