The present study examined whether 6-month-old infants could transfer amodal information (i.e., information independent of sensory modality) from emotional voices to emotional faces. Sequences of successive emotional stimuli, moving from one sensory modality (auditory: voice) to another (visual: face) in a cross-modal transfer, were displayed to 24 infants. Each sequence presented an emotional (angry or happy) or neutral voice alone, followed by the simultaneous presentation of two static emotional faces (angry or happy, congruent or incongruent with the emotional voice). Eye movements in response to the visual stimuli were recorded with an eye-tracker. Results showed no difference in infants' looking times to the happy or angry face after they listened to the neutral or the angry voice. After listening to the happy voice, however, infants looked longer at the incongruent angry face (the mouth area in particular) than at the congruent happy face. These results reveal that a cross-modal transfer (from the auditory to the visual modality) is possible for 6-month-old infants only after the presentation of a happy voice, suggesting that they recognize this emotion amodally.
The present study examines the visual recognition of action simulations by finger gestures (ASFGs) produced by sighted and blind individuals. In ASFGs, fingers simulate legs to represent actions such as jumping, spinning, or climbing. The question is whether the motor experience of one's own body, common to all participants, is sufficient to produce adequate ASFGs, or whether seeing others' gestures is also necessary. Three experiments addressed this question. Experiment 1 examined, in 74 sighted adults, the recognition of 18 types of ASFGs produced by 20 blindfolded sighted adults. Rates of correct recognition were overall very high but varied with the type of ASFG. Experiment 2 studied, in 91 other sighted adults, the recognition of ASFGs produced by 10 early blind and 7 late blind adults. Results again showed a high level of recognition, with a similar ordering of recognizability by type of ASFG. However, ASFGs produced by early blind individuals were more poorly recognized than those produced by late blind individuals. To relate the recognition data to the form of the gestures themselves, two independent judges in Experiment 3 evaluated prototypical and atypical attributes of ASFGs produced by blindfolded sighted, early blind, and late blind individuals. Results revealed more atypical attributes in ASFGs produced by blind individuals: their ASFGs transpose more body movements from a character viewpoint, in less agreement with visual conventions. The practical interest of the study lies in the relevance of including ASFGs as a new exploratory procedure in tactile devices, which are thereby better able to convey action concepts to blind users and readers.
Being exposed to a female voice had a negative impact on preterm infants' tactile sensory learning, regardless of its intensity.
The present study examined the development of emotional cross-modal transfer throughout childhood compared to adulthood, using an experimental design first used with infants. We studied whether verbal children spontaneously look at emotional faces differently depending on the emotional voices previously heard, which would demonstrate a genuine intrinsic understanding of the emotion. Sequences of emotional (happy vs. angry) cross-modal transfer were presented individually to 5-, 8-, and 10-year-old children and to adults. Spontaneous ocular behavior toward the visual stimuli was recorded by eye-tracking. Results suggested that participants spontaneously looked longer at the congruent face. However, this effect reached significance only from age 8 with the happy voice and from age 10 with the angry voice. Thus, the ability to extract amodal emotional information and spontaneously match the congruent information appears to increase with age and to depend on the specific emotion presented.
Tactile books for blind children generally contain tactile illustrations referring to a visual world that can be difficult for them to understand. This study investigates an innovative way to present content to be explored by touch. Following embodied approaches and evidence for the advantages of manipulation in tactile processing, we examined 3D miniatures that children explored using their middle and index fingers to simulate leg movements. This "Action simulations by finger gestures" (ASFG) procedure has a symbolic relevance in the context of blindness. The aim of the present study was to show how the ASFG procedure facilitates the identification of objects by blind and sighted children. Experiment 1 examined the identification of 3D miniatures of action objects (e.g., a slide or a trampoline) by 8 early blind and 15 sighted children, aged 7 to 12, who explored them with the ASFG procedure. Results revealed that objects were identified very well by both groups of children, confirming the hypothesis that the ASFG procedure supports identification regardless of visual status. Experiment 2 (a control) studied the identification of tactile pictures of the same action objects by 8 different early blind and 15 sighted children, aged 7 to 12. Results confirmed that almost all objects received lower recognition scores as tactile pictures than as 3D miniatures in both groups, and, surprisingly, showed higher scores in blind than in sighted children. Taken together, our study provides evidence for the contribution of sensorimotor simulation to the identification of objects by touch and offers innovative solutions for book design for blind people. Moreover, it suggests that the ASFG procedure has strong inclusive potential, being relevant for a wide range of readers regardless of their visual skills.