The present work investigates the relationship between semantic and prosodic (metric) processing in spoken language under two attentional conditions (semantic and metric tasks) by analyzing both behavioral and event-related potential (ERP) data. Participants listened to short sentences ending in semantically and/or metrically congruous or incongruous trisyllabic words. In the metric task, ERP data showed that metrically incongruous words elicited both larger early negative and larger late positive components than metrically congruous words, thereby demonstrating online processing of the metric structure of words. Moreover, in the semantic task, metrically incongruous words also elicited an early negative component with a latency and scalp distribution similar to those of the classical N400 component. This finding highlights the automaticity of metrical structure processing and demonstrates that violations of a word's metric structure may hinder lexical access and word comprehension. This interpretation is supported by the behavioral data showing that participants made more errors on semantically congruous but metrically incongruous words when attending to the semantic aspects of the sentence. Finally, the finding of larger N400 components to semantically incongruous than to congruous words, in both the semantic and metric tasks, suggests that the N400 component reflects automatic aspects of semantic processing.
In this paper, we focused on identifying the perceptual properties of impacted materials in order to provide intuitive control of an impact sound synthesizer. To investigate these properties, impact sounds from everyday objects made of different materials (wood, metal, and glass) were recorded and analyzed. These sounds were resynthesized using an analysis-synthesis technique and tuned to the same chroma. Sound continua were created to simulate progressive transitions between materials. Sounds from these continua were then used in a categorization experiment to determine sound categories representative of each material (called typical sounds). We also examined changes in electrical brain activity (using the event-related potential (ERP) method) associated with the categorization of these typical sounds. In addition, an acoustic analysis was conducted to assess descriptors known to be relevant for both timbre perception and material identification. Both the acoustic and electrophysiological data confirmed the importance of damping and highlighted the relevance of spectral content for material perception. Based on these findings, controls for damping and spectral shaping were tested in synthesis applications. A global control strategy with a three-layer architecture was proposed for the synthesizer, allowing the user to navigate intuitively in a "material space" and to define impact sounds directly from a material label. Finally, a formal perceptual evaluation was conducted to validate the proposed control strategy.
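To make the role of damping concrete, the following Python sketch illustrates a standard additive model of impact sounds as a sum of exponentially damped sinusoids. The function name, the frequency-dependent decay rule, and the "metal-like"/"wood-like" presets are illustrative assumptions for this sketch, not the synthesizer or parameter values described above.

import numpy as np

def impact_sound(freqs, amps, damping, duration=1.0, sr=44100):
    """Sum of exponentially damped sinusoids; `damping` (in 1/s) scales
    with partial frequency so that higher partials decay faster, as in
    common models of impacted objects (assumed model, not the authors')."""
    t = np.arange(int(duration * sr)) / sr
    sound = np.zeros_like(t)
    for f, a in zip(freqs, amps):
        alpha = damping * (1.0 + f / 1000.0)  # frequency-dependent decay
        sound += a * np.exp(-alpha * t) * np.sin(2 * np.pi * f * t)
    return sound / np.max(np.abs(sound))

# Hypothetical material presets: low global damping gives a long,
# ringing "metal-like" sound; high damping gives a short, dull
# "wood-like" sound, consistent with the importance of damping noted above.
metal_like = impact_sound([440, 1130, 2210], [1.0, 0.6, 0.4], damping=3.0)
wood_like  = impact_sound([440, 1130, 2210], [1.0, 0.6, 0.4], damping=40.0)

In such a model, the damping parameter and the set of partial frequencies and amplitudes (the spectral content) are natural candidates for the kind of high-level material controls evaluated in the study.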
This study investigates the human ability to perceive biological movements through the friction sounds produced by drawing and, furthermore, the ability to recover drawn shapes from these friction sounds. In a first experiment, friction sounds synthesized in real time and modulated by the velocity profile of the drawing gesture revealed that subjects associated a biological movement with sounds whose timbre variations were generated by velocity profiles following the 1/3 power law. This finding demonstrates that sounds can adequately convey information about human movements when their acoustic characteristics accord with the kinematic rule governing actual movements. Our ability to recognize drawn shapes was further investigated in two association tasks in which both recorded and synthesized sounds had to be associated with either distinct or similar visual shapes. Results revealed that, for both synthesized and recorded sounds, subjects made correct associations for distinct shapes, although some confusion was observed for similar shapes. Comparisons between recorded and synthesized sounds led to the conclusion that the timbre variations induced by the velocity profile enabled shape recognition. The results are discussed within the ecological and ideomotor frameworks.
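To illustrate the kinematic rule mentioned above, the sketch below computes a tangential velocity profile that follows the 1/3 power law, v = K * R**(1/3), with R the local radius of curvature of the trajectory, and uses it to modulate noise amplitude as a crude friction-like signal. The elliptical trajectory, function names, and the amplitude-only modulation are assumptions made for illustration; they are not the real-time friction-sound synthesis model used in the study.

import numpy as np

def one_third_power_law_velocity(x, y, K=1.0):
    """Tangential velocity following the 1/3 power law, v = K * R**(1/3),
    where R = 1/curvature along the drawn trajectory."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    radius = 1.0 / np.maximum(curvature, 1e-6)  # avoid division by zero
    return K * radius ** (1.0 / 3.0)

# Hypothetical elliptical drawing trajectory, sampled over one period.
s = np.linspace(0, 2 * np.pi, 44100)
x, y = 2.0 * np.cos(s), 1.0 * np.sin(s)
v = one_third_power_law_velocity(x, y)

# Friction-like sound: noise whose amplitude follows the velocity profile,
# a simplified stand-in for the timbre modulation described above.
noise = np.random.randn(len(v))
friction_sound = (v / v.max()) * noise

Because curvature varies along the ellipse, the resulting velocity profile, and hence the sound's amplitude envelope, carries information about the drawn shape, which is the intuition behind the association tasks described above.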