2016
DOI: 10.1037/xhp0000209

Please say what this word is—Vowel-extrinsic normalization in the sensorimotor control of speech.

Abstract: The extent to which the adaptive nature of speech perception influences the acoustic targets underlying speech production is not well understood. For example, listeners can rapidly accommodate to talker-dependent phonetic properties – a process known as vowel-extrinsic normalization – without altering their speech output. Recent evidence, however, shows that reinforcement-based learning in vowel perception alters the processing of speech auditory feedback, impacting sensorimotor control during vowel production…

Cited by 12 publications (12 citation statements)
References 57 publications

Citation statements:
“…The neural locus for where these error signals in perception exert their influence on production differs between models; however, the underlying mechanism (error-driven learning) is common across models. Therefore, error signals generated by a discrepancy between expected and experienced acoustic input affecting the articulatory system seems a plausible explanation for the current results, a conclusion also reached by recent research into perceptual influences on speech production (Bourguignon et al., 2016). Error-driven supervised learning provides a potential mechanism by which dimension-based statistical learning may occur in the empirical perceptual findings thus far, and the current results suggest that error signals generated in the perceptual system influence productions as well, a finding compatible with models of speech production.…”
Section: Discussion (supporting)
confidence: 81%
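
As an illustrative aside (not from the cited paper or any of the citing studies quoted here), the error-driven learning mechanism described in the statement above, an update driven by the discrepancy between expected and experienced acoustic input, can be sketched as a minimal delta-rule simulation of adaptation to formant-shifted auditory feedback. The function name, formant values, and learning rate below are assumptions chosen purely for illustration.

def simulate_adaptation(target_f1=700.0, perturbation=100.0,
                        learning_rate=0.2, n_trials=30):
    """Toy trial-by-trial compensation to an upward F1 feedback shift.

    On each trial the talker produces an F1 value, hears it shifted by
    `perturbation` Hz, and adjusts the next production in the direction
    that reduces the auditory error (expected minus experienced acoustics).
    """
    produced_f1 = target_f1                      # initial production, in Hz
    history = []
    for _ in range(n_trials):
        heard_f1 = produced_f1 + perturbation    # altered auditory feedback
        error = target_f1 - heard_f1             # expected minus experienced
        produced_f1 += learning_rate * error     # delta-rule update
        history.append(produced_f1)
    return history

# Productions drift toward target_f1 - perturbation (about 600 Hz here),
# i.e., the toy model fully opposes the feedback shift at convergence.
print([round(f1, 1) for f1 in simulate_adaptation()])

In this toy model the fixed point corresponds to full compensation for the perturbation; real speech motor adaptation is typically only partial, so the sketch is meant to capture the direction of the error-driven update rather than its empirical magnitude.
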
“…The direction of the shift mirrors the direction of adaptation (Lametti, Rochet-Capellan, Neufeld, Shiller, & Ostry, 2014; Shiller, Sato, Gracco, & Baum, 2009), and the effects appear to be reciprocal. Perceptual shifts evoked by carrier phrases or explicit feedback-based perceptual training on another talker's speech can also impact the degree of sensorimotor adaptation to distorted vocal feedback from one's own voice (Bourguignon, Baum, & Shiller, 2016; Lametti, Krol, Shiller, & Ostry, 2014; Shiller, Lametti, & Ostry, 2013).…”
Section: Introduction (mentioning)
confidence: 99%
“…It is conceivable, therefore, to argue that the SMS may also use global constraints in addition to local constraints to more accurately produce vowels or phonemes, in general. In fact, emerging evidence suggests that the SMS may rely on perceptual knowledge of vowel categories to estimate errors in auditory feedback (Niziolek and Guenther, 2013; Bourguignon et al., 2014, 2016; Lametti et al., 2014a). For example, in a seminal study, Niziolek and Guenther (2013) showed that real-time auditory feedback perturbations (shifts in formant frequencies) of productions that were closer to the edge of the vowel category elicited larger compensatory responses relative to identical perturbations of productions closer to the center of the vowel (far from the edge of the vowel boundary).…”
Section: Introduction (mentioning)
confidence: 99%
“…These studies demonstrated that a brief period of reinforcement-based perceptual training (altering the perceptual representation of the target vowel) prior to a speech production task with altered auditory feedback modified the amount of speech motor adaptation (Lametti et al., 2014; Shiller & Rochon, 2014). Similarly, passive exposure to speech signals with different spectral properties has been shown to rapidly alter auditory-perceptual processing during a speech adaptation task (Bourguignon et al., 2016). These studies therefore provide evidence that plasticity in the auditory system can have a marked effect on the outcome of speech motor learning, even if the perceptual change occurs in the absence of speech movements.…”
Section: Introduction (mentioning)
confidence: 99%
“…Here, in addition to the traditional visual cue to speak, a second experimental condition was added in which the cue to speak involved the auditory presentation of the target syllable (corresponding to a recording of the subject’s own unaltered speech). Based on the above-mentioned studies on auditory plasticity and speech motor adaptation (Lametti et al., 2014; Shiller & Rochon, 2014; Bourguignon et al., 2016), we hypothesized that auditorily-presented speech prompts might provide more precise auditory targets to guide greater sensorimotor adaptation responses. In other words, the external auditory cues would help to enhance and stabilize the internal auditory sensory prediction of the target syllable.…”
Section: Introduction (mentioning)
confidence: 99%