2018
DOI: 10.1016/j.bandl.2018.01.008

Auditory prediction during speaking and listening

Abstract: In the present EEG study, the role of auditory prediction in speech was explored through the comparison of auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction…)

Cited by 22 publications (18 citation statements)
References 66 publications
“…The second option consists of using custom-designed signal processing software with a digital signal processing board (e.g., Houde & Jordan, 2002; Purcell & Munhall, 2006; Villacorta et al., 2007) or a commercially available consumer-grade audio interface (Cai et al., 2008; Tourville et al., 2013). In the latter category—custom signal processing software used with an audio interface—the MATLAB-based application Audapter (a MEX interface built from C++ source code) has gained great popularity due to its capability to implement many different real-time perturbations (e.g., Abur et al., 2018; Ballard et al., 2018; Cai et al., 2014, 2012, 2008, 2010; Daliri & Dittman, 2019; Daliri et al., 2018; Franken, Acheson, et al., 2018; Franken, Eisner, et al., 2018; Klein et al., 2018; Lametti et al., 2018; Reilly & Pettibone, 2017; Sares et al., 2018; Sato & Shiller, 2018; Stepp et al., 2017).…”
mentioning
confidence: 99%
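The formant perturbation these tools apply can be illustrated with a simplified offline sketch of the underlying idea: estimate the vocal-tract filter with LPC, move the lowest-frequency complex pole pair (a proxy for F1), and resynthesize. The Python fragment below is a minimal sketch of this general pole-shifting approach, not Audapter's actual implementation; the function names and all parameter values (LPC order, the 90 Hz floor, the 1.3 shift ratio) are illustrative assumptions.

```python
# Minimal offline sketch of an LPC-based F1 shift (illustrative only;
# real-time tools such as Audapter implement this far more carefully).
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc(frame, order):
    """LPC coefficients A(z) via the autocorrelation method."""
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])   # normal equations R a = r
    return np.concatenate(([1.0], -a))              # A(z) = 1 - sum a_k z^-k

def shift_f1(frame, fs, ratio=1.3, order=12):
    """Scale the lowest-frequency pole pair (a stand-in for F1) by `ratio`."""
    a = lpc(frame * np.hanning(len(frame)), order)
    poles = np.roots(a)
    ang = np.angle(poles)
    # Candidate poles: complex, above ~90 Hz (skips real and DC poles).
    cand = [i for i in range(len(poles))
            if ang[i] > 2 * np.pi * 90 / fs and abs(poles[i].imag) > 1e-6]
    if not cand:
        return frame                        # degenerate frame: pass through
    i = min(cand, key=lambda k: ang[k])     # lowest-frequency pole ~ F1
    j = int(np.argmin(np.abs(poles - np.conj(poles[i]))))  # conjugate partner
    for k in (i, j):
        th = np.angle(poles[k])
        new_th = np.sign(th) * min(abs(th) * ratio, 0.95 * np.pi)
        poles[k] = np.abs(poles[k]) * np.exp(1j * new_th)
    a_new = np.real(np.poly(poles))
    residual = lfilter(a, [1.0], frame)     # inverse filter: excitation estimate
    return lfilter([1.0], a_new, residual)  # resynthesize with shifted F1
```

In a real-time system such as those cited above, the same analysis-shift-resynthesis loop runs frame by frame at millisecond-scale latency, so speakers hear the altered F1 as if it were their own unperturbed voice.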
“…No effect of the sensory modality was found on the magnitude of adaptation, but linguistic prompts (“head” as a spoken or written word) were found to induce more adaptation than non-linguistic prompts (a cross or a tune). Similarly, Sato and Shiller (2018) found no difference in the magnitude of adaptation between visual and auditory modalities. In addition, Caudrelier et al. (2018) investigated whether naming a picture or reading a word aloud would make a difference in adaptation and in transfer.…”
Section: Surface Effects and Speakers' Characteristics
mentioning
confidence: 88%
“…Sengupta and Nasir (2016) then found that by late training, power in specific frequency bands during speech planning and speech production was related to whether speakers were adapting to the auditory perturbation or not. Finally, Sato and Shiller (2018) analyzed event-related potentials (ERPs) during adaptation to an increase of F1. They observed that the amplitude of electro-cortical potentials in certain temporal windows (N1, P2) mirrored adaptation, as larger adaptation magnitude correlated with smaller N1/P2 amplitude.…”
Section: Neural Basis Of Speech Motor Learning
mentioning
confidence: 99%
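The analysis described here reduces to an across-participant correlation between ERP component amplitude and adaptation magnitude. Below is a minimal Python sketch of that computation; the epoch arrays are random placeholders, and the sampling rate, trial counts, and the N1 (80–120 ms) and P2 (150–250 ms) windows are illustrative assumptions rather than the study's actual parameters.

```python
# Hypothetical sketch: correlate N1/P2 amplitude with adaptation magnitude
# across participants. All data and parameters below are placeholders.
import numpy as np
from scipy.stats import pearsonr

FS = 250                                   # assumed EEG sampling rate (Hz)
TIMES = np.arange(-0.1, 0.5, 1 / FS)       # epoch time axis (s), onset at 0

def component_amplitude(epochs, window, sign):
    """Signed peak amplitude of an ERP component in a latency window.

    epochs: (n_trials, n_samples) single-channel data for one participant.
    window: (t_min, t_max) in seconds; sign: -1 for N1, +1 for P2.
    """
    erp = epochs.mean(axis=0)              # average trials -> ERP
    mask = (TIMES >= window[0]) & (TIMES <= window[1])
    return sign * (sign * erp[mask]).max() # peak in the expected direction

rng = np.random.default_rng(0)
n_subj = 20
adaptation = rng.normal(30, 10, n_subj)    # placeholder adaptation magnitudes
n1 = np.empty(n_subj)
p2 = np.empty(n_subj)
for s in range(n_subj):
    epochs = rng.normal(0, 1, (60, TIMES.size))  # placeholder EEG epochs
    n1[s] = component_amplitude(epochs, (0.08, 0.12), sign=-1)  # N1 window
    p2[s] = component_amplitude(epochs, (0.15, 0.25), sign=+1)  # P2 window

for name, comp in (("N1", n1), ("P2", p2)):
    r, p = pearsonr(comp, adaptation)
    print(f"{name} amplitude vs adaptation: r = {r:+.2f}, p = {p:.3f}")
```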
“…Many functional imaging studies have compared vocalisation to AC playback of own voice recordings (e.g. with human participants: Numminen, Curio, Neuloh, Jousmäki & Hari, 1998; Numminen & Curio, 1999; Curio, Neuloh, Numminen, Jousmäki, & Hari, 2000; Ford et al, 2001; Houde, Nagarajan, Sekihara, & Merzenich, 2002; Ford & Mathalon, 2004; Ventura, Nagarajan, & Houde, 2009; Greenlee et al, 2011; Sato & Shiller, 2018; or using animal models: Müller-Preuss & Ploog, 1981; Eliades & Wang, 2017; Eliades & Tsunada, 2018). A consistent finding in such experiments is that parts of temporal cortex which respond to sound have reduced activity in the vocalisation condition compared to the playback condition.…”
Section: Discussion
mentioning
confidence: 99%
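The speak-versus-playback contrast these studies share is commonly quantified as speaking-induced suppression: the reduction of an auditory response measure (such as N1 amplitude) during vocalization relative to playback of the same recording. Below is a minimal sketch, assuming per-participant N1 amplitudes have already been extracted for the two conditions; the numbers are fabricated placeholders, and the percent-reduction formulation is one common choice, not a specific study's definition.

```python
# Hypothetical sketch: speaking-induced suppression (SIS) from N1 amplitudes.
# Inputs are assumed per-participant N1 amplitudes (negative values, in µV)
# for matched speaking and playback conditions; not any study's actual data.
import numpy as np
from scipy.stats import ttest_rel

speak_n1 = np.array([-2.1, -1.8, -2.5, -1.2, -2.0, -1.6, -2.3, -1.9])
listen_n1 = np.array([-3.4, -2.9, -3.8, -2.2, -3.1, -2.7, -3.6, -3.0])

# SIS as percent reduction of N1 magnitude during speaking vs playback.
sis = 100 * (np.abs(listen_n1) - np.abs(speak_n1)) / np.abs(listen_n1)
t, p = ttest_rel(np.abs(listen_n1), np.abs(speak_n1))

print(f"mean SIS = {sis.mean():.1f}% (per-subject range "
      f"{sis.min():.1f}–{sis.max():.1f}%)")
print(f"paired t-test, playback vs speaking |N1|: t = {t:.2f}, p = {p:.4f}")
```

A positive SIS value corresponds to the attenuated auditory cortical response to self-produced speech that these studies consistently report.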