2017
DOI: 10.1038/s41598-017-17314-0
Functional and spatial segregation within the inferior frontal and superior temporal cortices during listening, articulation imagery, and production of vowels

Abstract: Classical models of language localize speech perception in the left superior temporal and production in the inferior frontal cortex. Nonetheless, neuropsychological, structural and functional studies have questioned such subdivision, suggesting an interwoven organization of the speech function within these cortices. We tested whether sub-regions within frontal and temporal speech-related areas retain specific phonological representations during both perception and production. Using functional magnetic resonance…


Cited by 25 publications (26 citation statements)
References 83 publications
“…In a previous study, we sought to decode model-free information content from regions involved in vowel listening, imagery and production, and in tone perception (Rampinini et al., 2017). Using four searchlight classifiers of fMRI data, we extracted a set of regions performing above-chance classification of seven vowels or tones in each task.…”
Section: Results (mentioning)
confidence: 99%
“…Moreover, by dividing the minimum/maximum average F1 range of the vowel set into seven bins, we also selected seven pure tones (450, 840, 1370, 1850, 2150, 2500, 2900 Hz), whose frequencies in Hertz were converted first to the closest Bark scale value, and then back to Hertz: this way, pure tones were made to fall into psychophysical sensitive bands for auditory perception. Then, pure tones were generated in Audacity (© Audacity Team; see Rampinini et al., 2017 for further details).…”
Section: Methods (mentioning)
confidence: 99%
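The Hz-to-Bark-and-back procedure quoted above can be sketched in a few lines. The excerpt does not say which Bark formula was used, so Traunmüller's (1990) approximation is assumed here, and "closest Bark scale value" is read as rounding to the nearest whole Bark band; both are assumptions for illustration, not the authors' confirmed method.

```python
def hz_to_bark(f_hz: float) -> float:
    """Traunmüller (1990) approximation of the Bark critical-band scale (assumed)."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def bark_to_hz(z_bark: float) -> float:
    """Algebraic inverse of hz_to_bark (26.28 = 26.81 - 0.53)."""
    return 1960.0 * (z_bark + 0.53) / (26.28 - z_bark)

# The seven pure-tone frequencies quoted in the methods statement (Hz)
tones_hz = [450, 840, 1370, 1850, 2150, 2500, 2900]

# Snap each frequency to the nearest whole Bark value, then map back to Hertz,
# so the tones land inside psychophysically sensitive critical bands.
snapped_hz = [bark_to_hz(round(hz_to_bark(f))) for f in tones_hz]
```

Rounding to an integer Bark value centers each tone in a critical band; a different reading (e.g., rounding to 0.5 Bark) would change the snapped frequencies but not the overall logic.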