2018
DOI: 10.1016/j.neuroimage.2018.08.011
Functional connectivity within the voice perception network and its behavioural relevance

Abstract: Recognizing who is speaking is a cognitive ability characterized by considerable individual differences, which could relate to the inter-individual variability observed in voice-elicited BOLD activity. Since voice perception is sustained by a complex brain network involving temporal voice areas (TVAs) and, though less consistently, extra-temporal regions such as frontal cortices, functional connectivity (FC) during an fMRI voice localizer (passive listening of voices vs non-voices) has been computed within tw…

Cited by 39 publications (46 citation statements)
References 76 publications
“…Interestingly, in many cases we observed maximal activity not in the depth of the pit but on one of the PPs adjacent to it (figure 6d), which had already been identified using the depth profile method on both sides of the STAP (Leroy et al., 2015). The fact that voice-related activity can be clustered into three areas from the anterior STG to the posterior STS (Pernet et al., 2014), and that these areas are functionally interconnected with each other (Aglieri et al., 2018), opens numerous questions on their link with PPs and therefore with the U-shaped structural connectivity.…”
Section: Functional Implications (supporting)
confidence: 51%
“…(B) When using the concatenated latent representations in the MDAE (relu, sigmoid), several larger regions are detected: regions tagged 3 and 4 lie along the superior temporal gyrus bilaterally, region 1 is located in the fundus of the right superior temporal sulcus, and regions 2 and 5 are in the inferior frontal gyrus. All these regions closely match the neuroscientific literature [29]-[31].…”
(supporting)
confidence: 85%
“…We now examine the neuroscientific relevance of our results. Lacking a ground truth, we compare our results qualitatively with state-of-the-art knowledge from the neuroscientific literature [29]-[31]: knowing the task performed by the subject (i.e., passive listening to vocal sounds), we can expect to see a very focal network of brain regions located bilaterally in the temporal lobe (along the superior temporal gyrus and sulcus [29], [30]), as well as regions in the frontal lobe (in the pre-central and inferior frontal gyri [31]). For this, we present on Fig.…”
Section: F. Neuroscientific Relevance of the Model (mentioning)
confidence: 99%
“…Voice-sensitive areas, generally referred to as ‘temporal voice areas’ (TVA), have been highlighted along the superior part of the temporal cortex 2 . Since then, great effort has been put into better characterizing these TVA, with a specific focus on their spatial compartmentalization into functional subparts [3][4][5] . Repetitive transcranial magnetic stimulation over the right mid TVA leads to a persistent voice-detection impairment in a simple voice/non-voice discrimination task 6 , and a rather large body of literature supports the crucial role of the TVA in voice perception and processing 3,[7][8][9] .…”
Section: Introduction (mentioning)
confidence: 99%