2022
DOI: 10.1016/j.cub.2022.07.047
Interaction of bottom-up and top-down neural mechanisms in spatial multi-talker speech perception

Cited by 13 publications (18 citation statements: 5 supporting, 13 mentioning, 0 contrasting). References 77 publications.
“…For example, previous research has suggested that multisensory integration can happen automatically and independent of attention (i.e., preattentively) based on shared temporal properties (Van der Burg et al., 2011; Atilgan et al., 2018). Whereas we might expect a greater influence of attention on multisensory integration at longer latencies given previous research on cocktail party attention (Power et al., 2012; Patel et al., 2022).…”
Section: Single-lag Analysis (mentioning)
confidence: 82%
“…This interpretation aligns with other findings presented in the paper. With the single-lag analysis, we identified three major time points, consistent with early visual and auditory responses (Schroeder et al., 2008; Karthik et al., 2022) and attentional effects (Power et al., 2012; Patel et al., 2022), where gaze was associated with stronger speech tracking driven by non-linear multisensory effects. Intriguingly, in a much earlier epoch (−438 ms onward), speech tracking but not multisensory interactions was affected by gaze.…”
Section: Discussion (mentioning)
confidence: 99%
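
The single-lag analysis referenced in these excerpts evaluates neural speech tracking one stimulus-response lag at a time, so tracking strength can be read off as a function of latency. A minimal sketch, assuming a speech envelope and multichannel EEG at a shared sampling rate; all names, shapes, and parameters below are illustrative, not taken from the cited papers:

```python
# Minimal single-lag analysis sketch: Pearson correlation between a speech
# envelope and each EEG channel, computed separately at every lag.
# Names, shapes, and parameters are illustrative assumptions.
import numpy as np

def single_lag_correlations(envelope, eeg, lags_ms, fs):
    """envelope: (n_samples,) speech envelope.
    eeg: (n_samples, n_channels) neural recording at the same rate fs (Hz).
    lags_ms: lags in milliseconds; positive = response follows the stimulus.
    Returns r with shape (n_lags, n_channels): Pearson r at each single lag."""
    n_samples, n_channels = eeg.shape
    r = np.zeros((len(lags_ms), n_channels))
    for i, lag_ms in enumerate(lags_ms):
        lag = int(round(lag_ms * fs / 1000.0))   # lag in samples
        if lag >= 0:                             # stimulus leads response
            s, x = envelope[: n_samples - lag], eeg[lag:]
        else:                                    # response precedes stimulus
            s, x = envelope[-lag:], eeg[: n_samples + lag]
        for ch in range(n_channels):
            r[i, ch] = np.corrcoef(s, x[:, ch])[0, 1]
    return r

# Example: scan lags from -500 ms to +500 ms in 16 ms steps at fs = 64 Hz.
# r = single_lag_correlations(envelope, eeg, np.arange(-500, 501, 16), fs=64)
```

Peaks in r across lags would correspond to time points like those described above, and negative lags capture anticipatory effects such as the −438 ms epoch mentioned in the excerpt.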
“…At random moments in between trials, participants were asked to report the location of the attended talker and the last sentence uttered by the attended talker. Participants successfully performed the behavioral task (see ref. 24 for a detailed analysis of the behavioral results).…”
Section: Results (mentioning)
confidence: 99%
“…This model not only separates the speech of moving talkers but also allows listeners to accurately track their locations, an aspect crucial for realistic AAD applications. An essential contribution of our study is that incorporating real-time talker trajectories estimated by the speech separation algorithm, in addition to spectro-temporal information, can improve AAD accuracy (refs. 19, 35–37). Further research is needed to distinguish listener-motion-induced from talker-motion-induced acoustic change and how it could be encoded differently in the human auditory cortex (refs. 38, 39).…”
Section: Discussion (mentioning)
confidence: 99%
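
Auditory attention decoding (AAD), as discussed in this excerpt, is often built on linear stimulus reconstruction: a decoder maps lagged EEG back to an envelope, and the separated talker whose envelope best matches the reconstruction is labeled attended. The sketch below shows only that generic correlation backbone, not the cited study's trajectory-aware pipeline; decoder training is assumed to happen elsewhere, and all names and shapes are assumptions:

```python
# Correlation-based AAD sketch (stimulus reconstruction). This illustrates
# the generic spectro-temporal backbone only, not the trajectory-aware
# model described in the excerpt; names and shapes are assumptions.
import numpy as np

def lagged_design_matrix(eeg, lags):
    """Stack lagged copies of the EEG so that row t holds eeg[t + lag]
    for every lag (backward model; lags are non-negative sample counts)."""
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, len(lags) * n_channels))
    for i, lag in enumerate(lags):
        X[: n_samples - lag, i * n_channels:(i + 1) * n_channels] = eeg[lag:]
    return X

def decode_attended(eeg, envelopes, decoder, lags):
    """envelopes: per-talker envelopes from a separation front end.
    decoder: (len(lags) * n_channels,) weights, assumed to be trained
    elsewhere (e.g., ridge regression on attended-speech training data).
    Returns the index of the talker whose envelope best matches the
    envelope reconstructed from the listener's EEG."""
    recon = lagged_design_matrix(eeg, lags) @ decoder
    scores = [np.corrcoef(recon, env)[0, 1] for env in envelopes]
    return int(np.argmax(scores))

# Example on one decision window: lags spanning 0-250 ms at fs = 64 Hz.
# attended = decode_attended(eeg_window, [env1, env2], decoder, lags=range(17))
```

The excerpt's proposal would add the separation front end's real-time talker-trajectory estimates as extra features alongside this spectro-temporal score.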