2023
DOI: 10.1523/jneurosci.1731-22.2023

Neurophysiological Evidence for Semantic Processing of Irrelevant Speech and Own-Name Detection in a Virtual Café

Abstract: The well-known “cocktail party effect” refers to incidental detection of salient words, such as one's own name, in supposedly-unattended speech. However, empirical investigation of the prevalence of this phenomenon and the underlying mechanisms has been limited to extremely artificial contexts and has yielded conflicting results. We introduce a novel empirical approach for revisiting this effect under highly ecological conditions, by immersing participants in a multisensory virtual café and using realistic sti…

Cited by 6 publications (6 citation statements)
References 116 publications
“…Not only was performance worse and speech tracking reduced in the intermittent noise condition, but in this condition we also found an increase in skin conductance, characterized by more frequent phasic responses and a trend toward higher tonic responses, relative to the quiet and continuous noise conditions. These physiological responses reflect heightened activation of the sympathetic nervous system, which is known to respond to salient stimuli, both task-relevant and in the background (Brown et al., 2023; Dawson et al., 1989; Filion et al., 1991; Mueller-Pfeiffer et al., 2014), and is also engaged in conditions requiring high listening effort (Borghini & Hazan, 2018; Mackersie & Cones, 2011; et al., 2017).…”
Section: Discussion
confidence: 99%
“…The ability to move your eyes freely is thought to be a central mechanism in controlling and focusing attention (Craighero & Rizzolatti, 2005). Past VR studies have shown that when given this freedom, some participants perform frequent spontaneous gaze-shifts around the environment, while others keep their eyes fixed on the target speaker (Brown et al., 2023; Shavit-Cohen & Zion Golumbic, 2019). When processing audiovisual speech in noise, past studies have shown that people tend to focus their gaze more intently on the speaker’s face, and particularly on the mouth area, ostensibly to utilize visual cues to overcome the acoustic degradation of the speech (Buchan et al., 2008; Król, 2018; Šabić et al., 2020).…”
Section: Discussion
confidence: 99%
“…Using a linear approach for speech-tracking analysis and auditory attention decoding (Crosse et al., 2016; Zion Golumbic, Ding, et al., 2013), we investigated how the neural representation of the concurrent speech stimuli was modulated by their status as target/non-target, and whether this was affected once they switched roles. Moreover, our choice to use natural audiovisual speech and to present audio in a spatially-realistic free-field manner increases the ecological validity of this study, relative to past studies that mostly used audio-only speech presented through headphones (Brown et al., 2023; Freyman et al., 2001; Keidser et al., 2020; Ross et al., 2007; Shavit-Cohen & Zion Golumbic, 2019; Tye-Murray et al., 2016; Uhrig et al., 2022).…”
Section: Introduction
confidence: 99%