2021
DOI: 10.1101/2021.04.19.440487
Preprint

Spatial alignment between faces and voices improves selective attention to audio-visual speech

Abstract: The ability to see a talker's face has long been known to improve speech intelligibility in noise. This perceptual benefit depends on approximate temporal alignment between the auditory and visual speech components. However, the practical role that cross-modal spatial alignment plays in integrating audio-visual (AV) speech remains unresolved, particularly when competing talkers are present. In a series of online experiments, we investigated the importance of spatial alignment between corresponding faces and vo…


Cited by 1 publication (1 citation statement)
References 68 publications (68 reference statements)
“…According to the command issued by the upper computer, the FPGA receives data from the student and teacher terminals through the data transmission interface, separates the received data into function data and audio data [13,14], identifies the function data code, and performs the corresponding processing.…”
Section: Description of the Main Control (mentioning)
confidence: 99%
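
The citing statement above describes a simple demultiplex-and-dispatch flow: each received frame is separated into function data and audio data, and the function code selects the processing to perform. As a rough illustration only, since the cited paper does not specify its frame layout, and the field names, function codes, and handlers below are assumptions, a host-side C sketch of that flow might look like this:

/*
 * Minimal sketch (not from the cited paper) of a demultiplex-and-dispatch
 * data-handling flow: split a received buffer into a function code and an
 * audio payload, then branch on the function code.
 */
#include <stddef.h>
#include <stdint.h>

/* Assumed frame layout: a 1-byte function code followed by audio bytes. */
typedef struct {
    uint8_t function_code;      /* identifies the requested operation    */
    const uint8_t *audio_data;  /* points into the received buffer       */
    size_t audio_len;           /* number of audio bytes in the frame    */
} Frame;

/* Separate a received buffer into function data and audio data. */
static int parse_frame(const uint8_t *buf, size_t len, Frame *out)
{
    if (len < 1) {
        return -1;              /* too short to contain a function code  */
    }
    out->function_code = buf[0];
    out->audio_data = buf + 1;
    out->audio_len = len - 1;
    return 0;
}

/* Identify the function code and perform the corresponding processing. */
static void handle_frame(const Frame *frame)
{
    switch (frame->function_code) {
    case 0x01:
        /* e.g. route audio received from a student terminal (assumed)   */
        break;
    case 0x02:
        /* e.g. route audio received from the teacher terminal (assumed) */
        break;
    default:
        /* unknown function code: ignore the frame                       */
        break;
    }
}

int main(void)
{
    /* Example frame: function code 0x01 followed by three audio bytes.  */
    const uint8_t received[] = { 0x01, 0xAA, 0xBB, 0xCC };
    Frame frame;
    if (parse_frame(received, sizeof received, &frame) == 0) {
        handle_frame(&frame);
    }
    return 0;
}

On an actual FPGA-based design this split would more likely be done in HDL or on a soft processor core; the C version is only meant to make the receive, separate, and dispatch steps of the quoted description concrete.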