1994
DOI: 10.1044/jshr.3705.1195
Effects of Phonetic Context on Audio-Visual Intelligibility of French

Abstract: Bimodal perception leads to better speech understanding than auditory perception alone. We evaluated the overall benefit of lip-reading on natural utterances of French produced by a single speaker. Eighteen French subjects with good audition and vision were administered a closed-set identification test of VCVCV nonsense words consisting of three vowels [i, a, y] and six consonants [b, v, z, ʒ, r, l]. Stimuli were presented under both auditory and audio-visual conditions with white nois…

Cited by 125 publications (50 citation statements)
References 24 publications
“…Merging information from different senses confers distinct behavioral advantages, enabling faster and more accurate discrimination than with unimodal stimuli (Hershenson, 1962; Morrell, 1968; Stein et al., 1989; Perrott et al., 1990; Hughes et al., 1994; Frens et al., 1995), especially when the signals are degraded (Sumby and Pollack, 1954; MacLeod and Summerfield, 1987; Perrott et al., 1991; Benoît et al., 1994). To realize these advantages, the brain continually coordinates sensory inputs across the audiovisual (Calvert et al., 2000; Grant and Seitz, 2000; Shams et al., 2002; Callan et al., 2003), visual-tactile (Banati et al., 2000; Macaluso et al., 2000; Stein et al., 2001), and audiosomatic (Schulz et al., 2003) domains and combines them into coherent perceptions.…”
Section: Introduction (mentioning)
confidence: 99%
“…This is consistent with the finding that lip protrusion is the most visible lip feature (Benoît et al., 1994).…”
Section: Conclusion: Lip Feature Criteria for the Detection of Prosody (supporting)
confidence: 92%
“…Second, the two modalities produce “synergy”: performance in audio-visual speech perception can outperform that of acoustic-only and visual-only perception across diverse noise conditions (Benoît et al., 1994).…”
Section: Bimodal Nature of Speech Perception (mentioning)
confidence: 99%