Handbook of Research on Face Processing (1989)
DOI: 10.1016/b978-0-444-87143-5.50019-6
Lips, Teeth, and the Benefits of Lipreading

Cited by 59 publications (33 citation statements); references 12 publications.
“…Several studies have shown the importance of the oral cavity (Smeele, Hahnlen, Stevens, Kuhl, & Meltzoff, 1995) and being able to see the teeth (McGrath, 1985; Summerfield, MacLeod, McGrath, & Brooke, 1989).…”

Section: Limitations of the Present Study (mentioning)
Confidence: 99%
“…Several studies have helped to determine this facial information by leaving visible only an individual facial feature (see, e.g., Benoit, Guiard-Marigny, Le Goff, & Adjoudani, 1996; Berger, Garner, & Sudman, 1971; Cohen, Walker, & Massaro, 1996; Greenberg & Bode, 1968; IJsseldijk, 1992; Larr, 1959; Marassa & Lansing, 1995; McGrath, 1985; Montgomery & Jackson, 1983; Stone, 1957; Summerfield, 1979; Summerfield, MacLeod, McGrath, & Brooke, 1989; Summerfield & McGrath, 1984). For example, Summerfield (1979) presented displays in which the talker's lips were coated with ultraviolet paint so that only the lips could be seen.…”

Section: Introduction (mentioning)
Confidence: 99%
“…Both mouthshape (and the visibility of mouth parts) and mouth movement (the dynamics of mouth actions, including rate of speech) play their part. This insight is confirmed by a range of experimental findings (refs. 18, 20, 22) and by findings in applied telematics which show that speechreading accuracy for audiovisual inputs with auditory dynamic noise falls off as the framerate (temporal resolution) of the display of the speaker's face drops from about 30 Hz to 8-12 Hz (ref. 24). In addition, seen rate of speech can be readily discriminated and can directly affect the identification of a heard speech token (refs. 5, 6). But how does this work? Is one process subsumed in the other?…”

Section: Introduction (mentioning)
Confidence: 70%