1999 IEEE Third Workshop on Multimedia Signal Processing (Cat. No.99TH8451)
DOI: 10.1109/mmsp.1999.793890
Lip signatures for automatic person recognition

Abstract: This paper evaluates lip features for person recognition and compares their performance with that of the acoustic signal. Recognition accuracy is found to be equivalent in the two domains, agreeing with the findings of Chibelushi. The optimum dynamic window length for both the acoustic and visual modalities is found to be about 100 ms. Recognition performance of the upper lip is considerably better than that of the lower lip, achieving 15% and 35% identification error rates respectively, using a single digit test and training to…

Cited by 10 publications (5 citation statements)
References 4 publications
“…A frequently extracted feature is a logarithm of the Fourier Transform of the voice signal in each band, along with pitch, tone, cadence, and shape of the larynx (A. K. Jain et al, 1999). Accuracy of voice-based biometric systems can be increased by inclusion of visual speech (lip dynamics) (Jourlin et al, 1997; Luettin et al, 1996; Mason et al, 1999; Wark et al, 1997) and incorporation of soft behavioral biometrics such as accent (Deshpande, Chikkerur, & Govindaraju, 2005; Lin & Simske, 2004).…”
Section: Description of Behavioral Biometrics (mentioning)
confidence: 99%
“…Lip features include the mouth opening or closing, skin around the lips, mouth width, upper/lower lip width, lip opening height/width, and distance between the horizontal lip line and the upper lip (Broun et al, 2002; Shipilova, 2006). Typically, lip dynamics are utilised as part of a multimodal biometric system, usually combined with speaker recognition-based authentication (Jourlin et al, 1997; Luettin et al, 1996; Mason et al, 1999; Wark et al, 1997), but standalone usage is also possible (Mok et al, 2004).…”
Section: Behavioural Biometrics (mentioning)
confidence: 99%
“…A frequently extracted feature is a logarithm of the Fourier transform of the voice signal in each band, along with pitch, tone, cadence, and shape of the larynx (Jain et al, 1999). Accuracy of voice-based biometric systems can be increased by inclusion of visual speech (lip dynamics) (Jourlin et al, 1997; Luettin et al, 1996; Mason et al, 1999; Wark et al, 1997) and incorporation of soft behavioural biometrics such as accent (Deshpande et al, 2005; Lin and Simske, 2004). Recently, some research has been aimed at expanding the developed technology to singer recognition for the purposes of music database management (Tsai and Wang, 2006) and to laughter recognition.…”
(mentioning)
confidence: 99%
“…Another method of extracting dynamic features is to first extract static features, then to derive dynamic features from these by taking derivatives over a window. In [11], radial magnitudes are measured from points around the circumference of the lip, stepping pixel by pixel, to the midpoint of the principal diagonal. The final lip signature is then derived by taking the DCT of the radial magnitudes.…”
Section: Dynamic (mentioning)
confidence: 99%
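The radial-magnitude lip signature described in the last citation statement can be sketched in a few lines. This is a minimal illustration, not the cited implementation: it assumes the lip contour is already available as (x, y) points, approximates the reference point (the midpoint of the principal diagonal in the cited work) by the contour centroid, and uses an unnormalised DCT-II written out with NumPy.

```python
import numpy as np

def lip_signature(contour, n_coeffs=10):
    """Sketch of a radial-magnitude lip signature (hypothetical helper).

    contour  : (N, 2) array of (x, y) points around the lip circumference.
    n_coeffs : number of low-order DCT coefficients to keep.

    The reference point is approximated by the contour centroid; the cited
    work instead steps to the midpoint of the principal diagonal.
    """
    contour = np.asarray(contour, dtype=float)
    centre = contour.mean(axis=0)                     # assumed stand-in for the diagonal midpoint
    radii = np.linalg.norm(contour - centre, axis=1)  # radial magnitudes around the contour

    # Unnormalised DCT-II of the radial profile: X_k = sum_n r_n cos(pi (n + 1/2) k / N)
    n = len(radii)
    idx = np.arange(n)
    basis = np.cos(np.pi * (idx[:, None] + 0.5) * idx[None, :n_coeffs] / n)
    return radii @ basis  # shape (n_coeffs,)

# Example: a circular "contour" has constant radius, so all energy lands in X_0.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(lip_signature(circle, n_coeffs=5))  # first coefficient ~64, the rest ~0
```

Keeping only the low-order DCT coefficients compresses the radial profile into a smooth, fixed-length descriptor, which is what makes it usable as a per-frame static feature from which windowed dynamic features can then be derived.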