2016
DOI: 10.11591/ijeecs.v4.i2.pp439-446

Generic and Robust Method for Head Pose Estimation

Abstract: Head pose estimation has fascinated the research community due to its applications in facial motion capture, human-computer interaction, and video conferencing. It is a prerequisite to gaze tracking, face recognition, and facial expression analysis. In this paper, we present a generic and robust method for model-based global 2D head pose estimation from a single RGB image. In our approach, on the one hand, we use Gabor filters to design a pose descriptor that is robust to illumination and facial expression variations…
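Since the abstract builds its pose descriptor from Gabor filter responses, a minimal sketch of that idea follows, assuming OpenCV. The filter-bank parameters (kernel size, orientations, wavelength) and the mean/std pooling are illustrative assumptions, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def gabor_bank(ksize=21, sigma=4.0, lambd=10.0, gamma=0.5, n_thetas=8):
    """Build a bank of Gabor kernels at n_thetas evenly spaced orientations."""
    kernels = []
    for k in range(n_thetas):
        theta = k * np.pi / n_thetas
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                  lambd, gamma, psi=0, ktype=cv2.CV_32F)
        kernels.append(kern / (np.abs(kern).sum() + 1e-8))  # L1-normalize each kernel
    return kernels

def gabor_descriptor(gray_face, kernels):
    """Pool each filter response into (mean, std) to get a fixed-length descriptor."""
    feats = []
    for kern in kernels:
        resp = cv2.filter2D(gray_face.astype(np.float32), cv2.CV_32F, kern)
        feats.extend([resp.mean(), resp.std()])
    return np.asarray(feats, dtype=np.float32)

# Usage (hypothetical file name):
#   gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
#   desc = gabor_descriptor(gray, gabor_bank())
```

Pooling statistics rather than raw responses keeps the descriptor length independent of image size, which is one common way such descriptors are made comparable across faces.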

Cited by 2 publications (2 citation statements). References 16 publications.
“…As shown in (6), the Euclidean distance between two points is calculated as $D = \sqrt{\sum_{i=1}^{n} (X_i - Y_i)^2}$ (6). A support vector machine (SVM) is used for classification and regression problems [36]. The goal is to find a hyperplane in an N-dimensional space that separates the classes and classifies the query data point.…”

Section: Description of the Classifiers (mentioning, confidence: 99%)
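To make the cited classifier description concrete, here is a hedged sketch: the Euclidean distance of Eq. (6) in NumPy, plus a linear SVM from scikit-learn that finds a separating hyperplane and classifies a query point. The toy descriptors and labels are invented for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

def euclidean_distance(x, y):
    """Eq. (6): D = sqrt(sum_{i=1}^{n} (X_i - Y_i)^2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.sqrt(np.sum((x - y) ** 2))

# Hypothetical 2-D feature vectors with binary class labels.
X_train = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y_train = np.array([0, 1, 0, 1])

clf = SVC(kernel="linear")  # linear SVM: separating hyperplane in feature space
clf.fit(X_train, y_train)

query = np.array([0.85, 0.75])
print(euclidean_distance(X_train[0], query))  # distance per Eq. (6)
print(clf.predict([query]))                   # predicted class for the query point
```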
“…Features that can be used for tracking head pose are the nose, mouth, and face position [18], [14]. There are also approaches that use texture as a pose descriptor [19], [20]. However, tracking requires mandatory initialization of the head position.…”

Section: Introduction (mentioning, confidence: 99%)