2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS)
DOI: 10.1109/btas.2018.8698604
Learning A Shared Transform Model for Skull to Digital Face Image Matching

Abstract: Human skull identification is an arduous task, traditionally requiring the expertise of forensic artists and anthropologists. This paper is an effort to automate the process of matching skull images to digital face images, thereby establishing the identity of the skeletal remains. In order to achieve this, a novel Shared Transform Model is proposed for learning discriminative representations. The model learns robust features while reducing the intra-class variations between skulls and digital face images. Such …
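The abstract describes a transform-learning approach applied jointly to the two modalities: a single transform maps skull and face features into a common space where paired samples share one sparse code, which is what shrinks intra-class (skull vs. face) variation. The sketch below illustrates that idea using standard transform-learning updates (hard thresholding for the codes, a closed-form SVD step for the transform, in the style of Ravishankar and Bresler). All function names, parameters, and the exact objective are illustrative assumptions, not the paper's published formulation.

import numpy as np

def shared_transform_learning(Xs, Xf, lam=0.1, keep=0.3, n_iter=50, seed=0):
    """Illustrative shared transform learning (hypothetical sketch).

    Xs, Xf: d x n matrices of skull and face features; column i of each
    holds the same identity. A single square transform T and one shared
    sparse code Z are learned so that T @ Xs and T @ Xf both approximate
    Z, pulling paired skull and face samples together.
    """
    d, n = Xs.shape
    rng = np.random.default_rng(seed)
    T = np.eye(d) + 0.01 * rng.standard_normal((d, d))
    # Regularized Gram matrix of the stacked data; fixed across iterations.
    G = Xs @ Xs.T + Xf @ Xf.T + lam * np.eye(d)
    L = np.linalg.cholesky(G)
    Linv = np.linalg.inv(L)
    for _ in range(n_iter):
        # Code step: for fixed T, the unconstrained minimizer of
        # ||T Xs - Z||^2 + ||T Xf - Z||^2 is the average projection;
        # hard-threshold it so only the largest `keep` fraction survives.
        Z = 0.5 * (T @ Xs + T @ Xf)
        thr = np.quantile(np.abs(Z), 1.0 - keep)
        Z = np.where(np.abs(Z) >= thr, Z, 0.0)
        # Transform step: closed-form minimizer of
        # ||T Xs - Z||^2 + ||T Xf - Z||^2 + lam*(||T||_F^2 - log det(T T^T)).
        U, S, Vt = np.linalg.svd(Linv @ (Xs @ Z.T + Xf @ Z.T))
        T = 0.5 * Vt.T @ np.diag(S + np.sqrt(S**2 + 2.0 * lam)) @ U.T @ Linv
    return T, Z

At match time, a probe skull feature x would be projected as T @ x and compared against the projected face gallery, e.g. by cosine similarity, since paired skulls and faces are trained toward the same shared code.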

Cited by 4 publications (1 citation statement). References: 16 publications.
“…They claimed the use of a publicly available dataset, IdentifyMe, consisting of 464 skull images, along with semi-supervised and unsupervised transform learning models. In order to automate this process in [181], they proposed a Shared Transform Model for learning discriminative representations. The model learns robust features while reducing the intra-class variations between skulls and digital face images.…”
Section: Methods (citation type: mentioning; confidence: 99%)