2004
DOI: 10.1117/12.543549
Infrared and visible image fusion for face recognition

Abstract: Considerable progress has been made in face recognition research over the last decade, especially with the development of powerful models of face appearance (i.e., eigenfaces). Despite the variety of approaches and tools studied, however, face recognition is not accurate or robust enough to be deployed in uncontrolled environments. Recently, a number of studies have shown that infrared (IR) imagery offers a promising alternative to visible imagery due to its relative insensitivity to illumination changes. However…
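The abstract's reference to eigenfaces can be made concrete with a short sketch. This is a minimal NumPy illustration of the eigenface idea (PCA on flattened face images), not the paper's implementation; the function and variable names here are illustrative assumptions.

```python
import numpy as np

def eigenfaces(train_faces, k=20):
    """Top-k eigenfaces from a (n_images, n_pixels) stack of flattened faces.

    Returns the mean face and a (k, n_pixels) orthonormal eigenface basis."""
    mean_face = train_faces.mean(axis=0)
    centered = train_faces - mean_face
    # Rows of Vt from the SVD are the principal axes, i.e. the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:k]

def project(face, mean_face, basis):
    """Coefficients of one flattened face in the eigenface basis; recognition
    then compares these low-dimensional codes, e.g. by nearest neighbour."""
    return basis @ (face - mean_face)
```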

Cited by 102 publications (54 citation statements)
References 20 publications
“…Image fusion of multiple imaging modalities has been performed in various ways in the literature. Singh et al [1] use PCA to decompose images into components for fusion using a genetic algorithm. Li et al [2] perform fusion by selecting the maximum of the visible and infrared image wavelet coefficients.…”
Section: Introduction (mentioning)
confidence: 99%
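The max-rule wavelet fusion attributed to Li et al [2] in the statement above can be sketched with PyWavelets. This is a hedged illustration of the general technique, not the cited paper's code; the wavelet family and decomposition level are arbitrary choices here.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_max_wavelet(vis, ir, wavelet="db2", level=2):
    """Fuse two registered, same-shape grayscale images by keeping, for each
    wavelet coefficient, whichever source has the larger magnitude."""
    cv = pywt.wavedec2(vis.astype(float), wavelet, level=level)
    ci = pywt.wavedec2(ir.astype(float), wavelet, level=level)
    # Approximation band: the same max-magnitude rule is applied here,
    # although averaging this band is also a common choice.
    fused = [np.where(np.abs(cv[0]) >= np.abs(ci[0]), cv[0], ci[0])]
    # Detail bands (horizontal, vertical, diagonal) at each level.
    for lv, li in zip(cv[1:], ci[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(lv, li)
        ))
    return pywt.waverec2(fused, wavelet)
```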
“…This technique aims at finding the percentage of points "paired" between the concatenated feature pointset of the database image and that of the query image. Two points are considered paired only if the spatial distance (1), the direction distance (2), and the keypoint descriptor distance (3) are all within pre-determined thresholds, set on the basis of experiments to 4 pixels, 30°, and 6 for $r_0$, $\theta_0$, $k_0$ respectively:

$sd(\text{concat}'_j, \text{concat}_i) = \sqrt{(X'_j - X_i)^2 + (Y'_j - Y_i)^2} < r_0$ (1)

$dd(\text{concat}'_j, \text{concat}_i) = \min(|\theta'_j - \theta_i|,\ 360° - |\theta'_j - \theta_i|) < \theta_0$ (2)

$kd(\text{concat}'_j, \text{concat}_i) = \sqrt{(k'_j - k_i)^2} < k_0$ (3)

where, for points $\text{concat}'_j$ and $\text{concat}_i$, $sd$ is the spatial distance, $dd$ is the direction distance, and $kd$ is the keypoint descriptor distance.…”
Section: B. Feature Reduction and Concatenation (mentioning)
confidence: 99%
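The three threshold tests in the quoted statement translate directly into code. A minimal sketch, assuming keypoints are stored as (x, y, orientation in degrees, descriptor vector); the helper names are assumptions, and the thresholds are the quoted experimental values.

```python
import numpy as np

# Thresholds quoted above: r0 = 4 px, theta0 = 30 deg, k0 = 6.
R0, THETA0, K0 = 4.0, 30.0, 6.0

def paired(p, q):
    """True if two keypoints pass all three tests, Eqs. (1)-(3).
    Each keypoint: (x, y, orientation_degrees, descriptor_vector)."""
    (x1, y1, t1, d1), (x2, y2, t2, d2) = p, q
    sd = np.hypot(x1 - x2, y1 - y2)                       # spatial distance (1)
    wrap = abs(t1 - t2) % 360.0
    dd = min(wrap, 360.0 - wrap)                          # direction distance (2)
    kd = np.linalg.norm(np.asarray(d1) - np.asarray(d2))  # descriptor distance (3)
    return sd < R0 and dd < THETA0 and kd < K0

def pairing_percentage(database_pts, query_pts):
    """Percentage of database points with at least one pair in the query set."""
    hits = sum(any(paired(p, q) for q in query_pts) for p in database_pts)
    return 100.0 * hits / max(len(database_pts), 1)
```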
“…This results in a fused feature pointset $\text{concat} = (S_1^{norm}, S_2^{norm}, \ldots, S_m^{norm}, m_1^{norm}, m_2^{norm}, \ldots, m_m^{norm})$. A feature reduction strategy to eliminate irrelevant features can be applied either before [7] or after [5], [6] fusion. Redundant features are then removed by applying the "k-means" clustering technique [12] to the fused pointset of an individual, retaining only the centroid of the points from each cluster. These clusters are formed using the spatial and orientation information of a point.…”
Section: B. Feature Reduction and Concatenation (mentioning)
confidence: 99%
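The redundancy-removal step just quoted (k-means on spatial position and orientation, retaining only centroids) can be sketched with scikit-learn. How the number of clusters is chosen, and how pixel and degree units are scaled against one another, is not specified in the excerpt, so both are left as assumptions here.

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_pointset(points, n_clusters):
    """Collapse a fused pointset to cluster centroids.

    points: (n, 3) array of (x, y, orientation_degrees) per feature point.
    Only the centroid of each cluster is retained, as in the quoted step.
    Note: mixing pixel and degree units in one Euclidean space is an
    assumption; rescaling the columns may be warranted in practice.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(points)
    return km.cluster_centers_
```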
“…Other useful examples of sensor level fusion are discussed in [26][27][28], where visible and thermal infrared face images are fused at the sensor level. By using IR images in conjunction with visible images, illumination challenges in facial recognition can be addressed.…”
Section: Sensor-level Fusion (mentioning)
confidence: 99%
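The works cited as [26][27][28] above fuse visible and thermal face images at the sensor (pixel) level. As a minimal illustration only, and not the method of any of those papers, a per-pixel weighted average of registered images captures the idea:

```python
import numpy as np

def sensor_level_fuse(vis, ir, alpha=0.5):
    """Per-pixel weighted average of registered visible and thermal images.

    alpha trades visible-band detail against the thermal band's relative
    insensitivity to illumination; 0.5 is an arbitrary starting point.
    Both inputs are normalized to [0, 1] before blending.
    """
    v = vis.astype(float) / max(vis.max(), 1e-9)
    t = ir.astype(float) / max(ir.max(), 1e-9)
    return alpha * v + (1.0 - alpha) * t
```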