2005
DOI: 10.1117/12.587088

<title>Image retrieval and reversible illumination normalization</title>

Abstract: We propose a novel approach to retrieving similar images from image databases that works in the presence of significant illumination variations. The most common method to compensate for illumination changes is to perform color normalization. Existing approaches to color normalization tend to destroy image content in that they map distinct color values to identical color values in the transformed color space. From a mathematical point of view, the normalization transformation is not reversible. In this paper…
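To make the abstract's distinction concrete, the sketch below shows what a reversible, per-channel affine illumination normalization could look like: the normalization parameters are retained so the original image can be recovered exactly, and distinct colors stay distinct. This is only an illustration of the reversibility idea under an assumed affine lighting model, not the transform proposed in the paper; the function names are hypothetical.

```python
import numpy as np

def normalize_reversible(img):
    """Zero-mean, unit-variance normalization of each color channel.
    The (mean, std) pair per channel is returned so the mapping can be
    inverted exactly; distinct input colors remain distinct."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    params = []
    for c in range(img.shape[2]):
        mu = img[..., c].mean()
        sigma = img[..., c].std() + 1e-12
        out[..., c] = (img[..., c] - mu) / sigma
        params.append((mu, sigma))
    return out, params

def denormalize(norm, params):
    """Exact inverse of normalize_reversible (up to float round-off)."""
    out = np.empty_like(norm)
    for c, (mu, sigma) in enumerate(params):
        out[..., c] = norm[..., c] * sigma + mu
    return out
```

By contrast, a normalization that clips or quantizes channel values after rescaling maps distinct input colors to the same output color and cannot be undone, which is the loss of image content the abstract refers to.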

Cited by 12 publications (3 citation statements)
References 18 publications
“…We computed basic image statistics, such as mean luminance, contrast density (root-mean-square contrast), and global energy (see Kirchner & Thorpe, 2006) with Matlab 7.0 (The Mathworks, Natick, MA). In addition, we computed color and texture similarity with local pixel-by-pixel principal component analysis with reversible illumination normalization (see Latecki, Rajagopal, & Gross, 2005).…”
Section: Methods
confidence: 99%
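The statistics named in this citation statement were computed in Matlab 7.0; the Python sketch below shows common textbook definitions of the same quantities. The definition used here for "global energy" (mean Fourier power of the mean-subtracted image) is an assumption, not necessarily the one from Kirchner & Thorpe (2006).

```python
import numpy as np

def basic_image_statistics(gray):
    """Mean luminance, RMS contrast, and a global-energy measure for a
    grayscale image with values in [0, 1] (illustrative definitions)."""
    gray = gray.astype(np.float64)
    mean_luminance = gray.mean()
    # Root-mean-square contrast: standard deviation of intensities,
    # reported here relative to the mean luminance.
    rms_contrast = gray.std() / (mean_luminance + 1e-12)
    # "Global energy" taken here as the mean Fourier power of the
    # mean-subtracted image (assumed definition).
    spectrum = np.fft.fft2(gray - mean_luminance)
    global_energy = np.mean(np.abs(spectrum) ** 2)
    return mean_luminance, rms_contrast, global_energy
```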
“…To this end, we assessed (a) mean luminance and (b) contrast density (RMS contrast, Bex & Makous, 2002) of each face stimulus by means of Adobe Photoshop. In addition, we assessed (c) color and (d) texture similarity between each target (emotional) face and the corresponding context (neutral) face, by implementing a local pixel-by-pixel principal component analysis (PCA) with reversible illumination normalization (see Latecki, Rajagopal, & Gross, 2005). One-way ANOVAs (type of emotional expression) yielded no significant differences in luminance (p = .61) or RMS contrast (p = .35) among the emotional faces.…”
Section: Methods
confidence: 99%
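Both citing studies describe a pixel-by-pixel PCA similarity computed on illumination-normalized images. The snippet below is a rough sketch of one way such a color-space PCA comparison could be set up between two equally sized images; the details (local windows, the exact normalization, the similarity score) are assumptions and differ from Latecki, Rajagopal, and Gross (2005).

```python
import numpy as np

def pca_color_similarity(img_a, img_b, n_components=2):
    """Project the pixel colors of two equally sized RGB images onto the
    principal axes of their joint color distribution and correlate the
    projections pixel by pixel (illustrative, not the published method)."""
    a = img_a.reshape(-1, 3).astype(np.float64)
    b = img_b.reshape(-1, 3).astype(np.float64)
    joint = np.vstack([a, b])
    mean = joint.mean(axis=0)
    # Principal axes of the joint color cloud via SVD.
    _, _, vt = np.linalg.svd(joint - mean, full_matrices=False)
    proj_a = (a - mean) @ vt[:n_components].T
    proj_b = (b - mean) @ vt[:n_components].T
    # Normalized correlation of the projected pixel vectors.
    num = np.sum(proj_a * proj_b)
    den = np.linalg.norm(proj_a) * np.linalg.norm(proj_b) + 1e-12
    return num / den
```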
“…For this, 24 colorimetric invariants from the literature have been tested: greyworld normalization (called greyworld in Figures 13 and 14) [45], RGB-rang [46], affine normalization (called affine in Figures 13 and 14) [47], intensity normalization (called chromaticity in Figures 13 and 14) [46], comprehensive color normalization (called comprehensive in Figures 13 and 14) [45], c1c2c3 [40, 41], m1m2m3 [41], l1l2l3 [40, 41], l4l5l6 [48], A1A2A3 [43], c4c5c6 [48], HSL, MaxRGB [46], CrCgCb [43], color constant color indexing (called CCCI in Figures 13 and 14) [46], m4m5m6 [43], standard L2 (called L2 in Figures 13 and 14) [43], maximum-intensity normalization (called Mintensity in Figures 13 and 14) [49], reduced coordinates [50], CrCb [40, 43], opposite colors (o1o2) [40, 43], saturation (S) [41], log-hue [46] and hue (H) [41, 50]. Figure 3 illustrates the visual difference of some colorimetric invariants applied to the initial image.…”
Section: Image Simplification
confidence: 99%
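For reference, the sketch below implements three of the invariants listed in this citation statement (grey-world normalization, rgb chromaticity, and c1c2c3) using their standard textbook formulations; it is illustrative and not taken from the cited implementations.

```python
import numpy as np

def greyworld(img):
    """Grey-world normalization: divide each channel by its mean."""
    img = img.astype(np.float64)
    return img / (img.reshape(-1, 3).mean(axis=0) + 1e-12)

def chromaticity(img):
    """Intensity normalization: divide each pixel by its R+G+B sum."""
    img = img.astype(np.float64)
    return img / (img.sum(axis=-1, keepdims=True) + 1e-12)

def c1c2c3(img):
    """c1c2c3 invariant: arctan of each channel over the maximum of the
    other two channels (standard formulation)."""
    r, g, b = (img[..., i].astype(np.float64) for i in range(3))
    c1 = np.arctan2(r, np.maximum(g, b) + 1e-12)
    c2 = np.arctan2(g, np.maximum(r, b) + 1e-12)
    c3 = np.arctan2(b, np.maximum(r, g) + 1e-12)
    return np.stack([c1, c2, c3], axis=-1)
```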