2002
DOI: 10.1016/s0031-3203(01)00163-7

Illumination color covariant locale-based visual object retrieval

Abstract: Search by Object Model (finding an object inside a target image) is a desirable yet difficult mechanism for querying multimedia data. An added difficulty is that objects can be photographed under different lighting conditions. While human vision presumably has color constancy, an invariant processing, here we seek only covariant processing and look to recover such lighting change. Making use of feature-consistent locales in an image, we develop a scene partition by localization, rather than by image segmen…

Cited by 14 publications (14 citation statements)
References 21 publications
“…The eccentricity e is the ratio of maximum to minimum eigenvalues of the 2 × 2 matrix of second-order central moments of the original, 2D curve, an area-based descriptor. In fact, since the matrix is 2D, this can easily be calculated analytically [15]. Circularity c is the ratio of the square of the perimeter to the area, a contour-based parameter.…”
Section: Global Parameters
Mentioning confidence: 99%
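The excerpt above describes two shape descriptors. As a minimal sketch (the function names and point-set input are my own choices, not taken from the cited paper), eccentricity can indeed be computed analytically from the 2 × 2 central-moment matrix via its trace and determinant, and circularity from perimeter and area:

```python
import numpy as np

def eccentricity(points):
    """Eccentricity as the ratio of max to min eigenvalues of the 2x2
    second-order central moment matrix of a 2D point set.  Because the
    matrix is 2x2, the eigenvalues follow in closed form from the trace
    and determinant."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # second-order central moments
    mu20 = np.mean(centered[:, 0] ** 2)
    mu02 = np.mean(centered[:, 1] ** 2)
    mu11 = np.mean(centered[:, 0] * centered[:, 1])
    tr = mu20 + mu02
    det = mu20 * mu02 - mu11 ** 2
    # analytic eigenvalues of a 2x2 symmetric matrix
    disc = np.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam_max, lam_min = tr / 2.0 + disc, tr / 2.0 - disc
    return lam_max / lam_min

def circularity(perimeter, area):
    """Contour-based circularity: perimeter squared over area
    (equals 4*pi for a perfect circle)."""
    return perimeter ** 2 / area
```

For points sampled on an axis-aligned ellipse with semi-axes a and b, the moment eigenvalues are a²/2 and b²/2, so the eccentricity comes out as (a/b)².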
“…Some researchers propose linear color transformation [4] and diagonal color transformation (independent transformation of each RGB channel), which are derived from the physics-based color model. Drew et al [5] propose a voting scheme with pairs of colors of corresponding features to get candidates of diagonal color transformation.…”
Section: Introduction
Mentioning confidence: 99%
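A diagonal color transformation scales each RGB channel independently. The sketch below estimates such a transform from corresponding color pairs by per-channel least squares; this is a simplified stand-in for the voting scheme over color pairs attributed to Drew et al., not a reproduction of it:

```python
import numpy as np

def estimate_diagonal_transform(colors_src, colors_dst):
    """Least-squares estimate of a diagonal (per-channel) color
    transform d such that colors_dst ~= d * colors_src.
    colors_src, colors_dst: arrays of shape (n, 3) holding
    corresponding RGB values under two illuminants."""
    src = np.asarray(colors_src, dtype=float)
    dst = np.asarray(colors_dst, dtype=float)
    # closed-form per-channel least squares: d_k = sum(s*t) / sum(s^2)
    return (src * dst).sum(axis=0) / (src ** 2).sum(axis=0)

# usage: apply the estimated illumination change to a whole image,
# broadcasting d over the color axis: image_out = image_in * d
```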
“…Miller et al [6] propose a method of non-linear color transformation using color eigenflows learned from multiple pairs of images of the same scene under different lighting conditions. These two methods [5], [6] need multiple pairs of reference colors for estimating the color transformation. It is, however, difficult for a robot vision system to get multiple reference colors in unknown lighting conditions.…”
Section: Introduction
Mentioning confidence: 99%
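The eigenflow idea can be illustrated very roughly: collect per-pixel color differences between paired images of the same scene under two illuminants, and take their principal directions. This is a coarse sketch under my own simplifying assumptions (plain PCA on raw RGB differences); Miller et al.'s actual method models flows in color space more carefully:

```python
import numpy as np

def learn_color_eigenflows(image_pairs, n_components=3):
    """Rough sketch of 'color eigenflows': PCA over per-pixel RGB
    differences between paired images (same scene, different lighting).
    image_pairs: iterable of (img_a, img_b), each of shape (h, w, 3).
    Returns the top principal directions of color change, (k, 3)."""
    diffs = np.concatenate(
        [(b - a).reshape(-1, 3) for a, b in image_pairs], axis=0)
    diffs = diffs - diffs.mean(axis=0)
    # principal directions via SVD of the centered difference matrix
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:n_components]
```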
“…Some researchers propose linear color transformation [6] and diagonal color transformation (independent transformation of each RGB channel), which are derived from the physics-based color model. Drew et al [8] propose a voting scheme with pairs of colors of corresponding features to get candidates of diagonal color transformation.…”
Section: Introduction
Mentioning confidence: 99%
“…Miller et al [9] propose a method of non-linear color transformation using color eigenflows learned from multiple pairs of images of the same scene under different lighting conditions. These two methods [8], [9] need multiple pairs of reference colors for estimating the color transformation. It is, however, difficult for a robot vision system to get multiple reference colors in unknown lighting conditions.…”
Section: Introduction
Mentioning confidence: 99%