2018
DOI: 10.3390/app8112255
Use of Gradient-Based Shadow Detection for Estimating Environmental Illumination Distribution

Abstract: Environmental illumination information is necessary to achieve a consistent integration of virtual objects in a given image. In this paper, we present a gradient-based shadow detection method for estimating the environmental illumination distribution of a given scene, in which a three-dimensional (3-D) augmented reality (AR) marker, a cubic reference object of a known size, is employed. The geometric elements (the corners and sides) of the AR marker constitute the candidate shadow boundaries; they are obtained…

Cited by 5 publications (4 citation statements) · References 15 publications
“…This observation is effective for applications where the spectral power distribution of the illumination (SPD) is similar for both shadowed and non-shadowed regions; thus, the surface color components vary linearly. Approaches based on this consideration are known as color-invariant methods and they are widely used in ADAS applications exploiting different color spaces such as red-green-blue (RGB) [26,27], normalized RGB [28], Hue-Saturation-Intensity (HSI) [29,30], Hue-Saturation-Value (HSV) [11,16,31], Improved-Hue-Saturation-Luminance (IHSL) [32], YUV [33][34][35], c1c2c3 [6], and l1l2l3 [36]. However, in outdoor scenes, the illumination is composed of sunlight and skylight, which have different SPDs.…”
Section: Introduction (mentioning)
Confidence: 99%
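The color-invariant idea described in the statement above — that within a single surface the spectral power distribution of the illumination is similar in shadowed and lit regions, so color components vary roughly linearly with intensity — can be sketched as a simple HSV test: under shadow the value (V) channel drops while the hue (H) channel stays roughly stable. The function below is a minimal illustration of that principle, not any of the cited methods; the threshold values are illustrative assumptions.

```python
# Minimal sketch of an HSV color-invariant shadow test (illustrative only):
# a pixel is flagged as shadow of a surface if its value (V) is much lower
# than a lit reference pixel of that surface while its hue (H) is similar.
import colorsys

def is_shadow(pixel_rgb, lit_rgb, v_ratio_max=0.6, h_tol=0.08):
    """Classify pixel_rgb as a shadowed sample of the surface lit_rgb.

    pixel_rgb, lit_rgb: (r, g, b) tuples with components in [0, 1].
    v_ratio_max: shadow if pixel V / lit V falls below this ratio.
    h_tol: hues must agree within this tolerance (hue lies in [0, 1)).
    Both thresholds are illustrative assumptions, not values from the paper.
    """
    h_p, _, v_p = colorsys.rgb_to_hsv(*pixel_rgb)
    h_l, _, v_l = colorsys.rgb_to_hsv(*lit_rgb)
    if v_l == 0:                                          # black reference: undefined
        return False
    hue_diff = min(abs(h_p - h_l), 1.0 - abs(h_p - h_l))  # circular hue distance
    return (v_p / v_l) < v_ratio_max and hue_diff < h_tol

# A darkened pixel with the same chromaticity as the lit surface is flagged
# as shadow; the lit pixel itself is not.
lit = (0.8, 0.4, 0.3)
dark_same_hue = (0.32, 0.16, 0.12)    # same hue, ~40% of the lit intensity
print(is_shadow(dark_same_hue, lit))  # True
print(is_shadow(lit, lit))            # False
```

As the statement notes, this linearity assumption breaks down outdoors, where sunlight and skylight have different spectral power distributions, so hue is no longer invariant across the shadow boundary.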
“…In other words, each fingertip acted as a virtual marker to interact with virtual objects at close distances. Despite its usefulness for some AR applications, this approach could be considerably distracting under visual conditions affected by illumination or occlusion [6][7][8]. While recent AR applications have tried to recognize visual environments without using a marker, in order to deploy virtual objects in natural ways [9], such attempts required substantial computation on the AR device; thus, their use was limited to certain AR scenarios [10].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Environmental illumination information is used to render CG objects with coherent virtual shadows. Many methods have been introduced for generating the shadows of CG objects according to the light distribution of indoor and outdoor environments, enabling users to have a more realistic AR experience [1][2][3][4][5][6][7][8][9][10][11][12][13][14]. The goal of our study was to estimate the outdoor illumination information for a scene containing a human object.…”
Section: Introduction (mentioning)
Confidence: 99%
“…In previous studies on the estimation of indoor illumination information [6][7][8], it has been important to determine the sampling positions on the shadow surface. Therefore, three-dimensional (3D) model information about the object that casts shadows is required, as is a sampling strategy with which to choose the sample points on the shadow regions according to the object's geometry.…”
Section: Introduction (mentioning)
Confidence: 99%
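The geometry-driven sampling described in the last statement can be illustrated with a small sketch. Assuming (as a simplified stand-in for the paper's cubic AR marker, not its actual algorithm) an axis-aligned cube resting on the ground plane z = 0 and a single directional light, candidate shadow-boundary points can be obtained by projecting the cube's upper corners along the light direction onto the ground; sample points for illumination estimation could then be chosen around those projections.

```python
# Geometric sketch (assumed setup, not the paper's method): project the top
# corners of a ground-standing cube onto the plane z = 0 along a directional
# light to obtain candidate shadow-boundary points.

def project_to_ground(point, light_dir):
    """Project a 3D point onto z = 0 along light_dir (which must point down)."""
    x, y, z = point
    lx, ly, lz = light_dir
    if lz >= 0:
        raise ValueError("light must point toward the ground (lz < 0)")
    t = -z / lz                        # parameter where the ray reaches z = 0
    return (x + t * lx, y + t * ly, 0.0)

def cube_shadow_corners(size, light_dir):
    """Ground projections of the four top corners of an axis-aligned cube
    of side `size` whose base corner sits at the origin."""
    s = size
    top = [(0, 0, s), (s, 0, s), (0, s, s), (s, s, s)]
    return [project_to_ground(p, light_dir) for p in top]

# A light tilted toward +x shifts each top corner by one cube side in x.
corners = cube_shadow_corners(1.0, (1.0, 0.0, -1.0))
print(corners)
```

Connecting these projected corners with the cube's ground-contact edges outlines the shadow region from which surface samples can be drawn, which is the role the 3D model information plays in the sampling strategies the statement refers to.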