2011 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2011)
DOI: 10.1109/ismar.2011.6092382

Adaptive camera-based color mapping for mixed-reality applications

Abstract: We present a novel adaptive color mapping method for virtual objects in mixed-reality environments. In several mixed-reality applications, added virtual objects should be visually indistinguishable from real objects. Recent mixed-reality methods use global-illumination algorithms to approach this goal. However, simulating the light distribution is not enough for visually plausible images. Since the observing camera has its very own transfer function from real-world radiance values to RGB colors, virtual objects…
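The abstract notes that the observing camera applies its own transfer function from scene radiance to RGB, so rendered virtual objects must be color-mapped to match the camera's output. As a minimal illustration of that idea (not the paper's actual method), the sketch below fits an affine RGB mapping, a hypothetical 3x4 matrix estimated by least squares from corresponding rendered/observed color samples, and applies it to virtual-object colors:

```python
import numpy as np

def fit_affine_color_map(rendered, observed):
    """Fit a 3x4 affine map M so that M @ [r, g, b, 1] ~ observed color.

    rendered, observed: (N, 3) arrays of corresponding RGB samples in [0, 1],
    e.g. colors of a calibration target as rendered vs. as seen by the camera.
    """
    rendered = np.asarray(rendered, dtype=float)
    observed = np.asarray(observed, dtype=float)
    # Augment with a constant column so the map includes an offset term.
    X = np.hstack([rendered, np.ones((rendered.shape[0], 1))])  # (N, 4)
    # Least-squares solve of X @ M.T ~ observed.
    M, *_ = np.linalg.lstsq(X, observed, rcond=None)
    return M.T  # (3, 4)

def apply_color_map(M, colors):
    """Apply the fitted affine map to (N, 3) RGB colors, clipped to [0, 1]."""
    colors = np.asarray(colors, dtype=float)
    X = np.hstack([colors, np.ones((colors.shape[0], 1))])
    return np.clip(X @ M.T, 0.0, 1.0)
```

In practice the paper's adaptive approach updates the mapping per frame from the live camera image; this fixed affine fit only shows the basic radiance-to-RGB correction step under those simplifying assumptions.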

Cited by 11 publications (6 citation statements)
References 10 publications
“…Zhang et al (2019) perform a 3D reconstruction using multiple RGB images as input. For the works from Knecht et al (2011); Grosch et al (2007); Kan and Kaufmann (2012); Rohmer et al (2014) and Tuceryan et al (2019), the 3D geometry of the scene was previously known.…”
Section: Analysis of Results
confidence: 99%
“…To identify lighting conditions of the real scene, such as the position of the light source, many works, Knecht et al (2011); Grosch et al (2007); Kan and Kaufmann (2013); Franke (2013); Kan and Kaufmann (2012); Rohmer et al (2014) and Gierlinger et al (2010), use cameras with fish-eye lenses. In Schwandt and Broll (2016) and Pereira et al (2020), however, the position of the light source was previously known, and in Marques et al (2018) […] A perceived trend was the use of Differential Rendering by Debevec (2008), as in Kan and Kaufmann (2013); Rohmer et al (2017); Franke (2013); Kan and Kaufmann (2012); Gruber et al (2015) and Gruber et al (2012).…”
Section: Analysis of Results
confidence: 99%
“…Regarding the determination of the position and orientation of the hand, or gesture recognition, some works rely on dedicated hardware such as a Kinect [11,12], a flystick [13,14], or other devices, which gives accurate results but at high cost. Others are vision-based, using different vision algorithms [8,10].…”
Section: Introduction
confidence: 99%