2021
DOI: 10.1007/s11432-020-3024-5
Neural compositing for real-time augmented reality rendering in low-frequency lighting environments

Abstract: We present neural compositing, a deep-learning based method for augmented reality rendering, which uses convolutional neural networks to composite rendered layers of a virtual object with a real photograph to emulate shadow and reflection effects. The method starts from estimating the lighting and roughness information from the photograph using neural networks, renders the virtual object with a virtual floor into color, shadow and reflection layers by applying the estimated lighting, and finally refines the re…
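As a rough illustration of the layer-based compositing the abstract describes, a naive, non-neural baseline can be sketched as follows. The function name, strength parameters, and blending formulas are assumptions made for illustration only; the paper's actual method refines such layers with convolutional networks rather than fixed weights:

```python
import numpy as np

def composite_layers(photo, color, alpha, shadow, reflection,
                     shadow_strength=0.6, reflection_strength=0.3):
    """Naive layer compositing (illustrative, not the paper's network).

    All inputs are float arrays in [0, 1]: photo, color, reflection are
    (H, W, 3); alpha and shadow are (H, W, 1) masks.
    """
    # Attenuate the background where the virtual object casts a shadow.
    background = photo * (1.0 - shadow_strength * shadow)
    # Add the rendered floor-reflection layer on top of the shadowed background.
    background = np.clip(background + reflection_strength * reflection, 0.0, 1.0)
    # Standard "over" compositing of the object's color layer.
    return alpha * color + (1.0 - alpha) * background
```

The point of the neural refinement step described in the abstract is precisely that fixed scalar strengths like these cannot match the spatially varying softness and tint of real shadows and reflections.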

Cited by 7 publications (6 citation statements)
References 41 publications
“…The new method of lighting can now use a lighter model to obtain good results from the perspective of rendering. Ma et al [26] proposed Neural Synthesis, a deep learning-based augmented reality rendering method that uses a convolutional neural network to synthesize the rendering layer of the background with the foreground, while simulating shadow and reflection effects to achieve overall harmony. From the perspective of light, Hu et al [27] divided the harmonization into two small tasks: lighting estimation of background images and rendering of foreground objects.…”
Section: End-to-end Light Models
confidence: 99%
“…Gardner et al [17]; Geoffroy et al [18]; Hung et al [19]; [20]; Garon et al [21]; Gardner et al [22]; Nestmeyer et al [23]; Pandey et al [24]; Inoue et al [25]; Ma et al [26]; Hu et al [27] Light transfer model Light information is additionally extracted and transferred to different images.…”
Section: End-to-end Light Model
confidence: 99%
“…This GCM can fully exploit annotated composite images. In recent years, encouraging progress has been made in object placement and image harmonization, and novel solutions for shadow and reflection generations have emerged [12,13,25,26].…”
Section: Related Work
confidence: 99%
“…These approaches are simple, but obtaining high-quality rendering results can be challenging since a single map cannot appropriately represent spatially varying illumination. Recent learning-based approaches [25], [13], [15], [22] demonstrated that a neural network could estimate spatially varying illumination (not only a single map) from a single RGB image. These learning-based techniques demonstrated impressive results even from a single image, but accurately recovering a complete illumination of a real scene is still challenging since it should predict unobserved illumination using only limited information (e.g., a single image).…”
Section: A Capturing Scene Illumination
confidence: 99%