2021
DOI: 10.48550/arxiv.2111.13679
Preprint

NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images

Abstract: Neural Radiance Fields (NeRF) is a technique for high quality novel view synthesis from a collection of posed input images. Like most view synthesis methods, NeRF uses tonemapped low dynamic range (LDR) images as input; these images have been processed by a lossy camera pipeline that smooths detail, clips highlights, and distorts the simple noise distribution of raw sensor data. We modify NeRF to instead train directly on linear raw images, preserving the scene's full dynamic range. By rendering raw output images fro…

Cited by 5 publications (6 citation statements)
References 42 publications
“…Recent work uses this property to model HDR scenes [27,39] and even estimate a physically-based camera sensor model [50]. We also apply photometric self-calibration in this work, but require a different sensor model that fits to typical X-ray detectors.…”
Section: Implicit Neural Representations
confidence: 99%
“…In this subsection, we propose an ablation study to assess the effectiveness of different supervision losses in place of the basic L1 loss. We use the same loss function as the one described in [Mildenhall et al 2021], which applies a tone curve ψ(x) = log(x + ε) that more strongly penalizes errors in dark regions. Results of the ablation study are presented in Table 7.…”
Section: A6 Supervision With Various Loss Functions
confidence: 99%
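The log tone-curve supervision mentioned in the statement above can be sketched as a loss computed in the tone-mapped domain. This is a minimal NumPy illustration, not the paper's exact implementation; the function names, the L2 form of the comparison, and the choice of ε = 10⁻³ are assumptions for the sake of the example.

```python
import numpy as np

def tone_curve(x, eps=1e-3):
    """Log tone curve psi(x) = log(x + eps).

    Compresses bright values, so a fixed absolute error in a dark
    region maps to a larger tone-mapped difference than the same
    error in a bright region.
    """
    return np.log(x + eps)

def tonemapped_loss(pred, target, eps=1e-3):
    """Mean squared error in the tone-mapped domain (a sketch of the
    'more strongly penalizes errors in dark regions' behavior;
    the eps value is an assumption)."""
    return np.mean((tone_curve(pred, eps) - tone_curve(target, eps)) ** 2)
```

For example, an absolute error of 0.01 near a pixel value of 0.01 produces a much larger `tonemapped_loss` than the same 0.01 error near a pixel value of 0.5, which is the dark-region emphasis the ablation study is testing.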
“…Surface representation methods represent the scene either explicitly as point clouds [1,26,44,51,70,75] or meshes [4,22,65], or implicitly using signed distance functions [11,25,43,63,74]. Volumetric representations, on the other hand, typically use voxel grids [40,61], octrees [29,77], multi-plane images [71,79], an implicit neural network [17,30,41], or a coordinate-based network as in NeRF [37] and its variants [3,35,38]. Recently, works such as VolSDF [73], NeuS [68] and UNISURF [42] propose using volumetric rendering methods to extract a surface representation.…”
Section: Neural Scene Representations
confidence: 99%