2020
DOI: 10.1145/3414685.3417827

X-Fields

Abstract: We suggest to represent an X-Field (a set of 2D images taken across different view, time or illumination conditions, i.e., video, light field, reflectance fields or combinations thereof) by learning a neural network (NN) to map their view, time or light coordinates to 2D images. Executing this NN at new coordinates results in joint view, time or light interpolation. The key idea to make this workable is a NN that already knows the "basic tricks" of graphics (lighting, 3D projection, occlusion) in a hard-coded and differentiable form.
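To make the abstract's mapping concrete, below is a minimal sketch, assuming PyTorch, of a network that regresses a 2D image directly from a (view, time, light) coordinate; all names are illustrative. Note that the actual method does not regress pixels this way: it predicts Jacobians and warps the captured images through hard-coded, differentiable graphics operations.

```python
# Minimal sketch (assumption: PyTorch) of the coordinate-to-image mapping
# described in the abstract. Names are illustrative; the paper's method
# predicts Jacobians and warps observed images instead of regressing pixels.
import torch
import torch.nn as nn

class XFieldToy(nn.Module):
    def __init__(self, coord_dim=3, height=64, width=64):
        super().__init__()
        self.height, self.width = height, width
        self.net = nn.Sequential(
            nn.Linear(coord_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, height * width * 3), nn.Sigmoid(),
        )

    def forward(self, coords):
        # coords: (batch, 3) = normalized (view, time, light) coordinates
        rgb = self.net(coords)
        return rgb.view(-1, 3, self.height, self.width)

model = XFieldToy()
# After fitting the network to the captured X-Field images, querying an
# unseen coordinate yields the jointly interpolated view.
novel = model(torch.tensor([[0.25, 0.50, 0.75]]))
print(novel.shape)  # torch.Size([1, 3, 64, 64])
```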

Cited by 80 publications (15 citation statements). References 57 publications.

Citation statements (ordered by relevance):
“…Other learning-based solutions apply to specific capture setups, such as a liquid refractive surface [TLY20]. Finally, X-Fields [MB20] can handle limited dynamic temporal phenomena, but is restricted to the small-baseline capture typical of light fields. Deep learning methods are a very promising avenue for future work; current solutions, however, do not address the wide-baseline, unsynchronized capture of dynamic phenomena that we target.…”
Section: Multi-video Dense Reconstruction
confidence: 99%
“…RGB [15,36,42,51,52,63,95] or RGB-D [17,34,35,55,87,88] solutions have been widely explored. Other methods [4,47,85] model the dynamic scene by neural networks for view synthesis. [4] use a neural network to regress each image from all others to achieve view, time, or light interpolation.…”
Section: Related Work
confidence: 99%
“…Other methods [4,47,85] model the dynamic scene by neural networks for view synthesis. [4] use a neural network to regress each image from all others to achieve view, time, or light interpolation. [47] use an encoder-decoder network to transfer the 2D images into a 3D volume and leverage volumetric rendering for end-to-end training.…”
Section: Related Work
confidence: 99%
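The statement above mentions an encoder-decoder method ([47]) that lifts 2D images into a 3D volume and trains end-to-end through volumetric rendering. The core differentiable step is front-to-back alpha compositing along each camera ray; a minimal sketch, assuming NumPy, with illustrative names (how the RGBA volume is produced and sampled is elided):

```python
# Sketch (assumption: NumPy) of front-to-back volumetric compositing, the
# differentiable rendering step used by volume-based view synthesis methods.
import numpy as np

def composite_ray(rgb, density, step):
    """rgb: (S, 3) colors sampled along a ray; density: (S,); step: sample spacing."""
    alpha = 1.0 - np.exp(-density * step)        # opacity of each sample
    trans = np.cumprod(1.0 - alpha + 1e-10)      # transmittance through samples 0..i
    trans = np.concatenate([[1.0], trans[:-1]])  # light reaching sample i from the camera
    weights = alpha * trans                      # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)  # final pixel color

S = 64
pixel = composite_ray(np.random.rand(S, 3), 5.0 * np.random.rand(S), step=1.0 / S)
print(pixel)  # one rendered RGB value
```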
“…At run-time, the pre-computed data is efficiently decompressed and interpolated. On the other hand, neural networks have demonstrated their ability to encode radiance distributions [BMSR20, MST*20], and to efficiently compress and interpolate reflectance data [RJGW19]. In both cases, pre-computation allows the light transport to be captured and represented in a compact data structure for fast evaluation at run-time.…”
Section: Introduction
confidence: 99%
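The compact neural encodings cited in this last statement commonly pass coordinates through a Fourier-feature (positional) encoding before a small MLP, which is what lets low-dimensional inputs represent high-frequency radiance or reflectance variation. A minimal sketch of that encoding, assuming NumPy; names are illustrative:

```python
# Sketch (assumption: NumPy) of the Fourier-feature ("positional") encoding
# commonly used when coordinate networks encode radiance or reflectance data.
import numpy as np

def positional_encoding(x, num_freqs=6):
    """Map coordinates x to [sin(2^k * pi * x), cos(2^k * pi * x)] features."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    angles = x[..., None] * freqs  # (..., dim, num_freqs)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(*x.shape[:-1], -1)

print(positional_encoding(np.array([0.3, 0.7])).shape)  # (24,)
```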