2022
DOI: 10.48550/arxiv.2206.11896
Preprint

EventNeRF: Neural Radiance Fields from a Single Colour Event Camera

Abstract: Learning coordinate-based volumetric 3D scene representations such as neural radiance fields (NeRF) has so far been studied assuming RGB or RGB-D images as inputs. At the same time, it is known from the neuroscience literature that the human visual system (HVS) is tailored to process asynchronous brightness changes rather than synchronous RGB images, in order to build and continuously update mental 3D representations of the surroundings for navigation and survival. Visual sensors that were inspired by HVS principl…

Cited by 2 publications (3 citation statements)
References 33 publications

“…Such a characteristic has motivated us to approach the task of event-to-video reconstruction by optimizing the INR of video from its temporal derivatives. Previous research has investigated the potential of INR for novel view synthesis using event data [20,24,38]. These approaches reconstruct 3D neural radiance fields using multiple event sequences with known camera poses from a stationary scene.…”
Section: Event Generation (citation type: mentioning; confidence: 99%)
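The event-generation model underlying this line of work can be sketched with a minimal idealized simulator: an event camera pixel fires a signed event whenever its log-brightness drifts by more than a contrast threshold since the last event. The function name and threshold value below are illustrative assumptions, not code from any of the cited papers.

```python
import numpy as np

def generate_events(log_intensity, timestamps, threshold=0.25):
    """Idealized per-pixel event camera model (a sketch, not any paper's
    exact implementation): emit an event of polarity +1/-1 each time the
    log-brightness changes by `threshold` relative to a reference level."""
    events = []  # list of (time, polarity)
    ref = log_intensity[0]
    for t, level in zip(timestamps[1:], log_intensity[1:]):
        while level - ref >= threshold:   # brightness rose past the threshold
            ref += threshold
            events.append((t, +1))
        while ref - level >= threshold:   # brightness fell past the threshold
            ref -= threshold
            events.append((t, -1))
    return events

# A pixel whose log-brightness ramps from 0 to 1 and back produces a burst of
# positive events followed by a burst of negative ones.
signal = np.concatenate([np.linspace(0.0, 1.0, 50), np.linspace(1.0, 0.0, 50)])
ts = np.arange(len(signal))
evs = generate_events(signal, ts, threshold=0.25)
```

Because events encode temporal brightness derivatives rather than absolute intensity, reconstructing video from them amounts to integrating this stream, which is why the cited work optimizes an INR of the video through its temporal derivatives.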
“…The vast body of existing research in NeRF was investigated based on RGB-based cameras, which suffer from inevitable shortcomings, e.g., low dynamic range and motion blur in unfavourable visual conditions. Thus, recent attention has been paid to the usage of event cameras for NeRF [235], [236], [237]. The first work is EventNeRF [235], which is trained with pure event-based supervision.…”
Section: New Directions (citation type: mentioning; confidence: 99%)
“…Thus, recent attention has been paid to the usage of event cameras for NeRF [235], [236], [237]. The first work is EventNeRF [235], which is trained with pure event-based supervision. It demonstrates that the NeRF estimation from a single fast-moving event camera in unfavourable scenarios (e.g., fast-moving objects, motion blur, or insufficient lighting) is feasible while frame-based approaches fail.…”
Section: New Directions (citation type: mentioning; confidence: 99%)
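The pure event-based supervision mentioned here can be illustrated with a generic per-pixel loss: the change in rendered log-radiance between two camera poses should match the signed event count observed at that pixel, scaled by the contrast threshold. This is a sketch of the general idea behind EventNeRF-style training, not the authors' implementation; the function name and threshold are assumptions.

```python
import numpy as np

def event_supervision_loss(log_render_t0, log_render_t1, event_sum, threshold=0.2):
    """Generic event-based supervision sketch: penalize the mismatch between
    the predicted log-radiance change of the rendered views and the
    threshold-scaled net event count per pixel (positive minus negative)."""
    predicted = log_render_t1 - log_render_t0
    observed = threshold * event_sum
    return float(np.mean((predicted - observed) ** 2))

# Toy check: renders whose log-radiance change exactly matches the observed
# events incur zero loss, so gradients only flow where they disagree.
net_events = np.array([[2.0, -1.0], [0.0, 3.0]])  # net events per pixel
render_t0 = np.zeros((2, 2))
render_t1 = 0.2 * net_events                      # perfect agreement
loss = event_supervision_loss(render_t0, render_t1, net_events, threshold=0.2)
```

Because the loss involves only brightness differences, it needs no ground-truth frames, which is what makes training from a single fast-moving event camera feasible where frame-based supervision fails.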