2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros51168.2021.9636327

EVReflex: Dense Time-to-Impact Prediction for Event-based Obstacle Avoidance

Abstract: This paper explores the potential of event cameras to enable continuous-time Reinforcement Learning. We formalise this problem, where a continuous stream of unsynchronised observations is used to produce a corresponding stream of output actions for the environment. This lack of synchronisation enables greatly enhanced reactivity. We present a method to train on event streams derived from standard RL environments, thereby solving the proposed continuous-time RL problem. The CERiL algorithm uses specialised networ…

Cited by 11 publications (11 citation statements)
References 19 publications
“…Energy Efficiency [160] Leveraged neuromorphic hardware for energy-efficient and robust obstacle avoidance. Sensor Fusion [161], [162] Combined event cameras with lidar for innovative Time-To-Impact estimation, improving accuracy in dynamic environments. High-Speed Reactive Control [163] Pioneered fast obstacle avoidance suitable for high-speed sensor-based reactive control.…”
Section: Challenge Addressed
confidence: 99%
“…Attempts have also been made to directly learn the structure of the scene and the movement of the camera from the event cameras [18]. Another line of research learns dense time-to-collision (TTC) from monocular event sequences [19]. In addition to these perception-focused works, end-to-end learning of control input from event data has enabled complicated control tasks such as UAV navigation [20].…”
Section: B. Event Cameras
confidence: 99%
“…[31] showed how to land a spacecraft using event cameras by computing τ from the divergence of optical flow. [32] fuses information from a depth camera and τ to compute "time-to-impact", which can in turn be used to dodge dynamic obstacles without prior knowledge of scene geometry or obstacles. In robotics, most methods that perform optical flow based control use initial height estimates either implicitly or explicitly, as remarked in [15].…”
Section: Time-to-contact
confidence: 99%
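The τ-from-flow-divergence relationship cited above can be sketched numerically. This is a minimal illustration, not the method of any cited paper: it assumes pure camera approach toward a fronto-parallel surface, for which the flow field in normalized image coordinates is u = x/τ, v = y/τ, so the divergence ∂u/∂x + ∂v/∂y equals 2/τ.

```python
import numpy as np

def time_to_contact(flow_u, flow_v, dx=1.0):
    """Estimate time-to-contact tau from the divergence of a dense
    optical-flow field. Under pure approach toward a fronto-parallel
    surface, divergence = 2/tau."""
    du_dx = np.gradient(flow_u, dx, axis=1)  # x varies along axis 1
    dv_dy = np.gradient(flow_v, dx, axis=0)  # y varies along axis 0
    divergence = du_dx + dv_dy
    return 2.0 / divergence.mean()

# Synthetic flow for a camera approaching with tau = 0.5 s.
tau_true = 0.5
xs = np.linspace(-1.0, 1.0, 64)
x, y = np.meshgrid(xs, xs)
u, v = x / tau_true, y / tau_true

dx = xs[1] - xs[0]
tau_est = time_to_contact(u, v, dx)
print(tau_est)  # ~0.5
```

In practice the flow field is noisy and the scene is not planar, so real systems (including the depth-fused "time-to-impact" approach above) must go beyond this global average, e.g. by estimating τ densely per pixel.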