2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros47612.2022.9981775

MMFN: Multi-Modal-Fusion-Net for End-to-End Driving

Cited by 22 publications (6 citation statements) · References 16 publications
“…Importantly, our off-road infraction penalty in this track (ORI = 0.0) emphasizes the seamless navigation facilitated by the map. This compares favorably to all methods on both Leaderboard 1 and 2 Track MAP, where some map-based approaches based on TF++ [58] and MMFN [20] also achieve an ORI = 0.0. Mapless Navigation (Leaderboard 1 and 2 Track SENSORS): We evaluated our hybrid CaRINA stack for mapless navigation.…”
Section: Results on CARLA Leaderboards
confidence: 83%
“…In addition to the presented taxonomy, studies on end-to-end navigation also focus on input representation aspects and model design. This includes considerations such as the number of cameras (e.g., single or multi-camera setups) [15][16][17], methods for 3D data representation (e.g., point clouds or Bird's Eye View images) [15,16,18,20], sensor fusion and multimodality (e.g., different sensors and feature fusion methods) [19][20][21], interaction with traffic agents (e.g., interaction graphs or grid maps) [15,20], deep learning technologies (e.g., transformers, graph neural networks, deep reinforcement learning, attention mechanisms, generative models, etc.) [15,16,20,21], decision-making within the network (e.g., high-level command input or inference) [17,18], and the accuracy or feasibility of the output (e.g., using standard controllers to estimate final outputs or filtering the output of the deep learning model) [15,18,21].…”
Section: End-to-End Autonomous Driving
confidence: 99%
“…However, this pipeline suffers from hand-tuned parameters, complex intermediate representations, and other drawbacks. To alleviate these challenges, fully end-to-end approaches based on imitation learning or reinforcement learning have become increasingly popular in recent years (Liang et al., 2018; Gao et al., 2017; Ma et al., 2020; Zhang et al., 2022); they exploit the potential of learning technologies and can achieve performance comparable to human drivers. Various learning-based end-to-end navigation approaches are presented in the literature (Codevilla et al., 2018; Liang et al., 2018; Gao et al., 2017; Ma et al., 2020; Huang et al., 2021), which directly learn the relationship between raw sensor observations of the surrounding environment and the vehicle's control policy, and practical demonstrations of learning-based end-to-end policies trained on real vehicles have been provided (Cui et al., 2022).…”
Section: Introduction
confidence: 99%
“…Nearly all existing end-to-end navigation methods assume clean sensor data (Huang et al., 2021; Ma et al., 2020; Amini et al., 2019), and only a few works consider sensor noise and challenging weather conditions (Gao et al., 2017; Zhang et al., 2021; Lee et al., 2022; Liu et al., 2022). However, recent research in the computer-vision community has clearly shown the fragility of deep-learning models in the presence of data distribution shifts (Koh et al., 2021) and of disturbances and attacks injected into the input data (Wu et al., 2018; Tsai et al., 2020; Zhang et al., 2022), and several demonstrations from the robotics community have also successfully illustrated how to attack raw sensor data, as well as the high-level planning system, to degrade the performance of self-driving systems (Zeng et al., 2018; Shen et al., 2021; Shu et al., 2021). Thus, resilience is of great importance for real applications of any vehicle navigation model.…”
Section: Introduction
confidence: 99%