Probabilistic End-to-End Vehicle Navigation in Complex Dynamic Environments with Multimodal Sensor Fusion
2020
DOI: 10.1109/lra.2020.2994027

Cited by 54 publications (41 citation statements)
References 18 publications
“…The multi-sensor fusion method was designed around an interpretable neural network for body sensor networks, handling communication and data processing among various sensors and robots in medical human-robot interactions. Cai et al [17] presented an autonomous driving system using end-to-end control for autonomous vehicles under varied environmental conditions and dynamic obstacles. In that study, cars, pedestrians, motorcyclists, and bicyclists were modeled from diverse data spanning urban and rural areas, traffic densities, weather, and times of day, collected from cameras, LiDAR, and radar.…”
Section: Large-scale Sensor Fusion For Real-time Event Detection
confidence: 99%
“…The significant difference between their work and ours is that we propose image-based deep learning for scalable multi-sensor fusion, whereas theirs uses feature-based deep learning. Cai et al [17] mainly focused on probabilistic model building (without considering edge intelligence) using visual data from cameras, LiDAR, and radar together with environmental conditions. In contrast, we focus on edge-based event-detection modeling using visual analytics with real-time sensor data.…”
Section: Large-scale Sensor Fusion For Real-time Event Detection
confidence: 99%
“…Developing an effective multi-modal fusion method enhances the data efficiency of environmental information, which improves the perception [120] of artificial agents coping with dynamic and complex environments. Therefore, a visual DRL navigation agent with multi-modal sensing can learn a better policy [121]. With multi-modal information, an artificial agent adapts well to dynamic and complex environments, which helps improve the generalization of the navigation model.…”
Section: Multi-modal Fusion
confidence: 99%
“…Sensor detection is undergoing a transition from an independent style to a cooperative style, and sensor networks have found an increasing number of applications in areas such as the Internet of Things [1], environmental monitoring [2], cooperative radar detection [3], and autonomous driving [4]. According to the type of communication topology, sensor networks are classified into centralized networks and distributed networks [5].…”
Section: Introduction
confidence: 99%